Couldn't find key etcd_endpoints in ConfigMap kube-system/calico-config

**1. What kops version are you running? The command kops version will display this information.** kops 1.12.1

2. What Kubernetes version are you running? kubectl version will print the version if a cluster is running or provide the Kubernetes version specified as a kops flag. Upgrading from v1.11.10 to 1.12.8

3. What cloud provider are you using? AWS

4. What commands did you run? What is the simplest way to reproduce this issue?

kops rolling-update cluster --cloudonly --master-interval=1s --node-interval=1s --yes

5. What happened after the commands executed?

master not healthy after update, stopping rolling-update: "error validating cluster after removing a node: cluster did not validate within a duration of "5m0s""

6. What did you expect to happen?

Validation to complete successfully

9. Anything else do we need to know?

I clearly messed up the upgrade from v1.11.10 to 1.12.8.

I originally ran:

   kops update...
   kops rolling-update cluster --yes

The above failed on the first master with: master not healthy after update, stopping rolling-update: "error validating cluster after removing a node: cluster did not validate within a duration of \"5m0s\""

Validation was failing because of the pod kube-system/calico-complete-upgrade-v331-mz6z9: kube-system pod "calico-complete-upgrade-v331-mz6z9" is pending

Warning Failed XXXXX Error: Couldn't find key etcd_endpoints in ConfigMap kube-system/calico-config
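A quick way to confirm what that pod is failing on, assuming normal kubectl access to the cluster, is to look at the ConfigMap keys and the pod events directly (the pod name below is taken from the validation output above; substitute your own):

    # List the keys actually present in the calico-config ConfigMap
    kubectl -n kube-system get configmap calico-config -o yaml

    # Show the events for the pending pod, including the Failed warning above
    kubectl -n kube-system describe pod calico-complete-upgrade-v331-mz6z9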

I then ran the following as per the official docs:

    kops rolling-update cluster --cloudonly --master-interval=1s --node-interval=1s --yes

This upgraded all the nodes, but validation still fails with the error above.
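For reference, validation can be re-run on demand, assuming the state store and cluster name are configured in the environment:

    # Re-run cluster validation manually to see the current failure
    kops validate cluster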

Can I terminate the master which originally failed?

Any help is appreciated.

kubernetes/kops

Answer from GMartinez-Sisti

I had to force a rollout of all the masters twice when upgrading from 1.11.9 to 1.12.9. This happened in two separate clusters, but the calico-complete-upgrade job failed in only one of them.

Both of them are working fine, and I can't find any evidence of problems, either with etcd 3.x or calico.
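A minimal sketch of what forcing a master rollout looks like, assuming kops 1.12 and a placeholder instance group name (substitute your own from kops get ig):

    # Force a rolling update of one master instance group, even if kops
    # detects no changes ("master-eu-west-1a" is a placeholder name)
    kops rolling-update cluster --instance-group master-eu-west-1a --force --yes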

