DNS record for public API address not updated

  1. What kops version are you running? The command kops version will display this information. Tried 1.9.0 and 1.9.1.

  2. What Kubernetes version are you running? kubectl version will print the version if a cluster is running, or provide the Kubernetes version specified as a kops flag. Tried 1.9.3 and 1.9.9.

  3. What cloud provider are you using? AWS

  4. What commands did you run? What is the simplest way to reproduce this issue? Create a cluster with --networking amazon-vpc-routed-eni.

  5. What happened after the commands executed? The cluster comes up, but the DNS record for the API is never updated. The records pointing to the internal IP addresses for the components are updated correctly.
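The reproduction from step 4 might look like the following sketch; the state store, cluster name, and zone are placeholders, not values from the original report:

```shell
# Hypothetical names; substitute your own state store, cluster name, and zone.
export KOPS_STATE_STORE=s3://example-kops-state
kops create cluster \
  --name cluster.example.com \
  --zones us-east-1a \
  --networking amazon-vpc-routed-eni \
  --yes
```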

In addition, kops validate also detects the problem:

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates:  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.
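A quick way to confirm the symptom the validate message describes is to compare what the API name resolves to against the placeholder address kops writes when it first creates the zone records (203.0.113.123). The domain and the hard-coded lookup result below are hypothetical stand-ins:

```shell
# kops initially points the API record at a fixed placeholder address.
PLACEHOLDER="203.0.113.123"
# On a real cluster you would resolve the name instead, e.g.:
#   RESOLVED="$(dig +short api.cluster.example.com)"
RESOLVED="203.0.113.123"   # stand-in value for this sketch
if [ "$RESOLVED" = "$PLACEHOLDER" ]; then
  echo "API record still points at the kops placeholder"
fi
```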
  1. What did you expect to happen? The API DNS record is updated to resolve to the public IP address of a master node.

  2. Please provide your cluster manifest. Execute kops get --name -o yaml to display your cluster manifest. You may want to remove your cluster name and other sensitive information.

  3. Please run the commands with most verbose logging by adding the -v 10 flag. Paste the logs into this report, or in a gist and provide the gist link here. kops create and kops update logs for a cluster with this problem:

  4. Anything else do we need to know? Tried with the following versions: kops 1.9 with Kubernetes 1.9.3; kops 1.9.1 with Kubernetes 1.9.3 and 1.9.9.

DNS records are updated fine when using --networking calico.
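For step 3 above, the verbose re-run would be along these lines; the cluster name is a placeholder:

```shell
# -v 10 enables the most verbose logging; substitute your real cluster name.
kops update cluster --name cluster.example.com -v 10 --yes
```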


Answer from tsuna:

/remove-lifecycle stale

