Kelsey Hightower (kelseyhightower), Google, Inc., Portland, OR

hashicorp/packer 9804

Packer is a tool for creating identical machine images for multiple platforms from a single source configuration.

hashicorp/nomad 5800

Nomad is an easy-to-use, flexible, and performant workload orchestrator that can deploy a mix of microservice, batch, containerized, and non-containerized applications. Nomad is easy to operate and scale and has native Consul and Vault integrations.

gregsramblings/google-cloud-4-words 2722

The Google Cloud Developer's Cheat Sheet

appc/spec 1249

App Container Specification and Tooling

bradfitz/talk-yapc-asia-2015 683

talk-yapc-asia-2015

containers/build 350

another build tool for container images

bketelsen/captainhook 293

A generic webhook endpoint that runs scripts based on the URL called

kelseyhightower/app 234

Example 12 Factor App

GoogleCloudPlatform/cloud-code-vscode 205

Cloud Code for Visual Studio Code: Issues, Documentation and more

appc/docker2aci 190

library and CLI tool to convert Docker images to ACIs

created branch kelseyhightower/hipaa

branch : master

created branch time in 7 days

created repository kelseyhightower/hipaa

Bad hipaa library

created time in 7 days

created tag kelseyhightower/run

tag v0.0.2

Cloud Run helpers

created time in 15 days

push event kelseyhightower/run

Kelsey Hightower

commit sha 774537192160c9c0285e90406cd74a50974c1115

document logging methods

view details

push time in 15 days

PublicEvent

created tag kelseyhightower/run

tag v0.0.1

Cloud Run helpers

created time in 15 days

push event kelseyhightower/run

Kelsey Hightower

commit sha 163178b373acceaee256a37a4b87aa09cdf15894

add docs

view details

push time in 15 days

push event kelseyhightower/nocode

Kelsey Hightower

commit sha 6c073b08f7987018cbb2cb9a5747c84913b3608e

add style guide

view details

push time in a month

PublicEvent

started tetratelabs/getenvoy

started time in 4 months

started google/ko

started time in 4 months

created tag kelseyhightower/jsonrpc

tag v0.0.1

created time in 4 months

delete tag kelseyhightower/jsonrpc

delete tag : 0.0.1

delete time in 4 months

push event kelseyhightower/jsonrpc

Kelsey Hightower

commit sha 80844151dc31274bc60f28a1f11fc978d3e2d3b7

add godoc link

view details

push time in 4 months

push event kelseyhightower/jsonrpc

Kelsey Hightower

commit sha 4aa33bd9c69db541325b27f6006391b98a04f141

add unit tests

view details

push time in 4 months

push event kelseyhightower/jsonrpc

Kelsey Hightower

commit sha 061fed093810ff2ae980861fa3d229cfb97e7c7f

init

view details

push time in 4 months

push event kelseyhightower/jsonrpc

Kelsey Hightower

commit sha 5ac06ee9530980c117963ce9ad6a4b1ae59267c1

init

view details

push time in 4 months

push event kelseyhightower/jsonrpc

Kelsey Hightower

commit sha d074d451e8de92076834e5de719e44e56eb9eb01

init

view details

push time in 4 months

created tag kelseyhightower/jsonrpc

tag 0.0.1

created time in 4 months

created branch kelseyhightower/jsonrpc

branch : master

created branch time in 4 months

created repository kelseyhightower/jsonrpc

created time in 4 months

issue comment kelseyhightower/kubernetes-the-hard-way

Proposal: Add translated list or contribution note about translation

This is a good idea. I need to think through how to ensure those translations are of good quality and that I'm not linking to something I would not approve of.

inductor

comment created time in 5 months

PR closed kelseyhightower/kubernetes-the-hard-way

Lecture Request

I'm loving the Certified Kubernetes Administrator (CKA) with Practice Tests. Wondering if you could add Helm?

+2029 -1615

1 comment

74 changed files

eoludotun

pr closed time in 5 months

pull request comment kelseyhightower/kubernetes-the-hard-way

Lecture Request

Thanks for the suggestion, but the goal is to keep this guide simple and focused. CKA is just out of scope for the goals of this guide.

eoludotun

comment created time in 5 months

created tag kelseyhightower/kubernetes-the-hard-way

tag 1.15.3

Bootstrap Kubernetes the hard way on Google Cloud Platform. No scripts.

created time in 5 months

delete tag kelseyhightower/kubernetes-the-hard-way

delete tag : 1.15.3

delete time in 5 months

pull request comment kelseyhightower/kubernetes-the-hard-way

FIX docs/08-bootstrapping-kubernetes-controllers.md

Per the docs:

File containing PEM-encoded x509 RSA or ECDSA private or public keys, used to verify ServiceAccount tokens. The specified file can contain multiple keys, and the flag can be specified multiple times with different files. If unspecified, --tls-private-key-file is used. Must be specified when --service-account-signing-key is provided

I'm going to stick with the public key as that's all we really need here.

jenciso

comment created time in 5 months

pull request comment kelseyhightower/kubernetes-the-hard-way

Document the 10.32.0.1 IP address

I've updated this section based on your feedback. I've also noted that the kubernetes internal DNS name will be linked to that IP address.

pdecat

comment created time in 5 months

push event kelseyhightower/kubernetes-the-hard-way

Kelsey Hightower

commit sha 5c462220b7f2c03b4b699e89680d0cc007a76f91

Update to Kubernetes 1.15.3

view details

push time in 5 months

push event kelseyhightower/kubernetes-the-hard-way

Kelsey Hightower

commit sha 3e1ee60a02cdb2d99a7e9531a60b257fcc1227cf

Update to Kubernetes 1.15.3

view details

push time in 5 months

issue closed kelseyhightower/kubernetes-the-hard-way

Using `tar -C` on a symbolic link may have dire results

Just a note on this section

It states

  sudo mv runsc-50c283b9f56bb7200938d9e207355f05f79f0d17 runsc
  sudo mv runc.amd64 runc
  chmod +x kubectl kube-proxy kubelet runc runsc
  sudo mv kubectl kube-proxy kubelet runc runsc /usr/local/bin/
  sudo tar -xvf crictl-v1.12.0-linux-amd64.tar.gz -C /usr/local/bin/
  sudo tar -xvf cni-plugins-amd64-v0.6.0.tgz -C /opt/cni/bin/
  sudo tar -xvf containerd-1.2.0-rc.0.linux-amd64.tar.gz -C /

Specifically, the command `sudo tar -xvf containerd-1.2.0-rc.0.linux-amd64.tar.gz -C /` may be an issue for people (like myself) running Fedora 28 and newer.

On Fedora /bin/ is a symbolic link

[root@jumpbox k8s-the-hard-way]# ll /bin
lrwxrwxrwx. 1 root root 7 Jul 12 21:48 /bin -> usr/bin

I had the issue of `sudo tar -xvf containerd-1.2.0-rc.0.linux-amd64.tar.gz -C /` replacing the symlink with a /bin dir contained in the tarball.

I know this was written for Ubuntu, but others following along could make use of a note.

closed time in 5 months

christianh814

issue comment kelseyhightower/kubernetes-the-hard-way

Using `tar -C` on a symbolic link may have dire results

This should now be fixed on master. I'm now extracting the binaries in a local directory then moving them under bin.

christianh814

comment created time in 5 months
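For illustration, the safe extraction pattern described in this fix can be sketched as a small sandbox demo. All file and directory names below are made up for the demo, not taken from the guide:

```shell
#!/bin/sh
set -eu
# Build a fake release tarball whose top level contains a bin/ directory,
# the same shape as the containerd archive, entirely inside a temp dir.
work=$(mktemp -d)
mkdir -p "$work/stage/bin" "$work/extract" "$work/target-bin"
echo 'fake-binary' > "$work/stage/bin/containerd"
tar -czf "$work/release.tar.gz" -C "$work/stage" bin
# Safe pattern: extract into a local scratch directory first...
tar -xzf "$work/release.tar.gz" -C "$work/extract"
# ...then move just the extracted files into the destination. Running
# `tar -C /` directly could replace a /bin -> usr/bin symlink with a real
# directory on distros with a merged /usr (e.g. Fedora 28+).
mv "$work/extract/bin/"* "$work/target-bin/"
ls "$work/target-bin"
```

On a real host the destination would be a bin directory and the `mv` would run under sudo; the point is that `tar` itself never writes through the symlinked path.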

PR closed kelseyhightower/kubernetes-the-hard-way

Update 09-bootstrapping-kubernetes-workers.md

The command `sudo tar -xvf containerd-1.1.0.linux-amd64.tar.gz -C /` overwrites the /bin/ directory. After a reboot the system doesn't boot correctly.

+2 -1

2 comments

1 changed file

ivanov-aleksander

pr closed time in 5 months

pull request comment kelseyhightower/kubernetes-the-hard-way

Update 09-bootstrapping-kubernetes-workers.md

Great suggestion. I've updated this section based on your feedback. I'm now extracting the binaries into a local directory and then moving them.

ivanov-aleksander

comment created time in 5 months

push event kelseyhightower/kubernetes-the-hard-way

Kelsey Hightower

commit sha 690ac48a630c658701f194f00f896340ca26e401

Update to Kubernetes 1.15.3

view details

push time in 5 months

PR closed kelseyhightower/kubernetes-the-hard-way

Do not overwrite /bin by destroying symlink

As per #434 when /bin/ is a symlink to another path (e.g. /usr/bin/) using tar -C / will overwrite the symlink and tank the whole system.

Un-tar the containerd binaries into the user's home folder and then sudo mv them into the correct location.

+3 -1

1 comment

1 changed file

matalo33

pr closed time in 5 months

pull request comment kelseyhightower/kubernetes-the-hard-way

Do not overwrite /bin by destroying symlink

Great suggestion. I've updated this section based on your feedback. I'm now extracting the binaries into a local directory and then moving them.

matalo33

comment created time in 5 months

push event kelseyhightower/kubernetes-the-hard-way

Kelsey Hightower

commit sha 58efebfa9bb492346175da2e6268a5c2f025962d

Update to Kubernetes 1.15.3

view details

push time in 5 months

issue comment kelseyhightower/kubernetes-the-hard-way

SSL extensions in cfssl kubernetes profiles

I'm sure with a little work we could make additional profiles for each component, but in an attempt to help simplify things I've chosen to keep things as simple as possible. If someone is willing to do the work regarding separate profiles I'd be happy to review and consider updating the guide.

schlitzered

comment created time in 5 months

issue closed kelseyhightower/kubernetes-the-hard-way

SSL extensions in cfssl kubernetes profiles

The kubernetes cfssl profile lists the following SSL extensions:

"signing", "key encipherment", "server auth", "client auth"

Are all extensions needed by all certificates? If not, could someone please explain which SSL extension is needed by which certificate type?

Kind Regards.

closed time in 5 months

schlitzered
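For reference, the cfssl signing profile the question refers to looks roughly like this (reproduced from memory of the guide, so treat the expiry value as illustrative): a single `kubernetes` profile carries all four usages, which is why one profile can serve both server and client certificates.

```shell
# Write the CA signing config used when generating certificates with cfssl.
cat > ca-config.json <<'EOF'
{
  "signing": {
    "default": { "expiry": "8760h" },
    "profiles": {
      "kubernetes": {
        "usages": ["signing", "key encipherment", "server auth", "client auth"],
        "expiry": "8760h"
      }
    }
  }
}
EOF
```

With this file in place, `cfssl gencert -config=ca-config.json -profile=kubernetes ...` signs certificates under that profile.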

issue closed kelseyhightower/kubernetes-the-hard-way

Why do we need the --cluster-signing-key-file flag?

Please can someone explain why we need to set the --cluster-signing-key-file flag in the kube-controller-manager.service file, when we are not using TLS bootstrapping?

I'm not sure why the kube-controller-manager has the CA private key, ca-key.pem, if it is not going to receive CSRs from kubelets.

closed time in 5 months

category

issue comment kelseyhightower/kubernetes-the-hard-way

Why do we need the --cluster-signing-key-file flag?

I enabled this flag to support generic certificate requests, not just kubelet bootstrapping.

category

comment created time in 5 months

pull request comment kelseyhightower/kubernetes-the-hard-way

Add alternative names for API Server

Thanks for the suggestion. Along with some other fixes I've added the additional hostnames.

sawlanipradeep

comment created time in 5 months

push event kelseyhightower/kubernetes-the-hard-way

Kelsey Hightower

commit sha e1c3599db915f8f4158267dfb9079368b71ae451

Update to Kubernetes 1.15.3

view details

push time in 5 months

issue closed kelseyhightower/kubernetes-the-hard-way

cfssljson supports -version (now)

https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/02-client-tools.md

The cfssljson command line utility does not provide a way to print its version.

This is not true in at least 1.3.2, as you can run `cfssljson -version`. I'll have a look later and update this. I assume it should also be 1.2.0+.

closed time in 5 months

Code0x58

issue comment kelseyhightower/kubernetes-the-hard-way

cfssljson supports -version (now)

Thanks for the suggestion but at this point I'm going to build and host my own cfssl binaries and link to upstream for people who wish to build their own. I'm now using 1.3.4 and show people how to print the version for both binaries.

Code0x58

comment created time in 5 months

PR closed kelseyhightower/kubernetes-the-hard-way

Update 02-client-tools.md

The newest version of cfssljson now provides a way to get version info

+13 -1

1 comment

1 changed file

gamename

pr closed time in 5 months

pull request comment kelseyhightower/kubernetes-the-hard-way

Update 02-client-tools.md

Thanks for the suggestion but at this point I'm going to build and host my own cfssl binaries and link to upstream for people who wish to build their own. I've also updated the docs to show how to print the version for both binaries.

gamename

comment created time in 5 months

PR closed kelseyhightower/kubernetes-the-hard-way

Add checksum validation for cfssl cfssljson

Add commands to execute checksum for cfssl_linux-amd64 cfssljson_linux-amd64 for Mac and Linux.

+12 -0

1 comment

1 changed file

igorsobot

pr closed time in 5 months

pull request comment kelseyhightower/kubernetes-the-hard-way

Add checksum validation for cfssl cfssljson

Thanks for the suggestion but at this point I'm going to build and host my own cfssl binaries and link to upstream for people who wish to build their own.

igorsobot

comment created time in 5 months
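The validation the closed PR proposed follows the standard sha256sum pattern. This sketch uses a locally created stand-in file rather than the real cfssl_linux-amd64 download:

```shell
#!/bin/sh
set -eu
# Stand-in for the downloaded binary (in practice: cfssl_linux-amd64 from
# the release page, plus a digest published alongside it).
printf 'example cfssl binary contents' > cfssl_linux-amd64
# Publisher side: record the SHA-256 digest of the file.
sha256sum cfssl_linux-amd64 > cfssl_linux-amd64.sha256
# Downloader side: verify the file against the digest before chmod +x;
# sha256sum -c exits non-zero on a mismatch.
sha256sum -c cfssl_linux-amd64.sha256
```

On macOS, `shasum -a 256` plays the same role as `sha256sum`.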

push event kelseyhightower/kubernetes-the-hard-way

Kelsey Hightower

commit sha d416e32de7ad2885ba4a6807d59c465c589dc8de

Update to Kubernetes 1.15.3

view details

push time in 5 months

PR closed kelseyhightower/kubernetes-the-hard-way

Running kubelet fails when swap is on

When running on a fresh Ubuntu 16.04 Linux image, swap is enabled, and this prevents the kubelet from starting.

+1 -0

2 comments

1 changed file

sjors101

pr closed time in 5 months

pull request comment kelseyhightower/kubernetes-the-hard-way

Running kubelet fails when swap is on

Instead of enabling this flag I've added documentation regarding how to disable swap and why the kubelet fails to start when swap is enabled.

sjors101

comment created time in 5 months
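A sketch of what disabling swap typically involves on Ubuntu 16.04 (the exact wording in the updated docs may differ). It is shown here against a local copy of fstab so the sed expression is easy to demonstrate; on a real host you would target /etc/fstab with sudo and also run `sudo swapoff -a` to turn swap off immediately. The kubelet refuses to start with swap enabled unless `--fail-swap-on=false` is set.

```shell
#!/bin/sh
set -eu
# Sample fstab with one swap entry (contents made up for the demo).
cat > fstab.example <<'EOF'
UUID=abcd-1234 /        ext4 errors=remount-ro 0 1
/swapfile      none     swap sw                0 0
EOF
# Comment out any swap entries so swap stays disabled after a reboot.
sed '/[[:space:]]swap[[:space:]]/ s/^/#/' fstab.example > fstab.new
cat fstab.new
```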

push event kelseyhightower/kubernetes-the-hard-way

Kelsey Hightower

commit sha fc96ece3a465d558f79cd85ef7fa7ac380f8622f

Update to Kubernetes 1.15.3

view details

push time in 5 months

issue closed kelseyhightower/kubernetes-the-hard-way

Unable to connect to the server: dial tcp: lookup https on 127.0.0.53:53

Hi, I'm currently following KTHW, but I'm stuck here: https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/08-bootstrapping-kubernetes-controllers.md

If I run kubectl get componentstatuses --kubeconfig admin.kubeconfig on controller1, it gives me:

Unable to connect to the server: dial tcp: lookup https on 127.0.0.53:53: server misbehaving

I'm not getting any output from controller2 whatsoever.

Here are the systemctl statuses for controllers 1 & 2:

host1@host1:~$ sudo systemctl status kube-apiserver.service kube-controller-manager.service kube-scheduler.service etcd.service
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/etc/systemd/system/kube-apiserver.service; enabled; vendor preset: enabled)
   Active: active (running) since Sat 2019-03-30 06:02:06 UTC; 41min ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 19056 (kube-apiserver)
    Tasks: 8 (limit: 505)
   CGroup: /system.slice/kube-apiserver.service
           └─19056 /usr/local/bin/kube-apiserver --advertise-address=192.168.0.51 --allow-privileged=true --apiserver-count=3 --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxsize=100 --audit-log-pat

Mar 30 06:43:05 host1 kube-apiserver[19056]: Trace[1586471569]: [284.749701ms] [284.749701ms] About to convert to expected version
Mar 30 06:43:05 host1 kube-apiserver[19056]: Trace[1586471569]: [1.394079475s] [1.109218446s] Object stored in database
Mar 30 06:43:05 host1 kube-apiserver[19056]: I0330 06:43:05.931374   19056 trace.go:76] Trace[1564652250]: "GuaranteedUpdate etcd3: *apiregistration.APIService" (started: 2019-03-30 06:43:02.435479491 +0000 UTC 
Mar 30 06:43:05 host1 kube-apiserver[19056]: Trace[1564652250]: [643.106159ms] [643.106159ms] initial value restored
Mar 30 06:43:05 host1 kube-apiserver[19056]: Trace[1564652250]: [3.286391109s] [2.64328495s] END
Mar 30 06:43:05 host1 kube-apiserver[19056]: I0330 06:43:05.931568   19056 trace.go:76] Trace[252937668]: "Update /apis/apiregistration.k8s.io/v1/apiservices/v1.autoscaling/status" (started: 2019-03-30 06:43:01.
Mar 30 06:43:05 host1 kube-apiserver[19056]: Trace[252937668]: [449.605228ms] [449.605228ms] About to convert to expected version
Mar 30 06:43:05 host1 kube-apiserver[19056]: Trace[252937668]: [4.626887082s] [4.171128177s] Object stored in database
Mar 30 06:43:06 host1 kube-apiserver[19056]: I0330 06:43:06.099588   19056 trace.go:76] Trace[813819929]: "Get /api/v1/namespaces/default/services/kubernetes" (started: 2019-03-30 06:43:05.200067666 +0000 UTC m=
Mar 30 06:43:06 host1 kube-apiserver[19056]: Trace[813819929]: [899.169135ms] [899.116557ms] About to write a response

● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/etc/systemd/system/kube-controller-manager.service; enabled; vendor preset: enabled)
   Active: active (running) since Sat 2019-03-30 05:08:34 UTC; 1h 34min ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 894 (kube-controller)
    Tasks: 6 (limit: 505)
   CGroup: /system.slice/kube-controller-manager.service
           └─894 /usr/local/bin/kube-controller-manager --address=0.0.0.0 --cluster-cidr=10.200.0.0/16 --cluster-name=kubernetes --cluster-signing-cert-file=/var/lib/kubernetes/ca.pem --cluster-signing-key-file=

Mar 30 06:42:30 host1 kube-controller-manager[894]: E0330 06:42:30.944749     894 leaderelection.go:252] error retrieving resource lock kube-system/kube-controller-manager: Get https://https/127.0.0.1:6443:6443/
Mar 30 06:42:35 host1 kube-controller-manager[894]: E0330 06:42:35.674571     894 leaderelection.go:252] error retrieving resource lock kube-system/kube-controller-manager: Get https://https/127.0.0.1:6443:6443/
Mar 30 06:42:38 host1 kube-controller-manager[894]: E0330 06:42:38.533139     894 leaderelection.go:252] error retrieving resource lock kube-system/kube-controller-manager: Get https://https/127.0.0.1:6443:6443/
Mar 30 06:42:42 host1 kube-controller-manager[894]: E0330 06:42:42.216497     894 leaderelection.go:252] error retrieving resource lock kube-system/kube-controller-manager: Get https://https/127.0.0.1:6443:6443/
Mar 30 06:42:45 host1 kube-controller-manager[894]: E0330 06:42:45.326778     894 leaderelection.go:252] error retrieving resource lock kube-system/kube-controller-manager: Get https://https/127.0.0.1:6443:6443/
Mar 30 06:42:48 host1 kube-controller-manager[894]: E0330 06:42:48.378743     894 leaderelection.go:252] error retrieving resource lock kube-system/kube-controller-manager: Get https://https/127.0.0.1:6443:6443/
Mar 30 06:42:53 host1 kube-controller-manager[894]: E0330 06:42:53.041259     894 leaderelection.go:252] error retrieving resource lock kube-system/kube-controller-manager: Get https://https/127.0.0.1:6443:6443/
Mar 30 06:42:57 host1 kube-controller-manager[894]: E0330 06:42:57.827988     894 leaderelection.go:252] error retrieving resource lock kube-system/kube-controller-manager: Get https://https/127.0.0.1:6443:6443/
Mar 30 06:43:02 host1 kube-controller-manager[894]: E0330 06:43:02.847398     894 leaderelection.go:252] error retrieving resource lock kube-system/kube-controller-manager: Get https://https/127.0.0.1:6443:6443/
Mar 30 06:43:06 host1 kube-controller-manager[894]: E0330 06:43:06.613272     894 leaderelection.go:252] error retrieving resource lock kube-system/kube-controller-manager: Get https://https/127.0.0.1:6443:6443/

● kube-scheduler.service - Kubernetes Scheduler
   Loaded: loaded (/etc/systemd/system/kube-scheduler.service; enabled; vendor preset: enabled)
   Active: active (running) since Sat 2019-03-30 05:08:35 UTC; 1h 34min ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 1019 (kube-scheduler)
    Tasks: 8 (limit: 505)
   CGroup: /system.slice/kube-scheduler.service
           └─1019 /usr/local/bin/kube-scheduler --config=/etc/kubernetes/config/kube-scheduler.yaml --v=2

Mar 30 06:43:06 host1 kube-scheduler[1019]: E0330 06:43:05.993183    1019 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.StatefulSet: Get https://https/127.0.0.1:6443:6443/apis/a
Mar 30 06:43:07 host1 kube-scheduler[1019]: E0330 06:43:07.584782    1019 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.ReplicationController: Get https://https/127.0.0.1:6443:6
Mar 30 06:43:07 host1 kube-scheduler[1019]: E0330 06:43:07.586142    1019 reflector.go:134] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:178: Failed to list *v1.Pod: Get https://https/127.0.0.1:6443:6443/a
Mar 30 06:43:07 host1 kube-scheduler[1019]: E0330 06:43:07.586394    1019 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.Service: Get https://https/127.0.0.1:6443:6443/api/v1/ser
Mar 30 06:43:07 host1 kube-scheduler[1019]: E0330 06:43:07.586585    1019 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.StorageClass: Get https://https/127.0.0.1:6443:6443/apis/
Mar 30 06:43:07 host1 kube-scheduler[1019]: E0330 06:43:07.586628    1019 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.StatefulSet: Get https://https/127.0.0.1:6443:6443/apis/a
Mar 30 06:43:07 host1 kube-scheduler[1019]: E0330 06:43:07.586764    1019 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.PersistentVolumeClaim: Get https://https/127.0.0.1:6443:6
Mar 30 06:43:07 host1 kube-scheduler[1019]: E0330 06:43:07.586906    1019 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1beta1.PodDisruptionBudget: Get https://https/127.0.0.1:644
Mar 30 06:43:07 host1 kube-scheduler[1019]: E0330 06:43:07.586948    1019 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.Node: Get https://https/127.0.0.1:6443:6443/api/v1/nodes?
Mar 30 06:43:07 host1 kube-scheduler[1019]: E0330 06:43:07.587140    1019 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.ReplicaSet: Get https://https/127.0.0.1:6443:6443/apis/ap
Mar 30 06:43:07 host1 kube-scheduler[1019]: E0330 06:43:07.587530    1019 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.PersistentVolume: Get https://https/127.0.0.1:6443:6443/a

● etcd.service - etcd
   Loaded: loaded (/etc/systemd/system/etcd.service; enabled; vendor preset: enabled)
   Active: active (running) since Sat 2019-03-30 05:08:34 UTC; 1h 34min ago
     Docs: https://github.com/coreos
 Main PID: 895 (etcd)
    Tasks: 19 (limit: 505)
   CGroup: /system.slice/etcd.service
           └─895 /usr/local/bin/etcd --name host1 --cert-file=/etc/etcd/kubernetes.pem --key-file=/etc/etcd/kubernetes-key.pem --peer-cert-file=/etc/etcd/kubernetes.pem --peer-key-file=/etc/etcd/kubernetes-key.p

Mar 30 06:43:00 host1 etcd[895]: read-only range request "key:\"/registry/namespaces/default\" " took too long (6.094537828s) to execute
Mar 30 06:43:00 host1 etcd[895]: read-only range request "key:\"/registry/horizontalpodautoscalers\" range_end:\"/registry/horizontalpodautoscalert\" count_only:true " took too long (340.625664ms) to execute
Mar 30 06:43:00 host1 etcd[895]: read-only range request "key:\"/registry/controllers\" range_end:\"/registry/controllert\" count_only:true " took too long (341.497314ms) to execute
Mar 30 06:43:00 host1 etcd[895]: read-only range request "key:\"/registry/storageclasses\" range_end:\"/registry/storageclasset\" count_only:true " took too long (5.129414869s) to execute
Mar 30 06:43:00 host1 etcd[895]: read-only range request "key:\"/registry/persistentvolumeclaims\" range_end:\"/registry/persistentvolumeclaimt\" count_only:true " took too long (5.800977487s) to execute
Mar 30 06:43:02 host1 etcd[895]: read-only range request "key:\"/registry/namespaces/kube-public\" " took too long (178.119829ms) to execute
Mar 30 06:43:02 host1 etcd[895]: read-only range request "key:\"/registry/namespaces/default\" " took too long (151.27636ms) to execute
Mar 30 06:43:03 host1 etcd[895]: request "header:<ID:5643689612246479706 username:\"kubernetes\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.0.52\" mod_revision:1242 > success
Mar 30 06:43:03 host1 etcd[895]: read-only range request "key:\"/registry/rolebindings\" range_end:\"/registry/rolebindingt\" count_only:true " took too long (323.650716ms) to execute
Mar 30 06:43:04 host1 etcd[895]: read-only range request "key:\"/registry/events\" range_end:\"/registry/eventt\" count_only:true " took too long (666.331566ms) to execute
lines 27-80/80 (END)

sudo systemctl status kube-apiserver.service kube-controller-manager.service kube-scheduler.service etcd.service 


● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/etc/systemd/system/kube-apiserver.service; enabled; vendor p
   Active: active (running) since Sat 2019-03-30 05:10:29 UTC; 1h 28min ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 1671 (kube-apiserver)
    Tasks: 10 (limit: 505)
   CGroup: /system.slice/kube-apiserver.service
           └─1671 /usr/local/bin/kube-apiserver --advertise-address=192.168.0.52

Mar 30 06:37:30 host2 kube-apiserver[1671]: Trace[458478933]: [585.95922ms] [585
Mar 30 06:37:31 host2 kube-apiserver[1671]: W0330 06:37:31.020036    1671 lease.
Mar 30 06:37:41 host2 kube-apiserver[1671]: I0330 06:37:41.719481    1671 trace.
Mar 30 06:37:41 host2 kube-apiserver[1671]: Trace[824794083]: [530.446218ms] [53
Mar 30 06:37:44 host2 kube-apiserver[1671]: I0330 06:37:44.023672    1671 trace.
Mar 30 06:37:44 host2 kube-apiserver[1671]: Trace[1331404827]: [555.699335ms] [5
Mar 30 06:37:53 host2 kube-apiserver[1671]: I0330 06:37:53.418320    1671 trace.
Mar 30 06:37:53 host2 kube-apiserver[1671]: Trace[2080092854]: [1.324621212s] [1
Mar 30 06:37:53 host2 kube-apiserver[1671]: I0330 06:37:53.537091    1671 trace.
Mar 30 06:37:53 host2 kube-apiserver[1671]: Trace[1561288346]: [9.509692674s] [9

● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/etc/systemd/system/kube-controller-manager.service; enabled;
   Active: active (running) since Sat 2019-03-30 05:08:37 UTC; 1h 30min ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 775 (kube-controller)
    Tasks: 6 (limit: 505)
   CGroup: /system.slice/kube-controller-manager.service
           └─775 /usr/local/bin/kube-controller-manager --address=0.0.0.0 --clus

Mar 30 06:37:32 host2 kube-controller-manager[775]: E0330 06:37:32.377336     77
Mar 30 06:37:36 host2 kube-controller-manager[775]: E0330 06:37:36.059263     77
Mar 30 06:37:46 host2 kube-controller-manager[775]: E0330 06:37:46.479484     77
Mar 30 06:37:52 host2 kube-controller-manager[775]: E0330 06:37:52.869964     77
Mar 30 06:38:06 host2 kube-controller-manager[775]: E0330 06:38:06.790355     77
Mar 30 06:38:20 host2 kube-controller-manager[775]: E0330 06:38:20.463135     77
Mar 30 06:38:34 host2 kube-controller-manager[775]: E0330 06:38:34.646368     77
Mar 30 06:38:47 host2 kube-controller-manager[775]: E0330 06:38:47.953633     77
Mar 30 06:39:01 host2 kube-controller-manager[775]: E0330 06:39:01.930054     77
Mar 30 06:39:15 host2 kube-controller-manager[775]: E0330 06:39:15.947474     77

● kube-scheduler.service - Kubernetes Scheduler
   Loaded: loaded (/etc/systemd/system/kube-scheduler.service; enabled; vendor p
   Active: active (running) since Sat 2019-03-30 05:08:37 UTC; 1h 30min ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 797 (kube-scheduler)
    Tasks: 8 (limit: 505)
   CGroup: /system.slice/kube-scheduler.service
           └─797 /usr/local/bin/kube-scheduler --config=/etc/kubernetes/config/k

Mar 30 06:39:20 host2 kube-scheduler[797]: E0330 06:39:20.165311     797 reflect
Mar 30 06:39:20 host2 kube-scheduler[797]: E0330 06:39:20.165407     797 reflect
Mar 30 06:39:20 host2 kube-scheduler[797]: E0330 06:39:20.165480     797 reflect
Mar 30 06:39:20 host2 kube-scheduler[797]: E0330 06:39:20.165533     797 reflect
Mar 30 06:39:20 host2 kube-scheduler[797]: E0330 06:39:20.165582     797 reflect
Mar 30 06:39:20 host2 kube-scheduler[797]: E0330 06:39:20.165629     797 reflect
Mar 30 06:39:20 host2 kube-scheduler[797]: E0330 06:39:20.165736     797 reflect
Mar 30 06:39:20 host2 kube-scheduler[797]: E0330 06:39:20.165796     797 reflect
Mar 30 06:39:20 host2 kube-scheduler[797]: E0330 06:39:20.165845     797 reflect
Mar 30 06:39:20 host2 kube-scheduler[797]: E0330 06:39:20.165894     797 reflect

● etcd.service - etcd
   Loaded: loaded (/etc/systemd/system/etcd.service; enabled; vendor preset: ena
   Active: active (running) since Sat 2019-03-30 05:08:38 UTC; 1h 30min ago
     Docs: https://github.com/coreos
 Main PID: 829 (etcd)
    Tasks: 8 (limit: 505)
   CGroup: /system.slice/etcd.service
           └─829 /usr/local/bin/etcd --name host2 --cert-file=/etc/etcd/kubernet

Mar 30 06:38:24 host2 etcd[829]: read-only range request "key:\"/registry/apireg
Mar 30 06:38:24 host2 etcd[829]: read-only range request "key:\"/registry/apireg
Mar 30 06:38:31 host2 etcd[829]: read-only range request "key:\"/registry/poddis
Mar 30 06:38:35 host2 etcd[829]: read-only range request "key:\"/registry/servic
Mar 30 06:38:35 host2 etcd[829]: read-only range request "key:\"/registry/master
Mar 30 06:38:45 host2 etcd[829]: read-only range request "key:\"/registry/servic
Mar 30 06:39:16 host2 etcd[829]: read-only range request "key:\"/registry/master
Mar 30 06:39:16 host2 etcd[829]: read-only range request "key:\"/registry/apireg
Mar 30 06:39:17 host2 etcd[829]: read-only range request "key:\"/registry/apireg
Mar 30 06:39:17 host2 etcd[829]: read-only range request "key:\"/registry/apireg
lines 57-79/79 (END)

Here are the systemd files:

Controller 1

host1@host1:~$ cat /etc/systemd/system/kube-apiserver.service 
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
  --advertise-address=192.168.0.51 \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/audit.log \
  --authorization-mode=Node,RBAC \
  --bind-address=0.0.0.0 \
  --client-ca-file=/var/lib/kubernetes/ca.pem \
  --enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --enable-swagger-ui=true \
  --etcd-cafile=/var/lib/kubernetes/ca.pem \
  --etcd-certfile=/var/lib/kubernetes/kubernetes.pem \
  --etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \
  --etcd-servers=https://192.168.0.51:2379,https://192.168.0.52:2379 \
  --event-ttl=1h \
  --experimental-encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \
  --kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \
  --kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \
  --kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \
  --kubelet-https=true \
  --runtime-config=api/all \
  --service-account-key-file=/var/lib/kubernetes/service-account.pem \
  --service-cluster-ip-range=10.32.0.0/24 \
  --service-node-port-range=30000-32767 \
  --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \
  --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \
  --v=2
  --resolv-conf=/run/systemd/resolve/resolv.conf
  --kubelet-preferred-address-types=InternalIP, ExternalIP, Hostname
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
host1@host1:~$ cat /etc/systemd/system/kube-scheduler.service 
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-scheduler \
  --config=/etc/kubernetes/config/kube-scheduler.yaml \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
host1@host1:~$ cat /etc/systemd/system/kube-controller-manager.service 
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
  --address=0.0.0.0 \
  --cluster-cidr=10.200.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \
  --cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \
  --kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \
  --leader-elect=true \
  --root-ca-file=/var/lib/kubernetes/ca.pem \
  --service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \
  --service-cluster-ip-range=10.32.0.0/24 \
  --use-service-account-credentials=true \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

Controller 2:

host2@host2:~$ cat /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
  --advertise-address=192.168.0.52 \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/audit.log \
  --authorization-mode=Node,RBAC \
  --bind-address=0.0.0.0 \
  --client-ca-file=/var/lib/kubernetes/ca.pem \
  --enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --enable-swagger-ui=true \
  --etcd-cafile=/var/lib/kubernetes/ca.pem \
  --etcd-certfile=/var/lib/kubernetes/kubernetes.pem \
  --etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \
  --etcd-servers=https://192.168.0.51:2379,https://192.168.0.52:2379 \
  --event-ttl=1h \
  --experimental-encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \
  --kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \
  --kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \
  --kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \
  --kubelet-https=true \
  --runtime-config=api/all \
  --service-account-key-file=/var/lib/kubernetes/service-account.pem \
  --service-cluster-ip-range=10.32.0.0/24 \
  --service-node-port-range=30000-32767 \
  --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \
  --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \
  --kubelet-preferred-address-types=InternalIP,InternalDNS,Hostname,ExternalIP,ExternalDNS \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
host2@host2:~$ cat /etc/systemd/system/kube-scheduler.service 
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-scheduler \
  --config=/etc/kubernetes/config/kube-scheduler.yaml \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
host2@host2:~$ 
host2@host2:~$ cat /etc/systemd/system/kube-controller-manager.service 
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
  --address=0.0.0.0 \
  --cluster-cidr=10.200.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \
  --cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \
  --kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \
  --leader-elect=true \
  --root-ca-file=/var/lib/kubernetes/ca.pem \
  --service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \
  --service-cluster-ip-range=10.32.0.0/24 \
  --use-service-account-credentials=true \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
host2@host2:~$ 

/etc/hosts

host2@host2:~$ cat /etc/hosts
127.0.0.1	localhost.localdomain	localhost
::1		localhost6.localdomain6	localhost6
192.168.0.51    host1
192.168.0.52    host2
192.168.0.53    host3
192.168.0.54    host4

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
host2@host2:~$ 
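Each controller resolves its peers through the static /etc/hosts entries above, so a name that maps to the wrong IP silently misroutes traffic. A small sketch of checking what a hosts-format file says for a given name, using a copy of the entries rather than the live /etc/hosts (on a real machine `getent hosts host1` performs the equivalent lookup):

```shell
#!/bin/sh
# Sketch: pull the IP for a hostname out of a hosts-format file.
# lookup() is a hypothetical helper; the sample mirrors the entries above.
lookup() {
  # Skip comment lines; print column 1 (the IP) for any line whose
  # name columns contain the requested hostname.
  awk -v name="$2" '$1 !~ /^#/ { for (i = 2; i <= NF; i++) if ($i == name) print $1 }' "$1"
}

cat > /tmp/hosts.sample <<'EOF'
127.0.0.1   localhost.localdomain   localhost
192.168.0.51    host1
192.168.0.52    host2
EOF

lookup /tmp/hosts.sample host1   # prints 192.168.0.51
rm -f /tmp/hosts.sample
```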

closed time in 5 months

joydeep1701

issue comment kelseyhightower/kubernetes-the-hard-way

Unable to connect to the server: dial tcp: lookup https on 127.0.0.53:53

Looks like you are attempting to follow this guide on a different set of machines and/or OS. I'm not sure what's going on, but I don't have the bandwidth to troubleshoot anything outside of what's documented here.

joydeep1701

comment created time in 5 months

issue closed kelseyhightower/kubernetes-the-hard-way

Translation into Japanese

Hi, I'm Sho from Japan. Would you mind if I translate this repo into Japanese?

closed time in 5 months

sh-tatsuno

issue comment kelseyhightower/kubernetes-the-hard-way

Translation into Japanese

I've updated the copyright on the repo and adopted a Creative Commons license, which grants you permission to do the translation as long as you share the content under a similar license and do not use it commercially without prior approval.

sh-tatsuno

comment created time in 5 months

push event kelseyhightower/kubernetes-the-hard-way

Kelsey Hightower

commit sha 32631f83d7732e9be8353b423f3b7ab0de5eef5d

Update to Kubernetes 1.15.3

view details

push time in 5 months

push event kelseyhightower/kubernetes-the-hard-way

Kelsey Hightower

commit sha c35e1fc446204c56d35f0307441e8c971e6b537d

Update to Kubernetes 1.15.3

view details

push time in 5 months

push event kelseyhightower/kubernetes-the-hard-way

Kelsey Hightower

commit sha b1052893a1a1a9e2dbbd143b3354850abfc1f53e

Update to Kubernetes 1.15.3

view details

push time in 5 months

issue closed kelseyhightower/kubernetes-the-hard-way

Unclear if RBAC for Kubelet Authorization needs to be run on all controllers

Most controller/worker commands must be executed on every instance, and this is often explicitly mentioned:

The commands in this lab must be run on each controller instance: controller-0, controller-1, and controller-2. Login to each controller instance using the gcloud command. Example:

However, the section on RBAC for Kubelet Authorization only mentions controller-0 and doesn't give any indication that this must be run on each controller instance.

gcloud compute ssh controller-0

https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/08-bootstrapping-kubernetes-controllers.md#rbac-for-kubelet-authorization

If setting up RBAC is actually different from the rest, this should be explicitly mentioned; otherwise, if this command needs to be run across all the controllers, then that should be mentioned. I'd be happy to put together a pull request for this.

closed time in 5 months

irbull

issue comment kelseyhightower/kubernetes-the-hard-way

Unclear if RBAC for Kubelet Authorization needs to be run on all controllers

Thanks for the suggestion. I've added clarifying comments to the section. This is now fixed on master.

irbull

comment created time in 5 months

push event kelseyhightower/kubernetes-the-hard-way

Kelsey Hightower

commit sha 221c4101dcf69cf06ab694274ad3bb421bd3516b

Update to Kubernetes 1.15.3

view details

push time in 5 months

PR closed kelseyhightower/kubernetes-the-hard-way

Add "Type=notify" on "etcd.service" file

If it is not set, the log can show the message "forgot to set Type=notify in systemd service file?".

+1 -0

1 comment

1 changed file

blackfoxsar

pr closed time in 5 months

pull request comment kelseyhightower/kubernetes-the-hard-way

Add "Type=notify" on "etcd.service" file

Fixed on master.

blackfoxsar

comment created time in 5 months

PR closed kelseyhightower/kubernetes-the-hard-way

Update path to ca.pem

The desired file is in the /var/lib/kubernetes directory

+1 -1

1 comment

1 changed file

haywoood

pr closed time in 5 months

pull request comment kelseyhightower/kubernetes-the-hard-way

Update path to ca.pem

The command in the guide is correct. I've added some comments that inform the reader that the commands should be run from their local machine and not one of the controller nodes.

haywoood

comment created time in 5 months

push event kelseyhightower/kubernetes-the-hard-way

Kelsey Hightower

commit sha c0f3178f82925709b2f97977ca479e56a26808eb

Update to Kubernetes 1.15.3

view details

push time in 5 months

PR closed kelseyhightower/kubernetes-the-hard-way

Update 05-kubernetes-configuration-files.md

Mention that these steps cannot be run from an arbitrary directory.

Since step 4 of the tutorial created a lot of files, I was considering running step 5 from within a different directory. That didn't work too well.

lt-dellh8s6vz1:kthw-5 cgeretz$ for instance in worker-0 worker-1 worker-2; do
>   kubectl config set-cluster kubernetes-the-hard-way \
>     --certificate-authority=ca.pem \
>     --embed-certs=true \
>     --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
>     --kubeconfig=${instance}.kubeconfig
> 
>   kubectl config set-credentials system:node:${instance} \
>     --client-certificate=${instance}.pem \
>     --client-key=${instance}-key.pem \
>     --embed-certs=true \
>     --kubeconfig=${instance}.kubeconfig
> 
>   kubectl config set-context default \
>     --cluster=kubernetes-the-hard-way \
>     --user=system:node:${instance} \
>     --kubeconfig=${instance}.kubeconfig
> 
>   kubectl config use-context default --kubeconfig=${instance}.kubeconfig
> done
error: could not read certificate-authority data from ca.pem: open ca.pem: no such file or directory
error: error reading client-certificate data from worker-0.pem: open worker-0.pem: no such file or directory
Context "default" created.
Switched to context "default".
error: could not read certificate-authority data from ca.pem: open ca.pem: no such file or directory
error: error reading client-certificate data from worker-1.pem: open worker-1.pem: no such file or directory
Context "default" created.
Switched to context "default".
error: could not read certificate-authority data from ca.pem: open ca.pem: no such file or directory
error: error reading client-certificate data from worker-2.pem: open worker-2.pem: no such file or directory
Context "default" created.
Switched to context "default".
+2 -0

1 comment

1 changed file

geretz

pr closed time in 5 months

pull request comment kelseyhightower/kubernetes-the-hard-way

Update 05-kubernetes-configuration-files.md

Thanks for the suggestion. I've added additional comments in that section based on your feedback.

geretz

comment created time in 5 months
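The "no such file or directory" errors in the transcript above come from running the kubeconfig loop outside the directory that holds the PEM files generated in the previous lab. One way to make that failure mode loud is a guard before the loop; this is a hypothetical helper sketched around the file names the guide uses, not part of the tutorial itself:

```shell
#!/bin/sh
# Sketch: fail fast if the certs from the previous lab are missing from the
# current directory. require_certs() is a hypothetical helper; the file
# names (ca.pem, worker-0.pem, ...) follow the guide's conventions.
require_certs() {
  for f in "$@"; do
    if [ ! -f "$f" ]; then
      echo "missing $f: run this from the directory containing the certificates" >&2
      return 1
    fi
  done
}

# Example guard before the kubeconfig generation loop:
require_certs ca.pem worker-0.pem worker-0-key.pem || echo "aborting"
```

With the guard in place, a wrong working directory produces one clear message instead of a partial kubeconfig with embedded-cert errors for every worker.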

push event kelseyhightower/kubernetes-the-hard-way

Kelsey Hightower

commit sha 28bb380aa5e8e5b5903ce07c90fcf572b81da56b

Update to Kubernetes 1.15.3

view details

push time in 5 months

PR closed kelseyhightower/kubernetes-the-hard-way

Remove wrapping curly brackets

The brackets prevented the commands from being copied into a terminal.

Thanks for such a great tutorial.

+11 -43

3 comments

2 changed files

trshafer

pr closed time in 5 months

pull request comment kelseyhightower/kubernetes-the-hard-way

Remove wrapping curly brackets

The wrapping works for me in bash on macOS and Linux. It's there to cut down on the number of commands people need to copy and paste.

trshafer

comment created time in 5 months
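The wrapping in question is a bash brace group: `{ ...; }` runs the enclosed commands in the current shell, and when pasted into a terminal nothing executes until the closing `}` arrives, so a multi-command snippet behaves as a single unit. A minimal sketch (the variable and message are placeholders, not commands from the guide):

```shell
# A brace group pastes as one unit: the shell waits for the closing "}"
# before running anything, and variables set inside remain in scope.
{
  KUBERNETES_VERSION=v1.15.3   # placeholder value for illustration
  echo "downloading kubectl ${KUBERNETES_VERSION}"
}
```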

PR closed kelseyhightower/kubernetes-the-hard-way

Update 01-prerequisites.md

I'd like to propose this change because when I was first attempting to get tmux up and running, I thought I was supposed to type "synchronize-panes: " and then hit ctrl+b and shift and :. I've never used tmux, though I have used screen, and once I realized they are essentially the same thing I felt much more comfortable with everything; still, it felt bad failing at something before I even started, and since the feature proves invaluable later in the tutorial I would hate for someone else to go through the same thing. I was thinking an image of how the screen changes, showing what it looks like when you've successfully completed the steps, might be helpful. Something like: https://photos.app.goo.gl/buC41xzCEcQeDqSR8

The "shift+:" change I'm a little less committed to; my thinking here was that it didn't register in my head as a key sequence and looked more like something I was supposed to write out.

+2 -2

1 comment

1 changed file

mikenadal

pr closed time in 5 months

pull request comment kelseyhightower/kubernetes-the-hard-way

Update 01-prerequisites.md

I've fixed the highlighting based on your suggestions.

mikenadal

comment created time in 5 months

PR closed kelseyhightower/kubernetes-the-hard-way

Update 01-prerequisites.md

The gcloud compute zones list command will return the following error if the Google Cloud SDK is not authenticated.

ERROR: (gcloud.compute.zones.list) Some requests did not succeed:
 - Failed to find project [Project ID]
+1 -0

1 comment

1 changed file

reoim

pr closed time in 5 months

pull request comment kelseyhightower/kubernetes-the-hard-way

Update 01-prerequisites.md

Thanks for the suggestion. I've reworded this section of the guide on master.

reoim

comment created time in 5 months

push event kelseyhightower/kubernetes-the-hard-way

Kelsey Hightower

commit sha 946d811095552926ed4b829acd7a941b434ba15c

Update to Kubernetes 1.15.3

view details

push time in 5 months

PR closed kelseyhightower/kubernetes-the-hard-way

Added portmap cni plugin

Added the CNI portmap plugin to allow for hostPort in Pod definitions. We could put this in a separate CNI configuration file, but I decided to keep both the portmap and bridge plugins in one conflist. Don't know what you think, but it works.

+17 -8

1 comment

1 changed file

amimof

pr closed time in 5 months

pull request comment kelseyhightower/kubernetes-the-hard-way

Added portmap cni plugin

Thanks for the suggestion but I prefer to stick to a single config and allow others to experiment after following the happy path.

amimof

comment created time in 5 months

issue closed kelseyhightower/kubernetes-the-hard-way

kubelet, worker-0 (combined from similar events): Failed create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox

root@ubuntu-xenial:~# kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.extensions/coredns created
service/kube-dns created
root@ubuntu-xenial:~# kubectl get pods -l k8s-app=kube-dns -n kube-system
NAME                       READY   STATUS              RESTARTS   AGE
coredns-699f8ddd77-r2z5v   0/1     ContainerCreating   0          112s
coredns-699f8ddd77-r58wz   0/1     ContainerCreating   0          112s
root@ubuntu-xenial:~/files# kubectl describe pod coredns-699f8ddd77-r58wz -n kube-system
Name:               coredns-699f8ddd77-r58wz
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               worker-0/10.240.0.20
Start Time:         Thu, 08 Nov 2018 10:42:47 +0000
Labels:             k8s-app=kube-dns
                    pod-template-hash=699f8ddd77
Annotations:        <none>
Status:             Pending
IP:
Controlled By:      ReplicaSet/coredns-699f8ddd77
Containers:
  coredns:
    Container ID:
    Image:          coredns/coredns:1.2.2
    Image ID:
    Ports:          53/UDP, 53/TCP, 9153/TCP
    Host Ports:     0/UDP, 0/TCP, 0/TCP
    Args:           -conf /etc/coredns/Corefile
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Limits:
      memory:  170Mi
    Requests:
      cpu:     100m
      memory:  70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from coredns-token-2lfx5 (ro)
Conditions:
  Type             Status
  Initialized      True
  Ready            False
  ContainersReady  False
  PodScheduled     True
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  coredns-token-2lfx5:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  coredns-token-2lfx5
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     CriticalAddonsOnly
                 node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                  Age                From               Message
  ----     ------                  ----               ----               -------
  Normal   Scheduled               59m                default-scheduler  Successfully assigned kube-system/coredns-699f8ddd77-r58wz to worker-0
  Warning  FailedCreatePodSandBox  59m                kubelet, worker-0  Failed create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "91383d1dd59c98b0f65881989b6a969b3d006dda444cc34fe16184f292c8fe97": failed to find network info for sandbox "91383d1dd59c98b0f65881989b6a969b3d006dda444cc34fe16184f292c8fe97"
  Warning  FailedCreatePodSandBox  58m                kubelet, worker-0  Failed create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "555a3d6eab5c3e76d1cc20fba1eb2f006dc89259475f2ba02135a91b03b7d8ba": failed to find network info for sandbox "555a3d6eab5c3e76d1cc20fba1eb2f006dc89259475f2ba02135a91b03b7d8ba"
  Warning  FailedCreatePodSandBox  58m                kubelet, worker-0  Failed create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "16a21d2129821239044e3c42cf76dccb8a6e1502d26bf2fa27bed768f7050e83": failed to find network info for sandbox "16a21d2129821239044e3c42cf76dccb8a6e1502d26bf2fa27bed768f7050e83"
  Warning  FailedCreatePodSandBox  58m                kubelet, worker-0  Failed create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "0291d022c46fc4f24690304ac7fe442e0d3f227ea9b579e346057372c12584ca": failed to find network info for sandbox "0291d022c46fc4f24690304ac7fe442e0d3f227ea9b579e346057372c12584ca"
  Warning  FailedCreatePodSandBox  58m                kubelet, worker-0  Failed create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "c1ff8444d7b86ef041d71f227939264e5128cdd151c451e0a4287fee4d9daa3f": failed to find network info for sandbox "c1ff8444d7b86ef041d71f227939264e5128cdd151c451e0a4287fee4d9daa3f"
  Warning  FailedCreatePodSandBox  57m                kubelet, worker-0  Failed create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "0fb82d75cf4d3f3f887b083337f8f63084ac1a1a5b9098b7125316a315173e60": failed to find network info for sandbox "0fb82d75cf4d3f3f887b083337f8f63084ac1a1a5b9098b7125316a315173e60"
  Warning  FailedCreatePodSandBox  57m                kubelet, worker-0  Failed create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "8c703875e3e16447c9d517e6ee16f3cb2ae2a96bdbc45297523a4c0e522e18b5": failed to find network info for sandbox "8c703875e3e16447c9d517e6ee16f3cb2ae2a96bdbc45297523a4c0e522e18b5"
  Warning  FailedCreatePodSandBox  57m                kubelet, worker-0  Failed create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "d0b6961d9fa4cf9032c4fd465517b70e69b72037d09407ec398d65f990eaae86": failed to find network info for sandbox "d0b6961d9fa4cf9032c4fd465517b70e69b72037d09407ec398d65f990eaae86"
  Warning  FailedCreatePodSandBox  57m                kubelet, worker-0  Failed create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "158bbab998175dc547ab75e515bbf4ab618a26fdbabc140da51aa6c7e17b2ec0": failed to find network info for sandbox "158bbab998175dc547ab75e515bbf4ab618a26fdbabc140da51aa6c7e17b2ec0"
  Warning  FailedCreatePodSandBox  4m (x244 over 57m) kubelet, worker-0  (combined from similar events): Failed create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "e01153d06de9d9fd082bed392f43d81404fe29333ed42105f6bfc7249f92577d": failed to find network info for sandbox "e01153d06de9d9fd082bed392f43d81404fe29333ed42105f6bfc7249f92577d"

closed time in 5 months

sandeepunix

issue closed kelseyhightower/kubernetes-the-hard-way

Verification's after bootstrapping not working - noob, help!

Hi all,

I am new to building K8s, and I have followed the instructions on GCP by cutting and pasting, so it's frustrating to hit a brick wall and not know why. I have a feeling it is going to be a simple thing I have yet to learn. Firstly, when I run this on any controller:

kubectl get componentstatuses --kubeconfig admin.kubeconfig

I get this:

The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?

Secondly, when I try to verify the NGINX health check as indicated in Section 8:

curl -H "Host: kubernetes.default.svc.cluster.local" -i http://127.0.0.1/healthz

I get this:

HTTP/1.1 502 Bad Gateway
Server: nginx/1.14.0 (Ubuntu)
Date: Fri, 14 Jun 2019 21:37:40 GMT
Content-Type: text/html
Content-Length: 182
Connection: keep-alive

<html>
<head><title>502 Bad Gateway</title></head>
<body bgcolor="white">
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.14.0 (Ubuntu)</center>
</body>
</html>

Any help is gladly received. Ant

closed time in 5 months

monk78anthony

issue closed kelseyhightower/kubernetes-the-hard-way

Unable to read /coredns.yaml (12-dns-addon.md)

Tried:

kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns.yaml

Got:

error: unable to read URL "https://storage.googleapis.com/kubernetes-the-hard-way/coredns.yaml", server reported 403 Forbidden, status code=403

closed time in 5 months

svatwork

PR closed kelseyhightower/kubernetes-the-hard-way

kube-scheduler change api version as the current is unsupported

Logs that confirm this change:

WARNING: the provided config file is an unsupported apiVersion ("componentconfig/v1alpha1"), which will be removed in future releases

WARNING: switch to command-line flags or update your config file apiVersion to "kubescheduler.config.k8s.io/v1alpha1"

WARNING: apiVersions at alpha-level are not guaranteed to be supported in future releases
+1 -1

1 comment

1 changed file

hmilkovi

pr closed time in 5 months

pull request comment kelseyhightower/kubernetes-the-hard-way

kube-scheduler change api version as the current is unsupported

This is fixed on master.

hmilkovi

comment created time in 5 months
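Per the warnings above, the scheduler config's apiVersion moves from "componentconfig/v1alpha1" to "kubescheduler.config.k8s.io/v1alpha1". A sketch of what the updated /etc/kubernetes/config/kube-scheduler.yaml referenced by the unit files above could look like; the kubeconfig path follows the guide's /var/lib/kubernetes layout, and the leaderElection setting is illustrative:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
leaderElection:
  leaderElect: true
```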

PR closed kelseyhightower/kubernetes-the-hard-way

Updating dead link with active link to create single node cluster

The old link of "getting started" (http://kubernetes.io/docs/getting-started-guides/) returns a 404.

This PR updates it to point to https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/

+1 -1

1 comment

1 changed file

kaushikchaubal

pr closed time in 5 months

pull request comment kelseyhightower/kubernetes-the-hard-way

Updating dead link with active link to create single node cluster

This is fixed on master and points to the high level getting started guides.

kaushikchaubal

comment created time in 5 months

PR closed kelseyhightower/kubernetes-the-hard-way

Include missing param --zone in Cleaning Up step

Include missing param --zone with the compute zone value specified in step 01

+1 -1

1 comment

1 changed file

oscarnevarezleal

pr closed time in 5 months

pull request comment kelseyhightower/kubernetes-the-hard-way

Include missing param --zone in Cleaning Up step

This is fixed on master.

oscarnevarezleal

comment created time in 5 months

PR closed kelseyhightower/kubernetes-the-hard-way

Add zone to cleanup command

When I ran the command as described in the tutorial I got the following:

ERROR: (gcloud.compute.instances.delete) Underspecified resource [controller-0, controller-1, controller-2, worker-0, worker-1, worker-2]. Specify the [--zone] flag.

I believe I do have gcloud set up correctly (although, perhaps I'm wrong)

> gcloud config list compute/zone
[compute]
zone = us-central1-a
+1 -1

1 comment

1 changed file

dimitropoulos

pr closed time in 5 months

pull request comment kelseyhightower/kubernetes-the-hard-way

Add zone to cleanup command

Good idea and based on other feedback this has been fixed on master.

dimitropoulos

comment created time in 5 months
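Passing `--zone` explicitly sidesteps the "Underspecified resource" error regardless of local configuration. A sketch that recovers the zone from `gcloud config list compute/zone` style output; the config text is captured in a variable so the snippet runs without gcloud, and with the SDK installed you could instead use `zone=$(gcloud config get-value compute/zone)`:

```shell
#!/bin/sh
# Sketch: parse the configured zone out of `gcloud config list compute/zone`
# style output. The captured text mirrors the transcript above.
cfg='[compute]
zone = us-central1-a'

zone=$(printf '%s\n' "$cfg" | awk -F' = ' '$1 == "zone" {print $2}')
echo "$zone"   # prints us-central1-a

# Then pass it explicitly to the cleanup command, e.g.:
#   gcloud compute instances delete controller-0 --zone "$zone"
```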

push event kelseyhightower/kubernetes-the-hard-way

Kelsey Hightower

commit sha c9b5e7862cf24e7348a38a6947e8501998958f33

Update to Kubernetes 1.15.3

view details

push time in 5 months

pull request comment kelseyhightower/kubernetes-the-hard-way

Use RuntimeClass for supporting sandboxed pods

I've chosen to drop gVisor support as this is making things a bit more complicated than required for teaching people the core of Kubernetes.

ianlewis

comment created time in 5 months
