issue closed banzaicloud/pipeline

on-premises cluster: Issues with traefik

Hi, I'm a lecturer in Computer Science at a top-rated French school of engineering (CentraleSupelec). Banzai Pipeline seems to be a very promising project. I tried to install it on our on-premises cluster, but I failed at the Traefik deployment, which never completes. I followed the pke installation guide, even though I have ESXi without vSphere. Everything seems to end well, with a healthy Kubernetes cluster running MetalLB.

sudo curl -vL https://banzaicloud.com/downloads/pke/latest -o /usr/local/bin/pke
sudo chmod +x /usr/local/bin/pke
export PATH=$PATH:/usr/local/bin/

export master=192.168.0.81
export server=192.168.0.3
export port=443
export datastore=Disk
export username=root
export password=j01m01a70
export lbrange=192.168.0.110-120
sudo /usr/local/bin/pke install master --kubernetes-advertise-address=$master --kubernetes-api-server=$master:6443 --vsphere-server=$server --vsphere-port=$port --vsphere-datastore=$datastore --vsphere-username=$username --vsphere-password=$password --lb-range=$lbrange

Result:

[kubernetes-version] Kubernetes version "1.17.0" is supported
[container-runtime] running
rpm [--query centos-release]
rpm [--query centos-release] err: exec: "rpm": executable file not found in $PATH 329.499µs
rpm [--query redhat-release]
rpm [--query redhat-release] err: exec: "rpm": executable file not found in $PATH 222.905µs
/etc/redhat-release: "", err: open /etc/redhat-release: no such file or directory
/usr/bin/lsb_release [-si]
  out> Ubuntu
  out> 
/usr/bin/lsb_release [-si] err: <nil> 124.380563ms
/usr/bin/lsb_release [-sr]
  out> 18.04
  out> 
...
kubectl [apply -f -] err: <nil> 886.063291ms
kubectl [apply -f https://raw.githubusercontent.com/google/metallb/v0.8.1/manifests/metallb.yaml]
  out> namespace/metallb-system created
  out> podsecuritypolicy.policy/speaker created
  out> serviceaccount/controller created
  out> serviceaccount/speaker created
  out> clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
  out> clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
  out> role.rbac.authorization.k8s.io/config-watcher created
  out> clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
  out> clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
  out> rolebinding.rbac.authorization.k8s.io/config-watcher created
  out> daemonset.apps/speaker created
  out> deployment.apps/controller created
kubectl [apply -f https://raw.githubusercontent.com/google/metallb/v0.8.1/manifests/metallb.yaml] err: <nil> 1.635117521s
kubectl [apply -f /etc/kubernetes/metallb-config.yaml]
  out> configmap/config created
kubectl [apply -f /etc/kubernetes/metallb-config.yaml] err: <nil> 490.563573ms
skipping NoSchedule taint removal

My cluster has one master and three workers:

kubectl get nodes -o wide    (master1: Tue May  5 10:04:56 2020)

NAME      STATUS   ROLES    AGE     VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
master1   Ready    master   8h      v1.17.0   192.168.0.81   <none>        Ubuntu 18.04.4 LTS   4.15.0-99-generic   containerd://1.3.3
slave1    Ready    <none>   7h58m   v1.17.0   192.168.0.84   <none>        Ubuntu 18.04.4 LTS   4.15.0-99-generic   containerd://1.3.3
slave2    Ready    <none>   7h58m   v1.17.0   192.168.0.85   <none>        Ubuntu 18.04.4 LTS   4.15.0-99-generic   containerd://1.3.3
slave3    Ready    <none>   7h58m   v1.17.0   192.168.0.86   <none>        Ubuntu 18.04.4 LTS   4.15.0-99-generic   containerd://1.3.3

Then, I installed pipeline:

curl https://getpipeline.sh | sh
sudo banzai pipeline init --provider=k8s
sudo banzai pipeline up --workspace="/home/hdlbq/.banzai/pipeline/default"

Result:

kubectl get pods --all-namespaces -o wide    (master1: Tue May  5 09:46:42 2020)

NAMESPACE        NAME                                       READY   STATUS    RESTARTS   AGE     IP              NODE      NOMINATED NODE   READINESS GATES
banzaicloud      cadence-frontend-dcc47b89f-tssgl           1/1     Running   0          9m24s   10.20.140.71    slave2    <none>           <none>
banzaicloud      cadence-history-6bbd984884-qm7mf           1/1     Running   0          9m24s   10.20.140.72    slave2    <none>           <none>
banzaicloud      cadence-matching-5bb86678cd-bkh2l          1/1     Running   1          9m24s   10.20.140.70    slave2    <none>           <none>
banzaicloud      cadence-worker-7cf54dfcf5-s4bzc            1/1     Running   0          9m24s   10.20.140.195   slave1    <none>           <none>
banzaicloud      mysql-657cf4cf99-dcvtl                     1/1     Running   0          16m     10.20.140.194   slave1    <none>           <none>
banzaicloud      tiller-deploy-68776996db-5w28q             1/1     Running   0          17m     10.20.140.65    slave2    <none>           <none>
banzaicloud      traefik-6d8d58cfcd-wlkzk                   1/1     Running   0          16m     10.20.140.67    slave2    <none>           <none>
banzaicloud      vault-0                                    4/4     Running   4          16m     10.20.77.5      slave3    <none>           <none>
kube-system      auto-approver-7d75c87f67-d28lf             1/1     Running   1          8h      10.20.137.71    master1   <none>           <none>
kube-system      calico-kube-controllers-6b64bcd855-jmmvr   1/1     Running   1          8h      10.20.137.69    master1   <none>           <none>
kube-system      calico-node-978nn                          1/1     Running   1          8h      192.168.0.81    master1   <none>           <none>
kube-system      calico-node-q2mwl                          1/1     Running   1          7h39m   192.168.0.86    slave3    <none>           <none>
kube-system      calico-node-qzgrm                          1/1     Running   1          7h39m   192.168.0.84    slave1    <none>           <none>
kube-system      calico-node-v7p9h                          1/1     Running   1          7h39m   192.168.0.85    slave2    <none>           <none>
kube-system      coredns-6c9b57d966-cwnj5                   1/1     Running   1          8h      10.20.137.70    master1   <none>           <none>
kube-system      coredns-6c9b57d966-vlrlz                   1/1     Running   1          8h      10.20.137.72    master1   <none>           <none>
kube-system      etcd-master1                               1/1     Running   1          8h      192.168.0.81    master1   <none>           <none>
kube-system      kube-apiserver-master1                     1/1     Running   1          8h      192.168.0.81    master1   <none>           <none>
kube-system      kube-controller-manager-master1            1/1     Running   2          8h      192.168.0.81    master1   <none>           <none>
kube-system      kube-proxy-d4p2q                           1/1     Running   1          7h39m   192.168.0.84    slave1    <none>           <none>
kube-system      kube-proxy-kr6w6                           1/1     Running   1          7h39m   192.168.0.85    slave2    <none>           <none>
kube-system      kube-proxy-nzw5f                           1/1     Running   1          8h      192.168.0.81    master1   <none>           <none>
kube-system      kube-proxy-t97mw                           1/1     Running   1          7h39m   192.168.0.86    slave3    <none>           <none>
kube-system      kube-scheduler-master1                     1/1     Running   2          8h      192.168.0.81    master1   <none>           <none>
kube-system      local-path-provisioner-74bbd6b97d-k6ppl    1/1     Running   2          8h      10.20.77.3      slave3    <none>           <none>
metallb-system   controller-6bcfdfd677-f2mjk                1/1     Running   1          8h      10.20.77.4      slave3    <none>           <none>
metallb-system   speaker-6fxvs                              1/1     Running   1          7h22m   192.168.0.86    slave3    <none>           <none>
metallb-system   speaker-dq5xt                              1/1     Running   1          7h21m   192.168.0.85    slave2    <none>           <none>
metallb-system   speaker-swcl7                              1/1     Running   1          7h22m   192.168.0.84    slave1    <none>           <none>
metallb-system   speaker-sxngx                              1/1     Running   1          8h      192.168.0.81    master1   <none>           <none>

Trace in the console:

INFO[0000] docker run --rm --net=host --user=0 docker.io/banzaicloud/pipeline-installer:latest sh -c tar czh /export | base64 
INFO[0168] Pulling Banzai Cloud Pipeline installer image... 
INFO[0168] /usr/bin/docker pull docker.io/banzaicloud/pipeline-installer:latest 
latest: Pulling from banzaicloud/pipeline-installer
Digest: sha256:afe839f2d960d42fcefaa2a33a3fb6cbef0f3618a66bd9020e61c3e63358f15c
Status: Image is up to date for banzaicloud/pipeline-installer:latest
docker.io/banzaicloud/pipeline-installer:latest
INFO[0176] docker run --rm --net=host --user=0 -v /home/hdlbq/.banzai/pipeline/default:/workspace -v /home/hdlbq/.banzai/pipeline/default/state.tf.json:/terraform/state.tf.json -v /home/hdlbq/.banzai/pipeline/default/.terraform/terraform.tfstate:/terraform/.terraform/terraform.tfstate -ti -e KUBECONFIG docker.io/banzaicloud/pipeline-installer:latest terraform init -input=false -force-copy 
Initializing modules...

Initializing the backend...

Successfully configured the backend "local"! Terraform will automatically
use this backend unless the backend configuration changes.

Initializing provider plugins...

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
INFO[0181] Deploying Banzai Cloud Pipeline to Kubernetes cluster... 
INFO[0181] docker run --rm --net=host --user=0 -v /home/hdlbq/.banzai/pipeline/default:/workspace -v /home/hdlbq/.banzai/pipeline/default/state.tf.json:/terraform/state.tf.json -v /home/hdlbq/.banzai/pipeline/default/.terraform/terraform.tfstate:/terraform/.terraform/terraform.tfstate -ti -e KUBECONFIG docker.io/banzaicloud/pipeline-installer:latest terraform apply -var workdir=/workspace -refresh=true -auto-approve 
data.local_file.values: Refreshing state...
module.dex.data.template_file.dex-storage-k8s: Refreshing state...
module.cadence.data.template_file.mysql[0]: Refreshing state...
module.anchore.data.helm_repository.stable: Refreshing state...
module.cadence.data.helm_repository.banzaicloud-stable: Refreshing state...
module.dex.data.helm_repository.banzaicloud-stable: Refreshing state...
module.pipeline.data.helm_repository.banzaicloud-stable: Refreshing state...
module.mysql.data.helm_repository.stable[0]: Refreshing state...
module.vault.data.helm_repository.banzaicloud-stable[0]: Refreshing state...
module.ui.data.helm_repository.banzaicloud-stable[0]: Refreshing state...
module.pipeline.random_string.tokensigningkey: Creating...
random_string.anchore-db-password: Creating...
random_string.anchore-admin-password: Creating...
random_string.dex-db-password: Creating...
module.dex.random_string.dex_client_secret: Creating...
random_string.anchore-db-password: Creation complete after 0s [id=ovEBLxb2NvD3LBl8n9q2Y4GZjmWObYaH]
module.pipeline.random_string.tokensigningkey: Creation complete after 0s [id=jDsYmjuaqQHXxktTuKighq7hYvTq6Hyg]
random_string.anchore-admin-password: Creation complete after 0s [id=scXP3Dr1inDJe4HZ1DPjL57bq5WGJD4f]
random_string.vault-db-password: Creating...
random_string.cadence-db-password: Creating...
random_string.cicd-db-password: Creating...
random_string.dex-db-password: Creation complete after 0s [id=1ZpraQEqiqbiOqMKR7cLoDZkWTt8C2QV]
module.dex.random_string.dex_client_secret: Creation complete after 0s [id=3uAzOi3vPq4FJIYCpKKEQIs5o90Aj0Nh]
random_string.vault-db-password: Creation complete after 0s [id=mBsL0W4p5z9qGzS8X1MEGuJwRR1vXMC6]
random_string.cadence-db-password: Creation complete after 0s [id=4CUpln7L1SjiWE3RpBMl28zGgoA2oKiO]
random_string.pipeline-user-password: Creating...
random_string.pipeline-db-password: Creating...
random_string.cicd-db-password: Creation complete after 0s [id=eBmIC6dMAzooictHdcpsDSTmdGw53w51]
data.template_file.vault-postgres: Refreshing state...
data.template_file.anchore-postgres: Refreshing state...
data.template_file.vault-mysql: Refreshing state...
random_string.pipeline-db-password: Creation complete after 0s [id=l19lwzKTcfhiqqRliTxb2nDwyJhZgFgU]
random_string.pipeline-user-password: Creation complete after 0s [id=BlgvYd9FeZYeSigi]
data.template_file.dex-postgres: Refreshing state...
data.template_file.cadence-mysql: Refreshing state...
data.template_file.dex-mysql: Refreshing state...
data.template_file.cicd-postgres: Refreshing state...
kubernetes_namespace.namespace: Creating...
data.template_file.cicd-mysql: Refreshing state...
data.template_file.pipeline-mysql: Refreshing state...
data.template_file.pipeline-postgres: Refreshing state...
kubernetes_namespace.namespace: Creation complete after 0s [id=banzaicloud]
kubernetes_service_account.tiller: Creating...
module.dex.data.template_file.connector_static: Refreshing state...
kubernetes_service_account.tiller: Creation complete after 1s [id=banzaicloud/banzaicloud-tiller]
kubernetes_cluster_role_binding.tiller[0]: Creating...
kubernetes_cluster_role_binding.tiller[0]: Creation complete after 0s [id=banzaicloud-tiller]
kubernetes_deployment.tiller: Creating...
kubernetes_deployment.tiller: Still creating... [10s elapsed]
kubernetes_deployment.tiller: Still creating... [20s elapsed]
kubernetes_deployment.tiller: Still creating... [30s elapsed]
kubernetes_deployment.tiller: Still creating... [40s elapsed]
kubernetes_deployment.tiller: Creation complete after 47s [id=banzaicloud/tiller-deploy]
module.mysql.kubernetes_secret.mysql-init[0]: Creating...
module.mysql.kubernetes_secret.mysql-init[0]: Creation complete after 0s [id=banzaicloud/mysql-init]
module.traefik.data.template_file.traefik: Refreshing state...
module.mysql.data.template_file.mysql[0]: Refreshing state...
module.vault.data.template_file.vault[0]: Refreshing state...
module.cadence.data.template_file.persistence_sql[0]: Refreshing state...
module.dex.data.template_file.dex-storage-sql: Refreshing state...
module.vault.data.template_file.vault-storage[0]: Refreshing state...
module.vault.helm_release.vault[0]: Creating...
module.cadence.helm_release.cadence: Creating...
module.mysql.helm_release.mysql[0]: Creating...
module.traefik.helm_release.traefik[0]: Creating...
module.vault.helm_release.vault[0]: Still creating... [10s elapsed]
module.cadence.helm_release.cadence: Still creating... [10s elapsed]
module.mysql.helm_release.mysql[0]: Still creating... [10s elapsed]
module.traefik.helm_release.traefik[0]: Still creating... [10s elapsed]
...
module.cadence.helm_release.cadence: Creation complete after 9m1s [id=cadence]
module.traefik.helm_release.traefik[0]: Still creating... [9m1s elapsed]
module.traefik.helm_release.traefik[0]: Still creating... [9m11s elapsed]
module.traefik.helm_release.traefik[0]: Still creating... [9m21s elapsed]
module.traefik.helm_release.traefik[0]: Still creating... [9m31s elapsed]
module.traefik.helm_release.traefik[0]: Still creating... [9m41s elapsed]
module.traefik.helm_release.traefik[0]: Still creating... [9m51s elapsed]
module.traefik.helm_release.traefik[0]: Still creating... [10m1s elapsed]
module.traefik.helm_release.traefik[0]: Still creating... [10m11s elapsed]
module.traefik.helm_release.traefik[0]: Still creating... [10m21s elapsed]
module.traefik.helm_release.traefik[0]: Still creating... [10m31s elapsed]
module.traefik.helm_release.traefik[0]: Still creating... [10m41s elapsed]
module.traefik.helm_release.traefik[0]: Still creating... [10m51s elapsed]
module.traefik.helm_release.traefik[0]: Still creating... [11m1s elapsed]
module.traefik.helm_release.traefik[0]: Still creating... [11m11s elapsed]
...
module.traefik.helm_release.traefik[0]: Still creating... [1h28m42s elapsed]
...

The Kubernetes services are:

kubectl get service --all-namespaces
NAMESPACE     NAME                        TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
banzaicloud   cadence-frontend            ClusterIP      10.10.233.2     <none>        7933/TCP                     3m22s
banzaicloud   cadence-frontend-headless   ClusterIP      None            <none>        7933/TCP,9090/TCP            3m22s
banzaicloud   cadence-history-headless    ClusterIP      None            <none>        7934/TCP,9090/TCP            3m22s
banzaicloud   cadence-matching-headless   ClusterIP      None            <none>        7935/TCP,9090/TCP            3m22s
banzaicloud   cadence-worker-headless     ClusterIP      None            <none>        7939/TCP,9090/TCP            3m22s
banzaicloud   mysql                       ClusterIP      10.10.54.12     <none>        3306/TCP                     10m
banzaicloud   traefik                     LoadBalancer   10.10.119.125   <pending>     80:32665/TCP,443:31326/TCP   10m
banzaicloud   vault                       ClusterIP      10.10.3.214     <none>        8200/TCP,8201/TCP            10m
default       kubernetes                  ClusterIP      10.10.0.1       <none>        443/TCP                      7h56m
kube-system   kube-dns                    ClusterIP      10.10.0.10      <none>        53/UDP,53/TCP,9153/TCP       7h56m
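
The traefik service above is stuck with a pending EXTERNAL-IP, which points at MetalLB address assignment rather than at Traefik itself. As a sketch (plain kubectl commands; the service, deployment and ConfigMap names are taken from the listings above), this is how such a pending LoadBalancer is usually investigated:

kubectl -n banzaicloud describe svc traefik             # events show whether an address was requested and why none was assigned
kubectl -n metallb-system logs deploy/controller        # the MetalLB controller performs the address assignment
kubectl -n metallb-system get configmap config -o yaml  # the address pool that pke wrote from --lb-range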

Thanks for any hint. Henri

closed time in 2 months

hdlbq

issue comment banzaicloud/pipeline

on-premises cluster: Issues with traefik

I solved the problem by re-installing MetalLB with my own configuration.
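
For reference, the MetalLB v0.8.x configuration being replaced here is a single ConfigMap. A minimal Layer 2 sketch, assuming the same 192.168.0.110-120 range passed to pke via --lb-range (the author's actual configuration is not shown in the thread):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.0.110-192.168.0.120   # expanded form of the --lb-range value above
EOF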

hdlbq

comment created time in 2 months

issue comment jupyterhub/zero-to-jupyterhub-k8s

proxy-public service stalled/pending

@angrymeir I'm also using ESXi on my academic cluster. Could you please describe your solution? Thanks

miramar-labs

comment created time in 2 months

issue comment banzaicloud/pipeline

on-premises cluster: Issues with traefik

Hi,

thanks for your answer. The pke install run before Pipeline comes with fully running MetalLB pods. Does it supply a load-balancing service to Kubernetes? Best regards, Henri

On 5 May 2020 at 11:26, Márk Sági-Kazár notifications@github.com wrote:

Traefik by default creates a loadbalancer service. If you don't have a loadbalancer integration installed in your on-prem environment, it won't work.

You can overwrite the service type in your values.yaml by setting the following value:

ingressHostPort: true
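
For context, the fragment below is a sketch of how that could look in the workspace values file used by 'banzai pipeline up' (/home/hdlbq/.banzai/pipeline/default/values.yaml); the traefik: nesting is an assumption, since the reply above only names the ingressHostPort key:

# values.yaml fragment (nesting assumed, see above)
traefik:
  ingressHostPort: true   # expose Traefik on host ports instead of asking for a LoadBalancer service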

hdlbq

comment created time in 2 months

issue opened banzaicloud/pipeline

on-premises cluster: Issues with traefik

created time in 2 months

issue closed kubernetes-incubator/external-storage

Dynamic provisioning doesn't work

Hello, I installed Kubernetes the kubeadm way on my own cluster of 8 VMs: 2 HAProxy nodes (LB with keepalived), 3 masters, 3 slaves and a local registry. I installed Helm, and everything seems to work well:

kube-system   coredns-66bff467f8-c6zdj                 1/1     Running   2          9d    10.244.4.6     slave2    <none>           <none>
kube-system   coredns-66bff467f8-wl8s4                 1/1     Running   2          9d    10.244.4.7     slave2    <none>           <none>
kube-system   etcd-master1                             1/1     Running   10         9d    192.168.0.81   master1   <none>           <none>
kube-system   etcd-master2                             1/1     Running   5          9d    192.168.0.82   master2   <none>           <none>
kube-system   etcd-master3                             1/1     Running   3          9d    192.168.0.83   master3   <none>           <none>
kube-system   kube-apiserver-master1                   1/1     Running   15         9d    192.168.0.81   master1   <none>           <none>
kube-system   kube-apiserver-master2                   1/1     Running   8          9d    192.168.0.82   master2   <none>           <none>
kube-system   kube-apiserver-master3                   1/1     Running   6          9d    192.168.0.83   master3   <none>           <none>
kube-system   kube-controller-manager-master1          1/1     Running   9          9d    192.168.0.81   master1   <none>           <none>
kube-system   kube-controller-manager-master2          1/1     Running   7          9d    192.168.0.82   master2   <none>           <none>
kube-system   kube-controller-manager-master3          1/1     Running   3          9d    192.168.0.83   master3   <none>           <none>
kube-system   kube-flannel-ds-amd64-692cr              1/1     Running   3          9d    192.168.0.84   slave1    <none>           <none>
kube-system   kube-flannel-ds-amd64-crgrx              1/1     Running   3          9d    192.168.0.86   slave3    <none>           <none>
kube-system   kube-flannel-ds-amd64-g7ctn              1/1     Running   4          9d    192.168.0.85   slave2    <none>           <none>
kube-system   kube-flannel-ds-amd64-j6vwg              1/1     Running   4          9d    192.168.0.83   master3   <none>           <none>
kube-system   kube-flannel-ds-amd64-rtktw              1/1     Running   7          9d    192.168.0.81   master1   <none>           <none>
kube-system   kube-flannel-ds-amd64-z6kn7              1/1     Running   6          9d    192.168.0.82   master2   <none>           <none>
kube-system   kube-proxy-7lgsb                         1/1     Running   3          9d    192.168.0.81   master1   <none>           <none>
kube-system   kube-proxy-9mxzh                         1/1     Running   3          9d    192.168.0.82   master2   <none>           <none>
kube-system   kube-proxy-b8xv9                         1/1     Running   2          9d    192.168.0.84   slave1    <none>           <none>
kube-system   kube-proxy-dl8fr                         1/1     Running   2          9d    192.168.0.83   master3   <none>           <none>
kube-system   kube-proxy-jmhqc                         1/1     Running   2          9d    192.168.0.85   slave2    <none>           <none>
kube-system   kube-proxy-vs4q4                         1/1     Running   2          9d    192.168.0.86   slave3    <none>           <none>
kube-system   kube-scheduler-master1                   1/1     Running   9          9d    192.168.0.81   master1   <none>           <none>
kube-system   kube-scheduler-master2                   1/1     Running   5          9d    192.168.0.82   master2   <none>           <none>
kube-system   kube-scheduler-master3                   1/1     Running   4          9d    192.168.0.83   master3   <none>           <none>
kube-system   tiller-deploy-5c4cfb859c-bglsb           1/1     Running   1          9d    10.244.5.3     slave3    <none>           <none>

The next step was to add persistent storage. I followed this very precise tutorial: https://blog.exxactcorp.com/deploying-dynamic-nfs-provisioning-in-kubernetes/. My external NFS server is working and reachable from any node.

● nfs-server.service - NFS server and services
   Loaded: loaded (/lib/systemd/system/nfs-server.service; enabled; vendor preset: enabled)
   Active: active (exited) since Fri 2020-04-24 16:52:00 CEST; 16h ago
 Main PID: 7058 (code=exited, status=0/SUCCESS)
    Tasks: 0 (limit: 4701)
   Memory: 0B
   CGroup: /system.slice/nfs-server.service

The nfs-client-provisioner seems to be ok:

Name:         nfs-client-provisioner-9dfb69cdb-r29vg
Namespace:    default
Priority:     0
Node:         slave1/192.168.0.84
Start Time:   Fri, 24 Apr 2020 17:54:29 +0200
Labels:       app=nfs-client-provisioner
              pod-template-hash=9dfb69cdb
Annotations:  <none>
Status:       Running
IP:           10.244.3.6
IPs:
  IP:           10.244.3.6
Controlled By:  ReplicaSet/nfs-client-provisioner-9dfb69cdb
Containers:
  nfs-client-provisioner:
    Container ID:   docker://c3d88415320c067ea9d1a288f9a2cf02092ce475dafdbaaadc0d101c6346f956
    Image:          quay.io/external_storage/nfs-client-provisioner:latest
    Image ID:       docker-pullable://quay.io/external_storage/nfs-client-provisioner@sha256:022ea0b0d69834b652a4c53655d78642ae23f0324309097be874fb58d09d2919
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Fri, 24 Apr 2020 17:54:33 +0200
    Ready:          True
    Restart Count:  0
    Environment:
      PROVISIONER_NAME:  registry/nfsVol
      NFS_SERVER:        192.168.0.87
      NFS_PATH:          /nfsVol
    Mounts:
      /persistentvolumes from nfs-client-root (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from nfs-client-provisioner-token-5s49f (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  nfs-client-root:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    192.168.0.87
    Path:      /nfsVol
    ReadOnly:  false
  nfs-client-provisioner-token-5s49f:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  nfs-client-provisioner-token-5s49f
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>

the "rbac.yaml" has been created:

run-nfs-client-provisioner                             ClusterRole/nfs-client-provisioner-runner                                          17h

Unfortunately, dynamic provisioning fails. Even though the tutorial explains that the provisioner will create the persistent volume, the persistent volume claim remains Pending after I create the storage class and the claim. I tried creating the storage class both as default and as non-default, with the same result.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: registry/nfsVol
parameters:
  archiveOnDelete: "false"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Mi

NAME                         STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS          AGE
persistentvolumeclaim/pvc1   Pending                                      managed-nfs-storage   8h

NAME                                                         PROVISIONER       RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
storageclass.storage.k8s.io/managed-nfs-storage (default)    registry/nfsVol   Delete          Immediate           false                  8h
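
A Pending claim with this kind of external provisioner is usually diagnosed from the claim's events and the provisioner's own log; a sketch using the names from the output above (one common cause is the StorageClass provisioner field not matching the deployment's PROVISIONER_NAME environment variable, although here both read registry/nfsVol):

kubectl describe pvc pvc1                             # events name the provisioner that was asked and the error, if any
kubectl logs deploy/nfs-client-provisioner            # the external provisioner logs every provisioning attempt
kubectl get storageclass managed-nfs-storage -o yaml  # confirm the provisioner and parameters as applied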

I probably missed something important. Thanks for any hint. Henri

closed time in 2 months

hdlbq

issue comment kubernetes-incubator/external-storage

Dynamic provisioning doesn't work

Solved my problem using this guide.

hdlbq

comment created time in 2 months

issue opened kubernetes-incubator/external-storage

Dynamic provisioning doesn't work

created time in 2 months
