Rio Kierkels (rio) · Ambassadors Lab · Whatever, Earth

rio/dex 1

OpenID Connect Identity (OIDC) and OAuth 2.0 Provider with Pluggable Connectors

rio/argo-cd 0

Declarative continuous deployment for Kubernetes.

rio/authelia 0

The Single Sign-On Multi-Factor portal for web apps

rio/charts 0

Curated applications for Kubernetes

rio/cilium 0

eBPF-based Networking, Security, and Observability

rio/cluster-network-addons-operator 0

Deploy additional networking components on top of your Kubernetes cluster

rio/cockroach 0

CockroachDB - the open source, cloud-native SQL database.

rio/cog-book 0

Cog's Documentation in book form

rio/cpp_notes 0

just my notes on learning c++

started navidrome/navidrome

started time in 2 days

started benbjohnson/clock

started time in 4 days

issue comment fluxcd/flux2

Unreachable readiness probes with k3s + Cilium CNI and default NetworkPolicies

I actually ran into this when doing the exact same k3s + Cilium thing, but with Argo CD. It's actually sort of the fault of k3s, which deploys its own network policy controller. When Cilium then sees the argocd/flux/any-other network policies, it enforces them as normal, but because of some iptables rules that k3s' network policy controller injects, Cilium labels any health/readiness probes as coming from "world" and drops the packets according to the network policy. That obviously makes all the probes fail, hence this issue.
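
If you want to confirm this is what's happening, one way (just a sketch; it grabs the first Cilium agent pod it finds) is to stream Cilium's drop events while a probe fires and look for packets labelled as coming from world:

kubectl -n kube-system exec "$(kubectl -n kube-system get pods -l k8s-app=cilium -o name | head -n 1)" -- cilium monitor --type drop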

I've made some changes to the cilium docs (https://github.com/cilium/cilium/pull/16755) viewable at https://docs.cilium.io/en/v1.10/gettingstarted/k3s/. In short, it means adding --disable-network-policy to the k3s server, after which things should just start working again. No need to fully reinstall the cluster; k3s basically treats this as a server upgrade.
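
For reference, on an installer-managed k3s that amounts to re-running the install script with the extra server flag (a sketch; adjust to however you provision k3s):

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC='server --disable-network-policy' sh -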

clementnuss

comment created time in a month

started slok/sloth

started time in a month

started harvester/harvester

started time in a month

fork rio/authelia

The Single Sign-On Multi-Factor portal for web apps

https://www.authelia.com

fork in 2 months

pull request review event

issue comment traefik/traefik-helm-chart

CRD's in helm 3 do not support templating

As a side note, the helm website mentions using a separate chart for CRD management. This might be a way to keep supporting k8s <v1.16 after 10.0.0, by letting people install the CRD chart first and then the newer chart with the --skip-crds flag, e.g.:

helm install deprecated-crds traefik/deprecated-crds
helm install traefik-without-crd traefik/traefik --version 10.0.0 --skip-crds

rio

comment created time in 2 months

issue comment traefik/traefik-helm-chart

CRD's in helm 3 do not support templating

After some testing with kind, different versions of kubernetes (v1.21, v1.16, v1.15), and different versions of helm (v3.6, v3.2), I've come to the conclusion that helm does not check whether an api version is available before it tries to install a CRD. This means that come kubernetes v1.22, when the apiextensions.k8s.io/v1beta1 api is removed, versions <9.20.0 of this helm chart will no longer install either.

That's why I propose to revert the change made in 9.20.0, removing the templating in the CRD's and leaving only apiextensions.k8s.io/v1beta1 so the chart remains compatible with k8s <v1.22, and to release that chart as 9.20.1.

Next, replace all apiextensions.k8s.io/v1beta1 CRD's with apiextensions.k8s.io/v1 and release that chart as 10.0.0, marking it incompatible with k8s <v1.16.
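
As a quick sanity check when testing that split, you can list which apiextensions versions a given cluster actually serves:

kubectl api-versions | grep apiextensions.k8s.io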

For now I've had to pin all our manifests to traefik 9.19.2 because all our automated test deployment pipelines are failing :(

rio

comment created time in 2 months

issue opened traefik/traefik-helm-chart

CRD's in helm 3 do not support templating

The changes in 9.20.0 basically break all helm 3 installs: helm 3 applies the files in crds/ without rendering them, so the apiserver tries to parse the golang template directives in the CRD as json.

Error: failed to install CRD crds/ingressroute.yaml: error parsing : json: line 0: invalid character '{' looking for beginning of object key string
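
An easy way to spot the directives that trip this up, assuming a checkout of the chart repo, is to grep the crds/ directory for template markers:

grep -rn '{{' crds/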

created time in 2 months

pull request comment cilium/cilium

docs(k3s): add back the flag to disable network policies

Ah, I just now noticed that it's removing the flag from the agent and that it was never set for the server. In that case that commit is correct, because only the server supports that flag, not the agent.

rio

comment created time in 3 months

PR opened cilium/cilium

docs(k3s): add back the flag to disable network policies

This was actually added in https://github.com/cilium/cilium/pull/13783 but got removed again after some other changes. That flag is required because otherwise any liveness or readiness probes will be classified as world traffic and get dropped if network policies are installed.

Signed-off-by: Rio Kierkels <riokierkels@gmail.com>

+2 -2

0 comments

1 changed file

pr created time in 3 months

push event rio/cilium

Rio Kierkels

commit sha 22187f225143030afd97b831e99533e416250afd

docs(k3s): add back the flag to disable network policies

This was actually added in https://github.com/cilium/cilium/pull/13783 but got removed again after some other changes. That flag is required because otherwise any liveness or readiness probes will be classified as *world* traffic and get dropped if network policies are installed.

Signed-off-by: Rio Kierkels <riokierkels@gmail.com>

view details

push time in 3 months

push event rio/cilium

Rio Kierkels

commit sha 9945d6bf6be7c861c8422f0341f949e838813503

docs(k3s): add back the flag to disable network policies

This was actually added in https://github.com/cilium/cilium/pull/13783 but got removed again after some other changes. That flag is required because otherwise any liveness or readiness probes will be classified as *world* traffic and get dropped if network policies are installed.

view details

push time in 3 months

fork rio/cilium

eBPF-based Networking, Security, and Observability

https://www.cilium.io

fork in 3 months

issue comment ValveSoftware/steam-for-linux

Steam beta 1624329075 regression: hangs when using big picture mode without 32-bit libXi.so.6 on host

Can confirm this works on the beta release on ubuntu 20.04. Thanks!

smcv

comment created time in 3 months

started loft-sh/vcluster

started time in 3 months

issue closed loft-sh/vcluster

Default kind admin clusterrole is not enough to deploy a vcluster

I've already mentioned it in #42 but I thought I'd create a proper issue for it.

When trying to deploy a vcluster using a service account that has a namespaced rolebinding to the admin clusterrole, the helm command fails because it tries to add permissions that the admin clusterrole doesn't have. Below is the error output of vcluster create.

~ $ vcluster create vcluster-1
[info]   execute command: helm upgrade vcluster-1 vcluster --repo https://charts.loft.sh --version 0.3.0 --kubeconfig /tmp/247832214 --namespace vcluster --install --repository-config='' --values /tmp/285100285
[fatal]  error executing helm upgrade vcluster-1 vcluster --repo https://charts.loft.sh --version 0.3.0 --kubeconfig /tmp/247832214 --namespace vcluster --install --repository-config='' --values /tmp/285100285: Error: UPGRADE FAILED: failed to create resource: roles.rbac.authorization.k8s.io "vcluster-1" is forbidden: user "system:serviceaccount:vcluster:default" (groups=["system:serviceaccounts" "system:serviceaccounts:vcluster" "system:authenticated"]) is attempting to grant RBAC permissions not currently held:
{APIGroups:[""], Resources:["configmaps"], Verbs:["*"]}
{APIGroups:[""], Resources:["endpoints"], Verbs:["*"]}
{APIGroups:[""], Resources:["events"], Verbs:["*"]}
{APIGroups:[""], Resources:["persistentvolumeclaims"], Verbs:["*"]}
{APIGroups:[""], Resources:["pods"], Verbs:["*"]}
{APIGroups:[""], Resources:["pods/attach"], Verbs:["*"]}
{APIGroups:[""], Resources:["pods/exec"], Verbs:["*"]}
{APIGroups:[""], Resources:["pods/log"], Verbs:["*"]}
{APIGroups:[""], Resources:["pods/portforward"], Verbs:["*"]}
{APIGroups:[""], Resources:["pods/proxy"], Verbs:["*"]}
{APIGroups:[""], Resources:["secrets"], Verbs:["*"]}
{APIGroups:[""], Resources:["services"], Verbs:["*"]}
{APIGroups:[""], Resources:["services/proxy"], Verbs:["*"]}
{APIGroups:["networking.k8s.io"], Resources:["ingresses"], Verbs:["*"]}

The admin clusterrole for kind v0.11.1 contains these permissions:

Name:         admin
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
  Resources                                       Non-Resource URLs  Resource Names  Verbs
  ---------                                       -----------------  --------------  -----
  rolebindings.rbac.authorization.k8s.io          []                 []              [create delete deletecollection get list patch update watch]
  roles.rbac.authorization.k8s.io                 []                 []              [create delete deletecollection get list patch update watch]
  configmaps                                      []                 []              [create delete deletecollection patch update get list watch]
  endpoints                                       []                 []              [create delete deletecollection patch update get list watch]
  persistentvolumeclaims                          []                 []              [create delete deletecollection patch update get list watch]
  pods                                            []                 []              [create delete deletecollection patch update get list watch]
  replicationcontrollers/scale                    []                 []              [create delete deletecollection patch update get list watch]
  replicationcontrollers                          []                 []              [create delete deletecollection patch update get list watch]
  services                                        []                 []              [create delete deletecollection patch update get list watch]
  daemonsets.apps                                 []                 []              [create delete deletecollection patch update get list watch]
  deployments.apps/scale                          []                 []              [create delete deletecollection patch update get list watch]
  deployments.apps                                []                 []              [create delete deletecollection patch update get list watch]
  replicasets.apps/scale                          []                 []              [create delete deletecollection patch update get list watch]
  replicasets.apps                                []                 []              [create delete deletecollection patch update get list watch]
  statefulsets.apps/scale                         []                 []              [create delete deletecollection patch update get list watch]
  statefulsets.apps                               []                 []              [create delete deletecollection patch update get list watch]
  horizontalpodautoscalers.autoscaling            []                 []              [create delete deletecollection patch update get list watch]
  cronjobs.batch                                  []                 []              [create delete deletecollection patch update get list watch]
  jobs.batch                                      []                 []              [create delete deletecollection patch update get list watch]
  daemonsets.extensions                           []                 []              [create delete deletecollection patch update get list watch]
  deployments.extensions/scale                    []                 []              [create delete deletecollection patch update get list watch]
  deployments.extensions                          []                 []              [create delete deletecollection patch update get list watch]
  ingresses.extensions                            []                 []              [create delete deletecollection patch update get list watch]
  networkpolicies.extensions                      []                 []              [create delete deletecollection patch update get list watch]
  replicasets.extensions/scale                    []                 []              [create delete deletecollection patch update get list watch]
  replicasets.extensions                          []                 []              [create delete deletecollection patch update get list watch]
  replicationcontrollers.extensions/scale         []                 []              [create delete deletecollection patch update get list watch]
  ingresses.networking.k8s.io                     []                 []              [create delete deletecollection patch update get list watch]
  networkpolicies.networking.k8s.io               []                 []              [create delete deletecollection patch update get list watch]
  poddisruptionbudgets.policy                     []                 []              [create delete deletecollection patch update get list watch]
  deployments.apps/rollback                       []                 []              [create delete deletecollection patch update]
  deployments.extensions/rollback                 []                 []              [create delete deletecollection patch update]
  localsubjectaccessreviews.authorization.k8s.io  []                 []              [create]
  pods/attach                                     []                 []              [get list watch create delete deletecollection patch update]
  pods/exec                                       []                 []              [get list watch create delete deletecollection patch update]
  pods/portforward                                []                 []              [get list watch create delete deletecollection patch update]
  pods/proxy                                      []                 []              [get list watch create delete deletecollection patch update]
  secrets                                         []                 []              [get list watch create delete deletecollection patch update]
  services/proxy                                  []                 []              [get list watch create delete deletecollection patch update]
  bindings                                        []                 []              [get list watch]
  events                                          []                 []              [get list watch]
  limitranges                                     []                 []              [get list watch]
  namespaces/status                               []                 []              [get list watch]
  namespaces                                      []                 []              [get list watch]
  persistentvolumeclaims/status                   []                 []              [get list watch]
  pods/log                                        []                 []              [get list watch]
  pods/status                                     []                 []              [get list watch]
  replicationcontrollers/status                   []                 []              [get list watch]
  resourcequotas/status                           []                 []              [get list watch]
  resourcequotas                                  []                 []              [get list watch]
  services/status                                 []                 []              [get list watch]
  controllerrevisions.apps                        []                 []              [get list watch]
  daemonsets.apps/status                          []                 []              [get list watch]
  deployments.apps/status                         []                 []              [get list watch]
  replicasets.apps/status                         []                 []              [get list watch]
  statefulsets.apps/status                        []                 []              [get list watch]
  horizontalpodautoscalers.autoscaling/status     []                 []              [get list watch]
  cronjobs.batch/status                           []                 []              [get list watch]
  jobs.batch/status                               []                 []              [get list watch]
  daemonsets.extensions/status                    []                 []              [get list watch]
  deployments.extensions/status                   []                 []              [get list watch]
  ingresses.extensions/status                     []                 []              [get list watch]
  replicasets.extensions/status                   []                 []              [get list watch]
  ingresses.networking.k8s.io/status              []                 []              [get list watch]
  poddisruptionbudgets.policy/status              []                 []              [get list watch]
  serviceaccounts                                 []                 []              [impersonate create delete deletecollection patch update get list watch]

You can reproduce this issue using the script below. You only need kind and kubectl in your path. Just write it to a file, make it executable, and run it. It is idempotent, so you can change any step and it will simply re-run whatever is required.

#!/bin/bash

set -eux

if [ -z "$(kind get clusters)" ]; then
        kind create cluster --wait 1m
fi

# create a rolebinding to the default service account in the default namespace so our test pod
# has the admin role.
if [ -z "$(kubectl get rolebinding namespace-admin)" ]; then
        kubectl create rolebinding namespace-admin --serviceaccount default:default --clusterrole admin
fi

# start an ubuntu container that we can use as a base.
if [ -z "$(kubectl get pod test-container)" ]; then
        kubectl run test-container --image docker.io/library/ubuntu:20.04 -- sleep infinity
fi

kubectl wait --for condition=Ready pod test-container

# exec inside our test container and install kubectl, helm and vcluster
kubectl exec -i test-container -- sh <<EOF
        set -eux

        apt-get update -qq
        apt-get install -qqy wget

        if [ ! -f /usr/local/bin/kubectl ]; then
                wget -qO /usr/local/bin/kubectl https://dl.k8s.io/release/v1.21.0/bin/linux/amd64/kubectl
                chmod +x /usr/local/bin/kubectl
        fi

        if [ ! -f /usr/local/bin/vcluster ]; then
                wget -qO- "https://github.com/loft-sh/vcluster/releases/latest" | sed -nE 's!.*"([^"]*vcluster-linux-amd64)".*!https://github.com\1!p' | xargs -n 1 wget -qO /usr/local/bin/vcluster
                chmod +x /usr/local/bin/vcluster
        fi

        if [ ! -f /usr/local/bin/helm ]; then
                wget -qO- https://get.helm.sh/helm-v3.6.1-linux-amd64.tar.gz | gzip -d | tar -O -x -f - linux-amd64/helm > /usr/local/bin/helm
                chmod +x /usr/local/bin/helm
        fi

        # test if the container can execute kubectl commands against the kind cluster
        kubectl get pod

        # print the privileges of the service account for debugging
        kubectl auth can-i --list

        # finally try to create a vcluster, this will fail!
        vcluster create my-failing-cluster
EOF
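
Saved to a file, say repro.sh (the name is arbitrary), running it is just:

chmod +x repro.sh
./repro.sh

Afterwards, kind delete cluster tears the whole test environment down again.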

closed time in 3 months

rio

issue comment loft-sh/vcluster

Default kind admin clusterrole is not enough to deploy a vcluster

@FabianKramm, I can confirm it works now. Thanks for the fast fix! Loved the show on rawkode btw.

rio

comment created time in 3 months

issue comment loft-sh/vcluster

Default kind admin clusterrole is not enough to deploy a vcluster

Ok, so I got a chance to spend some time making this work with the test script I posted. Passing some extra helm values lets vcluster deploy:

serviceAccount:
  create: false
  name: default

rbac:
  role:
    create: false
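
With those values written to a file, say values.yaml (the name is an assumption), the same helm command from the log above then goes through:

helm upgrade vcluster-1 vcluster --repo https://charts.loft.sh --version 0.3.0 --namespace vcluster --install --values values.yaml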

Now I'm not sure what the right course of action is here. I think kind's admin clusterrole is the default one shipped by kubeadm. Since it is (I think) the maximum set of privileges you can grant a serviceaccount within a namespace, we should adjust the role that's created for vcluster accordingly.

rio

comment created time in 3 months

issue opened loft-sh/vcluster

Default kind admin clusterrole is not enough to deploy a vcluster

created time in 3 months

issue comment loft-sh/vcluster

feat: vcluster check (pre-flight checklist for installation)

For completeness, this is the admin clusterrole in kind v0.11.1:

Name:         admin
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
  Resources                                       Non-Resource URLs  Resource Names  Verbs
  ---------                                       -----------------  --------------  -----
  rolebindings.rbac.authorization.k8s.io          []                 []              [create delete deletecollection get list patch update watch]
  roles.rbac.authorization.k8s.io                 []                 []              [create delete deletecollection get list patch update watch]
  configmaps                                      []                 []              [create delete deletecollection patch update get list watch]
  endpoints                                       []                 []              [create delete deletecollection patch update get list watch]
  persistentvolumeclaims                          []                 []              [create delete deletecollection patch update get list watch]
  pods                                            []                 []              [create delete deletecollection patch update get list watch]
  replicationcontrollers/scale                    []                 []              [create delete deletecollection patch update get list watch]
  replicationcontrollers                          []                 []              [create delete deletecollection patch update get list watch]
  services                                        []                 []              [create delete deletecollection patch update get list watch]
  daemonsets.apps                                 []                 []              [create delete deletecollection patch update get list watch]
  deployments.apps/scale                          []                 []              [create delete deletecollection patch update get list watch]
  deployments.apps                                []                 []              [create delete deletecollection patch update get list watch]
  replicasets.apps/scale                          []                 []              [create delete deletecollection patch update get list watch]
  replicasets.apps                                []                 []              [create delete deletecollection patch update get list watch]
  statefulsets.apps/scale                         []                 []              [create delete deletecollection patch update get list watch]
  statefulsets.apps                               []                 []              [create delete deletecollection patch update get list watch]
  horizontalpodautoscalers.autoscaling            []                 []              [create delete deletecollection patch update get list watch]
  cronjobs.batch                                  []                 []              [create delete deletecollection patch update get list watch]
  jobs.batch                                      []                 []              [create delete deletecollection patch update get list watch]
  daemonsets.extensions                           []                 []              [create delete deletecollection patch update get list watch]
  deployments.extensions/scale                    []                 []              [create delete deletecollection patch update get list watch]
  deployments.extensions                          []                 []              [create delete deletecollection patch update get list watch]
  ingresses.extensions                            []                 []              [create delete deletecollection patch update get list watch]
  networkpolicies.extensions                      []                 []              [create delete deletecollection patch update get list watch]
  replicasets.extensions/scale                    []                 []              [create delete deletecollection patch update get list watch]
  replicasets.extensions                          []                 []              [create delete deletecollection patch update get list watch]
  replicationcontrollers.extensions/scale         []                 []              [create delete deletecollection patch update get list watch]
  ingresses.networking.k8s.io                     []                 []              [create delete deletecollection patch update get list watch]
  networkpolicies.networking.k8s.io               []                 []              [create delete deletecollection patch update get list watch]
  poddisruptionbudgets.policy                     []                 []              [create delete deletecollection patch update get list watch]
  deployments.apps/rollback                       []                 []              [create delete deletecollection patch update]
  deployments.extensions/rollback                 []                 []              [create delete deletecollection patch update]
  localsubjectaccessreviews.authorization.k8s.io  []                 []              [create]
  pods/attach                                     []                 []              [get list watch create delete deletecollection patch update]
  pods/exec                                       []                 []              [get list watch create delete deletecollection patch update]
  pods/portforward                                []                 []              [get list watch create delete deletecollection patch update]
  pods/proxy                                      []                 []              [get list watch create delete deletecollection patch update]
  secrets                                         []                 []              [get list watch create delete deletecollection patch update]
  services/proxy                                  []                 []              [get list watch create delete deletecollection patch update]
  bindings                                        []                 []              [get list watch]
  events                                          []                 []              [get list watch]
  limitranges                                     []                 []              [get list watch]
  namespaces/status                               []                 []              [get list watch]
  namespaces                                      []                 []              [get list watch]
  persistentvolumeclaims/status                   []                 []              [get list watch]
  pods/log                                        []                 []              [get list watch]
  pods/status                                     []                 []              [get list watch]
  replicationcontrollers/status                   []                 []              [get list watch]
  resourcequotas/status                           []                 []              [get list watch]
  resourcequotas                                  []                 []              [get list watch]
  services/status                                 []                 []              [get list watch]
  controllerrevisions.apps                        []                 []              [get list watch]
  daemonsets.apps/status                          []                 []              [get list watch]
  deployments.apps/status                         []                 []              [get list watch]
  replicasets.apps/status                         []                 []              [get list watch]
  statefulsets.apps/status                        []                 []              [get list watch]
  horizontalpodautoscalers.autoscaling/status     []                 []              [get list watch]
  cronjobs.batch/status                           []                 []              [get list watch]
  jobs.batch/status                               []                 []              [get list watch]
  daemonsets.extensions/status                    []                 []              [get list watch]
  deployments.extensions/status                   []                 []              [get list watch]
  ingresses.extensions/status                     []                 []              [get list watch]
  replicasets.extensions/status                   []                 []              [get list watch]
  ingresses.networking.k8s.io/status              []                 []              [get list watch]
  poddisruptionbudgets.policy/status              []                 []              [get list watch]
  serviceaccounts                                 []                 []              [impersonate create delete deletecollection patch update get list watch]

kostis-codefresh

comment created time in 3 months

issue comment loft-sh/vcluster

feat: vcluster check (pre-flight checklist for installation)

This would definitely help when deploying with access to only the admin role in a namespace. This is what currently happens:

~ $ vcluster create vcluster-1
[info]   execute command: helm upgrade vcluster-1 vcluster --repo https://charts.loft.sh --version 0.3.0 --kubeconfig /tmp/247832214 --namespace vcluster --install --repository-config='' --values /tmp/285100285
[fatal]  error executing helm upgrade vcluster-1 vcluster --repo https://charts.loft.sh --version 0.3.0 --kubeconfig /tmp/247832214 --namespace vcluster --install --repository-config='' --values /tmp/285100285: Error: UPGRADE FAILED: failed to create resource: roles.rbac.authorization.k8s.io "vcluster-1" is forbidden: user "system:serviceaccount:vcluster:default" (groups=["system:serviceaccounts" "system:serviceaccounts:vcluster" "system:authenticated"]) is attempting to grant RBAC permissions not currently held:
{APIGroups:[""], Resources:["configmaps"], Verbs:["*"]}
{APIGroups:[""], Resources:["endpoints"], Verbs:["*"]}
{APIGroups:[""], Resources:["events"], Verbs:["*"]}
{APIGroups:[""], Resources:["persistentvolumeclaims"], Verbs:["*"]}
{APIGroups:[""], Resources:["pods"], Verbs:["*"]}
{APIGroups:[""], Resources:["pods/attach"], Verbs:["*"]}
{APIGroups:[""], Resources:["pods/exec"], Verbs:["*"]}
{APIGroups:[""], Resources:["pods/log"], Verbs:["*"]}
{APIGroups:[""], Resources:["pods/portforward"], Verbs:["*"]}
{APIGroups:[""], Resources:["pods/proxy"], Verbs:["*"]}
{APIGroups:[""], Resources:["secrets"], Verbs:["*"]}
{APIGroups:[""], Resources:["services"], Verbs:["*"]}
{APIGroups:[""], Resources:["services/proxy"], Verbs:["*"]}
{APIGroups:["networking.k8s.io"], Resources:["ingresses"], Verbs:["*"]}

kostis-codefresh

comment created time in 3 months