Wang Zhen (lazybetrayer) · China · Bio: "记得当时那个小" ("Remember that little one from back then")

lazybetrayer/Cosmos 0

A star rating control for iOS/tvOS written in Swift

lazybetrayer/go-control-plane 0

Go implementation of data-plane-api

lazybetrayer/kubespray 0

Deploy a Production Ready Kubernetes Cluster

lazybetrayer/MJRefresh 0

An easy way to use pull-to-refresh.

lazybetrayer/MMPopupView 0

Pop-up based view(e.g. alert sheet), can easily customize.

lazybetrayer/MWPhotoBrowser 0

A simple iOS photo and video browser with grid view, captions and selections.

Pull request review comment kubernetes-sigs/kubespray

Fix k8s-certs-renew cp path

 echo "## Restarting control plane pods managed by kubeadm ##" {% endif %}  echo "## Updating /root/.kube/config ##"-/usr/bin/cp {{ kube_config_dir }}/admin.conf /root/.kube/config+/bin/cp {{ kube_config_dir }}/admin.conf /root/.kube/config

done

lazybetrayer

comment created time in a day

PullRequestReviewEvent

push event lazybetrayer/kubespray

Wang Zhen

commit sha e6326ae25d3e93f3c0cb499484036e8c0acd7bf8

Fix k8s-certs-renew cp path Signed-off-by: Wang Zhen <lazybetrayer@gmail.com>

view details

push time in a day

issue comment kubernetes-sigs/kubespray

playbook fails when ShutdownGracePeriod and ShutdownGracePeriodCriticalPods are set

Set kubelet_shutdown_grace_period_critical_pods to 180s.

It still fails, with: ShutdownGracePeriod (150s) needs to be greater than ShutdownGracePeriodCriticalPods (180s)
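
For reference, a minimal sketch of a combination that satisfies the check, keeping the grace period strictly larger than the critical-pods value (illustrative values only, not a recommendation; these also compare correctly as plain strings, which appears to matter here, see the issue report further down):

kubelet_shutdown_grace_period: 300s
kubelet_shutdown_grace_period_critical_pods: 180s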

lazybetrayer

comment created time in 2 days

PR opened kubernetes-sigs/kubespray

Fix k8s-certs-renew cp path


What type of PR is this?

/kind bug

What this PR does / why we need it:

Which issue(s) this PR fixes:

Fixes #7990

Special notes for your reviewer:

Does this PR introduce a user-facing change?:

NONE
+1 -1

0 comments

1 changed file

pr created time in 2 days

push event lazybetrayer/kubespray

Florian Ruynat

commit sha b59035df06673ab0e0a0c6d425bbb99b5f6a61ff

change nginx default HTTPS protocol from "SSLv2" to "TLSv1.2 TLSv1.3" (#7144)

view details

Wang Zhen

commit sha 387df0ee1f4f1d5507d4dd4360935c06ba6e0fc1

Remove unnecessary condition check when updating server field in kube-proxy kubeconfig (#7145)

view details

Sergey

commit sha 02213d6e07e4d8c240a46ba2264e5630e4990fb1

change nodeSelector label from deprecated beta.kubernetes.io/os and arch to kubernetes.io prefix (#7138)

view details

Etienne Champetier

commit sha 8331939aed2cc5ca6332933f1c2f450ad84872fd

preinstall: check etcd_deployment_type (#7149) Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>

view details

Florian Ruynat

commit sha 09fa99fdc6d61ff8cae29d2becd05f02c6baaa14

Update hashes and set default version to 1.19.7 (#7150)

view details

Etienne Champetier

commit sha 9c5c1a09a10f495ed2bd4117d0663a6d9ceca975

test-infra: update CentOS images (#7134) Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>

view details

Etienne Champetier

commit sha 8c1821228df4598d139aa4f9729799291350a470

preinstall: fixup etcd_deployment_type check (#7152) fixes 8331939aed2cc5ca6332933f1c2f450ad84872fd Thanks to Tomas Vanderka / karlism / LuckySB Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>

view details

Florian Ruynat

commit sha 81b4ffa6b4ab268999031e9a61f911f7d28ff1eb

Add Fedora 33 CI, remove Fedora 31 (#7072)

view details

Etienne Champetier

commit sha 55b03a41b2979de3a1db40444abcd775ecd7f7a1

containerd-common,containerd,docker: remove ubuntu arch specific vars By removing ancient version we don't need arch specific vars Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>

view details

Etienne Champetier

commit sha cf1d9f5612933072299c6b3d5427622e25f18853

preinstall: remove old Fedora task Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>

view details

Etienne Champetier

commit sha 667a6981ea47c9e9742f21e24114253fd02a5084

preinstall: remove credentials folder move This was introduced in 3004791c6469181a83d80971110813a3cd3ce658, so since 2018 everyone should be upgraded ;) Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>

view details

Etienne Champetier

commit sha 09e34d29cd4566bfb505217271693fcb726bb9eb

containerd: remove docker_yum_conf / yum_conf leftover from 1945499e2f3c2b8f9e555405eac7896fd24d7e07 Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>

view details

Etienne Champetier

commit sha b2f6ed7deed7fcb8559d747f18df135974a86aa9

docker: remove obsoletes=0 in yum.conf This was introduced in ef7f5edbb3643dd23009c35e78e6efaae77f1f08 obsoletes=0 is not present in the official repo config https://download.docker.com/linux/centos/docker-ce.repo so it might not be needed for some time Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>

view details

Etienne Champetier

commit sha b2f3ab77cdf5b6eb59d5d533b21ac14120ca4c83

docker: remove some old debug code Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>

view details

Etienne Champetier

commit sha 16a34548ea7c5bffbc8ff42282b706bf003f3372

docker: remove checks for docker 1.12 Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>

view details

Etienne Champetier

commit sha de6c71a426e3dd42d9da35b87ea2f0035c8b519e

docker: remove dockerproject repo reference Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>

view details

Etienne Champetier

commit sha 7433b70d9579a850561aad4a2b464ab4a1494a9e

docker: remove kernel check Only CentOS 7 uses Linux 3.10, all other OSs have more recent kernels Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>

view details

Etienne Champetier

commit sha 1baee488ab806bcc375859618c327574e71f0ed0

containerd: remove duplicate package pining task Leave it with the install instead of the repo config Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>

view details

Etienne Champetier

commit sha 82af8e455e87db1f56978430d7613f709ff7c41e

docker: remove old versions Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>

view details

Florian Ruynat

commit sha a923f4e7c0692229c442b07a531bfb5fc41a23f9

Update kube_version_min_required and cleanup hashes for release (#7160)

view details

push time in 2 days

issue comment kubernetes-sigs/kubespray

/usr/local/bin/k8s-certs-renew.sh: line 13: /usr/bin/cp: No such file or directory

I think we can use /bin/cp according to https://refspecs.linuxfoundation.org/FHS_3.0/fhs-3.0.html#binEssentialUserCommandBinaries

lazybetrayer

comment created time in 2 days

issue opened kubernetes-sigs/kubespray

playbook fails when ShutdownGracePeriod and ShutdownGracePeriodCriticalPods are set

Environment:

  • OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"):

Linux 3.10.0-514.26.2.el7.x86_64 x86_64
NAME="Ubuntu"
VERSION="20.04.2 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.2 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal

  • Version of Ansible (ansible --version):

ansible 2.10.11
  config file = /workspace/ansible.cfg
  configured module search path = ['/workspace/library', '/workspace/kubespray/library']
  ansible python module location = /usr/local/lib/python3.8/dist-packages/ansible
  executable location = /usr/local/bin/ansible
  python version = 3.8.5 (default, Jan 27 2021, 15:41:15) [GCC 9.3.0]

  • Version of Python (python --version): Python 3.8.5

Kubespray version (commit) (git rev-parse --short HEAD): v2.17.0

Full inventory with variables (ansible -i inventory/sample/inventory.ini all -m debug -a "var=hostvars[inventory_hostname]"):

kubelet_shutdown_grace_period: 150s
kubelet_shutdown_grace_period_critical_pods: 30s

Output of ansible run:

fatal: [kube-node-10-200-17-47]: FAILED! => {
    "assertion": "kubelet_shutdown_grace_period > kubelet_shutdown_grace_period_critical_pods",
    "changed": false,
    "evaluated_to": false,
    "msg": "ShutdownGracePeriod (150s) needs to be greater than ShutdownGracePeriodCriticalPods (30s) in order to give normal pods time to be evacuated, please see https://kubernetes.io/blog/2021/04/21/grace
ful-node-shutdown-beta/ for details"
}
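
That the assertion also fires for 150s vs. 30s (where 150 > 30 numerically) suggests the values are compared as raw duration strings, and "150s" sorts before "30s" lexicographically. A minimal sketch of an assert that compares the numeric part instead, assuming both values always carry a plain trailing "s" (hypothetical, not the actual kubespray task):

- name: Verify graceful node shutdown settings (illustrative sketch)
  assert:
    that:
      # strip the assumed trailing "s" and compare as integers rather than strings
      - "kubelet_shutdown_grace_period | regex_replace('s$', '') | int > kubelet_shutdown_grace_period_critical_pods | regex_replace('s$', '') | int"
    fail_msg: "ShutdownGracePeriod must be greater than ShutdownGracePeriodCriticalPods"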

created time in 2 days

issue opened kubernetes-sigs/kubespray

/usr/local/bin/k8s-certs-renew.sh: line 13: /usr/bin/cp: No such file or directory

Environment:

  • OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"):

Linux 5.4.0-84-generic x86_64
NAME="Ubuntu"
VERSION="20.04.2 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.2 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal

Kubespray version (commit) (git rev-parse --short HEAD): v2.17.0

Anything else do we need to know: On Ubuntu, cp is located at /bin/cp.

created time in 2 days

started containers/skopeo

started time in 2 days

started virtual-kubelet/virtual-kubelet

started time in 10 days

issue opened opsgenie/kubernetes-event-exporter

elasticsearch response status code ignored

The status code of the Elasticsearch response is ignored here, which hides some errors returned by Elasticsearch: https://github.com/opsgenie/kubernetes-event-exporter/blob/7bb80521d1a65733473090cefc85ec2aabdaef17/pkg/sinks/elasticsearch.go#L174-L181
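
For illustration, a minimal self-contained sketch of the usual pattern with the official go-elasticsearch client (assuming that is the client in use; the index name and document are made up, and this is not the exporter's actual sink code): check the response status in addition to the transport error, since a 4xx/5xx from Elasticsearch does not surface as an error from Do.

package main

import (
	"context"
	"fmt"
	"log"
	"strings"

	"github.com/elastic/go-elasticsearch/v7"
	"github.com/elastic/go-elasticsearch/v7/esapi"
)

func main() {
	// Uses ELASTICSEARCH_URL if set, otherwise http://localhost:9200.
	es, err := elasticsearch.NewDefaultClient()
	if err != nil {
		log.Fatal(err)
	}

	req := esapi.IndexRequest{
		Index: "events",                               // hypothetical index name
		Body:  strings.NewReader(`{"reason":"demo"}`), // hypothetical document
	}
	res, err := req.Do(context.Background(), es)
	if err != nil {
		log.Fatal(err) // transport-level failure only
	}
	defer res.Body.Close()

	// This is the check the issue is about: without it, 4xx/5xx responses are silently dropped.
	if res.IsError() {
		log.Fatalf("elasticsearch returned %s", res.Status())
	}
	fmt.Println("indexed OK")
}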

created time in 16 days

issue comment kubernetes/ingress-nginx

validating webhook should ignore ingresses with a different ingressclass

Ok thanks. So just FYI, running 2 controllers in one cluster is a long-supported and functioning feature, so this is not a bug. Both controllers need to be configured with a different ingressClass and the ingress objects need to be configured with an ingressClassName. Thanks, Long

Let me clarify: we run 2 controllers in one cluster successfully. Both controllers are configured with different ingressClasses and the ingress objects are configured with the correct ingressClassName. There are no problems with most ingress resources.

ingress classes:

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: external
spec:
  controller: k8s.io/ingress-nginx-external
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: internal
spec:
  controller: k8s.io/ingress-nginx-internal

The controllers are running with --controller-class=k8s.io/ingress-nginx-internal and --controller-class=k8s.io/ingress-nginx-external respectively.

If we create the resources below, test2 will be rejected by the validating webhook: Error from server (BadRequest): error when creating "ing.yml": admission webhook "validate.nginx.ingress.kubernetes.io" denied the request: host "www.example.com" and path "/" is already defined in ingress default/test1. This used to work before v1.0.0.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test1
spec:
  ingressClassName: external
  rules:
  - host: www.example.com
    http:
      paths:
      - backend:
          service:
            name: test
            port:
              number: 1111
        path: /
        pathType: Prefix
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test2
spec:
  ingressClassName: internal
  rules:
  - host: www.example.com
    http:
      paths:
      - backend:
          service:
            name: test
            port:
              number: 1111
        path: /
        pathType: Prefix
lazybetrayer

comment created time in a month

issue comment kubernetes/ingress-nginx

validating webhook should ignore ingresses with a different ingressclass

Thanks. To me one aspect is still unclear. I was asking about the ingress.spec host value, which is an FQDN, like api1.mydomain.com with path /. Can you elaborate why the internal and external ingress will both configure the same api1.mydomain.com and /? Thanks, Long

On Fri, 27 Aug 2021, 4:17 AM Nico Engelen wrote: One use case, as described at https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/#multiple-ingress-nginx-controllers, would be different ingress controllers for internal vs. external traffic. We do this, for instance, to enforce mTLS auth for external traffic while letting internal traffic talk to the ingress without auth, while serving up the exact same content at the exact same path. I'm sure other use cases can be thought of. This has now stopped working for us and forced me to pin the chart version to 3.33.0 (the last version working for us; I haven't tested anything after that), and I could not reinstate the previous behaviour following any documentation I could find (I tried https://kubernetes.github.io/ingress-nginx/#i-have-more-than-one-controller-running-in-my-cluster-and-i-want-to-use-the-new-spec but it is quite hard to understand in the first place). Any advice on how to keep using multiple ingress controllers for the same host+path combination would be highly appreciated. I would also argue that this should remain a bug, as previously working behaviour has been broken. Thanks, Nico /kind bug

In my company, the same domain can be mapped to different IPs, one for internal access and one for external access. To support this, we deploy two ingress controllers. Sometimes we create 2 ingress resources with identical rules but different ingress classes, since they are used for different IPs.

lazybetrayer

comment created time in a month

issue opened kubernetes/ingress-nginx

validating webhook should ignore ingresses with a different ingressclass

NGINX Ingress controller version: v1.0.0

Kubernetes version (use kubectl version): v1.20.9

Environment: Bare Metal

What happened:

Before v1.0.0, there was a check to skip validating ingresses with a different ingressclass: https://github.com/kubernetes/ingress-nginx/blob/f3c50698d98299b1a61f83cb6c4bb7de0b71fb4b/internal/ingress/controller/controller.go#L224

In v1.0.0, this code was removed. With multiple ingress controllers, both will validate the same ingress. If we create two ingresses with the same host and path but different ingressclasses, the second one will be rejected.

What you expected to happen:

The webhook should skip validating ingresses with a different ingressclass.

/kind bug

created time in a month

started kedacore/keda

started time in a month

started pi-hole/pi-hole

started time in a month

started AdguardTeam/AdGuardHome

started time in a month

started togettoyou/super-signature

started time in a month

fork lazybetrayer/zadig

Zadig is a cloud native, distributed, developer-oriented continuous delivery product.

https://koderover.com

fork in 2 months

push event lazybetrayer/dotfiles

Wang Zhen

commit sha 920ad41da8404f70b9333d73327c928bb390a679

update

view details

push time in 2 months

started containerd/containerd

started time in 3 months

started containerd/nerdctl

started time in 3 months