If you are wondering where the data of this site comes from, please visit https://api.github.com/users/pacoxu/events. GitMemory does not store any data; it only uses NGINX to cache data for a period of time. The idea behind GitMemory is simply to give users a better reading experience.
Paco Xu pacoxu @DaoCloud Co., Ltd., Shanghai, China https://pacoxu.wordpress.com/ Kubernetes Developer, Soccer Fan & PUBG Fan 🌟CNCF open-source engineers🌟 hiring is ongoing; 📧 resumes ➡️ paco.xu@daocloud.io

GeraldXv/igroo 1

studio

pacoxu/calculators 1

Tieba guess calculator

pacoxu/django-xadmin 1

Drop-in replacement for Django admin that comes with lots of goodies, fully extensible with plugin support, and a pretty UI based on Twitter Bootstrap.

pacoxu/500LineorLess_CN 0

Chinese translation project for "500 Lines or Less".

pacoxu/arpscan 0

Docker container for arp-scan (Detect Duplicate IP Addresses)

pacoxu/azure-docs 0

Open source documentation of Microsoft Azure

pacoxu/caddyfile-parser 0

Caddyfile Syntax https://caddyserver.com/docs/caddyfile

Pull request review comment kubernetes/enhancements

KEP 2527: Clarify meaning of `status`

# KEP-2527: Clarify if/how controllers can use status to track non-observable state

<!-- toc -->
- [Release Signoff Checklist](#release-signoff-checklist)
- [Summary](#summary)
- [Motivation](#motivation)
  - [Goals](#goals)
  - [Non-Goals](#non-goals)
- [Proposal](#proposal)
  - [User Stories (Optional)](#user-stories-optional)
    - [Story 1](#story-1)
    - [Story 2](#story-2)
  - [Notes/Constraints/Caveats (Optional)](#notesconstraintscaveats-optional)
  - [Risks and Mitigations](#risks-and-mitigations)
- [Design Details](#design-details)
  - [Test Plan](#test-plan)
  - [Graduation Criteria](#graduation-criteria)
  - [Upgrade / Downgrade Strategy](#upgrade--downgrade-strategy)
  - [Version Skew Strategy](#version-skew-strategy)
- [Production Readiness Review Questionnaire](#production-readiness-review-questionnaire)
  - [Feature Enablement and Rollback](#feature-enablement-and-rollback)
  - [Rollout, Upgrade and Rollback Planning](#rollout-upgrade-and-rollback-planning)
  - [Monitoring Requirements](#monitoring-requirements)
  - [Dependencies](#dependencies)
  - [Scalability](#scalability)
  - [Troubleshooting](#troubleshooting)
- [Implementation History](#implementation-history)
- [Drawbacks](#drawbacks)
- [Alternatives](#alternatives)
- [Infrastructure Needed (Optional)](#infrastructure-needed-optional)
<!-- /toc -->

## Release Signoff Checklist

Items marked with (R) are required *prior to targeting to a milestone / release*.

- [ ] (R) Enhancement issue in release milestone, which links to KEP dir in [kubernetes/enhancements] (not the initial KEP PR)
- [ ] (R) KEP approvers have approved the KEP status as `implementable`
- [ ] (R) Design details are appropriately documented
- [ ] (R) Test plan is in place, giving consideration to SIG Architecture and SIG Testing input
- [ ] (R) Graduation criteria is in place
- [ ] (R) Production readiness review completed
- [ ] (R) Production readiness review approved
- [ ] "Implementation History" section is up-to-date for milestone
- [ ] User-facing documentation has been created in [kubernetes/website], for publication to [kubernetes.io]
- [ ] Supporting documentation—e.g., additional design documents, links to mailing list discussions/SIG meetings, relevant PRs/issues, release notes

[kubernetes.io]: https://kubernetes.io/
[kubernetes/enhancements]: https://git.k8s.io/enhancements
[kubernetes/kubernetes]: https://git.k8s.io/kubernetes
[kubernetes/website]: https://git.k8s.io/website

## Summary

Since basically the beginning of Kubernetes, we've had this sort of "litmus
test" for status fields: "If I erased the whole status struct, would everything
in it be reconstructed (or at least be reconstructible) from observation?". The
goal was to ensure that the delineation between "what I asked for" and "what it
actually is" was clear and to encourage active reconciliation of state.

Another reason for this split was an idea which, as far as I know, has never
been implemented by anyone: that an object's spec and status blocks could be
stored in different etcd instances and the status could have a TTL.  At this
point in the project, I expect that wiping status out like this would end in
fireworks, and not the fun kind.  Status is, effectively, as durable as spec.

Many of our APIs pass this test (sometimes we fudge it and say yes "in
theory"), but not all of them do.
This KEP proposes to clarify or remove this
guidance, especially as it pertains to state that is not derived from
observation.

One of the emergent uses of the spec/status split is access control.  It is
assumed that, for most resources, users own (can write to) all of spec and
controllers own all of status, and not the reverse.  This allows patterns like
Services which set `spec.type: LoadBalancer`, where the controller writes the
LB's IP address to status, and kube-proxy can trust that IP address (because it
came from a controller, not a user).  Compare that with Services which use
`spec.externalIPs`.  The behavior in kube-proxy is roughly the same, but
because non-trusted users can write to `spec.externalIPs` and that does not
require a trusted controller to ACK, that behavior was declared a CVE.

This KEP further proposes to add guidance for APIs that want to implement an
"allocation" or "binding" pattern which requires trusted ACK.

## Motivation

As an API reviewer, I have seen many different patterns in use.  I have shot
down APIs that run afoul of the rebuild-from-observation test, and forced the
implementations to be more complicated.  I no longer find this to be useful to
our APIs, and in fact I find it to be a detriment.

I suspect that several APIs would have come out differently if not for this.

### Goals

* Clarify or remove the from-observation rule to better match reality.
* Provide guidance for APIs on how to save controller results.

### Non-Goals

* Retroactively apply this pattern to change existing GA APIs.
* Necessarily apply this pattern to change pre-GA APIs (though they may choose
  to follow it).
* Provide arbitrary storage for controller state.
* Provide distinct access control for subsets of spec or status.

## Proposal

This KEP proposes to:

1) Get rid of the [if it got lost](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status)
litmus test for status fields
2) Acknowledge that spec/status are a useful and meaningful split for access control
3) Document one or more patterns for APIs that need a trusted acknowledgement
   of the spec

Depending on feedback, there may be other approaches to solving the same
problems.

### Risks and Mitigations

The current guidance exists for a reason.  It does encourage a sort of
ideological cleanliness.  However, it is not universally adhered to and it
neglects the reality of these request/acknowledge APIs.  On the whole, this KEP
posits that the current guidance is a net negative.

## Design Details

<<[UNRESOLVED ideas to debate]>>

### Option 1: Loosen status

Remove the idea that status fields _must_ be from observation.  Allow controllers
to write values to status that represent allocations or acknowledged requests.
Document that status fields are best when they represent observed state, but do
not _require_ it.

This has [already happened](https://github.com/kubernetes/enhancements/pull/2308/files#r567809465)
at least once.

#### Examples

Some of these are variants of existing APIs and some are hypothetical.

1) User creates a Service and sets `Service.spec.type = LoadBalancer`.  The
cloud controller sets `Service.status.loadBalancer.ingress.ip` in response.  If
the user sets `Service.spec.loadBalancerIP` to a specific value, the cloud
controller will either successfully use that IP and set it as before (ACK),
or it will not set any value in `Service.status.loadBalancer.ingress` (NAK).

2) Given a Pod, user patches `Pod.spec.containers[0].resources.requests[cpu]`
to a new value.  Kubelet sees this as a request and, if viable, sets
`Pod.status.containerStatuses[0].resources.requests[cpu]` to the same value.

3) User creates a `Bucket`, setting `Bucket.spec.instance = artifacts-prod`.
The bucket controller verifies that the namespace is allowed to use the
artifacts-prod bucket and, if so, sets `Bucket.status.instance` to the same
value.

#### Tradeoffs

Pro: Easy and compatible with existing uses.

Con: Dilutes the meaning of status.
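To make the request/ACK flow concrete, here is a minimal Go sketch of Example 3
under this option. The `Bucket` types and the `reconcile` helper are
hypothetical stand-ins for a real API and controller; the point is only that
`status.instance` is trustworthy because the controller, not the user, writes it:

```go
package main

import "fmt"

type BucketSpec struct {
	Instance string // requested by the user (user-owned)
}

type BucketStatus struct {
	Instance string // written only by the trusted controller (ACK)
}

type Bucket struct {
	Namespace string
	Spec      BucketSpec
	Status    BucketStatus
}

// reconcile ACKs the requested instance only if the Bucket's namespace
// is allowed to use it; otherwise it leaves status empty (NAK).
func reconcile(b *Bucket, allowed map[string][]string) {
	for _, ns := range allowed[b.Spec.Instance] {
		if ns == b.Namespace {
			b.Status.Instance = b.Spec.Instance // ACK: consumers may trust this
			return
		}
	}
	b.Status.Instance = "" // NAK: request was not acknowledged
}

func main() {
	allowed := map[string][]string{"artifacts-prod": {"ci"}}
	b := &Bucket{Namespace: "ci", Spec: BucketSpec{Instance: "artifacts-prod"}}
	reconcile(b, allowed)
	fmt.Println(b.Status.Instance) // non-empty only after a controller ACK
}
```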
### Option 2: Add a new top-level stanza to spec/status resources

Keep and strengthen the idea that status fields must be from observation.
Segregate controller-owned fields to a new stanza, parallel to `spec` and
`status`, which can be RBAC'ed explicitly.  For the sake of this doc, let's
call it `control`.

To make this viable, we would need to add `control` as a "standard" subresource
and apply similar rules to `spec` and `status` with regards to writes (can't
write to `control` through the main resource, can't write to `spec` or `status`
through the `control` subresource).  We would also need to add it to CRDs as an
available subresource.

#### Examples

1) User creates a Service and sets `Service.spec.type = LoadBalancer`.  The
cloud controller sets `Service.control.loadBalancer.ingress.ip` in response.  If
the user sets `Service.spec.loadBalancerIP` to a specific value, the cloud
controller will either successfully use that IP and set it as before (ACK),
or it will not set any value in `Service.control.loadBalancer.ingress` (NAK).

2) Given a Pod, user patches `Pod.spec.containers[0].resources.requests[cpu]`
to a new value.  Kubelet sees this as a request and, if viable, sets
`Pod.control.containerStatuses[0].resources.requests[cpu]` to the same value.

3) User creates a `Bucket`, setting `Bucket.spec.instance = artifacts-prod`.
The bucket controller verifies that the namespace is allowed to use the
artifacts-prod bucket and, if so, sets `Bucket.control.instance` to the same
value.

#### Tradeoffs

Pro: Clarifies the meaning of status.

Pro: Possibly clarifies the roles acting on a resource.

Con: Requires a lot of implementation and possibly steps on existing uses of the
field name.

Con: Net-new concept requires new documentation and socialization.

Con: Incompatible with existing uses of status for this.
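As a rough sketch of how the hypothetical `control` stanza could sit alongside
`spec` and `status` (neither the stanza nor its subresource exists today; all
names here are illustrative):

```go
package main

import "fmt"

type ServiceSpec struct {
	Type           string // user-owned, e.g. "LoadBalancer"
	LoadBalancerIP string // user-owned request; may be empty
}

type ServiceControl struct {
	LoadBalancerIngressIP string // controller-owned ACK of the request
}

type ServiceStatus struct{} // under this option, observed state only

type Service struct {
	Spec    ServiceSpec
	Control ServiceControl // writable only via the `control` subresource
	Status  ServiceStatus
}

func main() {
	svc := Service{Spec: ServiceSpec{Type: "LoadBalancer", LoadBalancerIP: "10.0.0.7"}}
	// The cloud controller either copies the requested IP here (ACK)
	// or leaves it empty (NAK); kube-proxy trusts only this stanza.
	svc.Control.LoadBalancerIngressIP = svc.Spec.LoadBalancerIP
	fmt.Println(svc.Control.LoadBalancerIngressIP)
}
```

The point of the separate stanza is that RBAC can grant the controller write
access to `control` while denying it to ordinary users, which is exactly the
trust split that `spec.externalIPs` lacked.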
### Option 3: Sub-divide spec

Keep and strengthen the idea that status fields must be from observation.
Segregate controller-owned fields to a sub-stanza of spec.  Create a new
access-control mechanism (or extend RBAC) to provide field-by-field access.

#### Examples

1) User creates a Service and sets `Service.spec.type = LoadBalancer`.  The
cloud controller sets `Service.spec.control.loadBalancer.ingress.ip` in
response.  If the user sets `Service.spec.loadBalancerIP` to a specific value,
the cloud controller will either successfully use that IP and set it as
before (ACK), or it will not set any value in
`Service.spec.control.loadBalancer.ingress` (NAK).

2) Given a Pod, user patches `Pod.spec.containers[0].resources.requests[cpu]`
to a new value.  Kubelet sees this as a request and, if viable, sets
`Pod.spec.control.containerStatuses[0].resources.requests[cpu]` to the same
value.

3) User creates a `Bucket`, setting `Bucket.spec.instance = artifacts-prod`.
The bucket controller verifies that the namespace is allowed to use the
artifacts-prod bucket and, if so, sets `Bucket.spec.control.instance` to the same
value.

#### Tradeoffs

Pro: Retains purity of status.

Con: Confuses the meaning of spec.

Con: Can collide with existing uses of the field name.

Con: Needs a whole new access model.

Con: Likely needs a new subresource.

#### Notes

This model is included for completeness.  I do not expect ANYONE to endorse it.

### Option 4: Use 2 objects

Keep and strengthen the idea that status fields must be from observation.
Segregate controller-owned fields to a new object, parallel to the user-owned
object.  RBAC the new object.

#### Examples

1) User creates a Service "foo" and sets `Service.spec.type = LoadBalancer`.
The cloud controller creates a new ServiceLoadBalancer object "foo" and sets
`ServiceLoadBalancer.ingress.ip` in response.  If the user sets
`Service.spec.loadBalancerIP` to a specific value, the cloud controller will
either successfully use that IP and set it as before (ACK), or it will
create a ServiceLoadBalancer object with an error status (NAK).

2) Given a pod "foo", user patches
`Pod.spec.containers[0].resources.requests[cpu]` to a new value.  Kubelet sees
this as a request and, if viable, updates the matching PodResources object
"foo", setting `PodResources.requests[cpu]` to the same value.

3) User creates a `Bucket`, setting `Bucket.spec.instance = artifacts-prod`.
The bucket controller verifies that the namespace is allowed to use the
artifacts-prod bucket and, if so, creates a new BucketBinding object which sets
`BucketBinding.instance` to the same value.

#### Tradeoffs

Pro: RBAC is clear.

Pro: Different controllers can be RBAC'ed to different result-kinds.

Con: Proliferation of many small objects, lifecycle management.

Con: Results are not always 1:1 with user objects (e.g. a Service can have N
ports, each needs a NodePort allocation).

Con: More watch streams overall (but less traffic on each).

Pro: Feels natural for things like Buckets.

Con: Feels clumsy for things like Pod resources.

Pro: Third-party controllers can follow the same pattern for non-standard
results.

#### Notes

Even if we ultimately adopt option 1, 2, or 3, option 4 may still be a better
answer, depending on the particular design.
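Looking back at Example 3 of Option 4, here is a minimal sketch of the
two-object pattern, using a hypothetical `BucketBinding` result kind; a NAK
simply means no binding object is created:

```go
package main

import "fmt"

type Bucket struct {
	Name     string
	Instance string // user-requested; lives in spec in a real API
}

// BucketBinding is a separate, controller-owned object; RBAC can allow
// only the bucket controller to create it.
type BucketBinding struct {
	Name     string // matches the Bucket it binds
	Instance string // the acknowledged instance
}

// bind creates the result object only when the request is permitted.
func bind(b Bucket, allowed bool) *BucketBinding {
	if !allowed {
		return nil // NAK: no binding object exists
	}
	return &BucketBinding{Name: b.Name, Instance: b.Instance} // ACK
}

func main() {
	b := Bucket{Name: "foo", Instance: "artifacts-prod"}
	if bb := bind(b, true); bb != nil {
		fmt.Printf("bound %s -> %s\n", bb.Name, bb.Instance)
	}
}
```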

I guess I am not arguing NOT to use conditions when that makes sense, just that the use of status (proper) should also be allowed for common-case needs.

thockin

comment created time in 3 minutes

issue comment kubernetes/enhancements

ReadWriteOncePod PersistentVolume Access Mode

Hi @supriya-premkumar, sorry for the churn here. I re-added the third PR; SIG Scheduling recommends this make it in before alpha. It should be good to go (once the base PR is merged).

chrishenzie

comment created time in 15 minutes

pull request comment kubernetes/enhancements

KEP-2485: Scheduler changes for beta graduation criteria

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: chrishenzie (author self-approved). To complete the pull request process, please assign saad-ali after the PR has been reviewed. You can assign the PR to them by writing /assign @saad-ali in a comment when ready.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment. Approvers can cancel approval by writing /approve cancel in a comment.

chrishenzie

comment created time in 20 minutes

PR opened kubernetes/enhancements

KEP-2485: Scheduler changes for beta graduation criteria
  • One-line PR description: Scheduler changes for beta graduation criteria, remove kube-controller-manager from feature gate consumers
  • Issue link: https://github.com/kubernetes/enhancements/issues/2485
  • Other comments: Included is a small change to remove the kube-controller-manager from the feature gate consumer list; it is not required for adding the feature

/assign @alculquicondor /assign @ahg-g

+10 -11

0 comment

2 changed files

pr created time in 20 minutes

created repository dlorenc/sbom-oci

created time in 23 minutes

started opencontainers/runtime-spec

started time in 27 minutes

started AliyunContainerService/kube-eventer

started time in 41 minutes

issue comment kubernetes/enhancements

Receipts process for tracking release enhancements

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten

palnabarun

comment created time in 43 minutes

release chrisohaver/ebpf

v.0.0.1-alpha

released time in an hour

release chrisohaver/ebpf

v0.0.0-alpha

released time in an hour

started litmuschaos/litmus-go

started time in an hour

created repository chrisohaver/ebpf

created time in 2 hours

pull request comment kubernetes/enhancements

CSR Duration KEP

Hi @enj 👋🏽. Supriya here, 1.22 Enhancements Shadow. For the enhancement to be included in the 1.22 milestone, it must meet the following criteria:

  • Complete the Beta release PRR questionnaire (upgrade/rollback test)
  • The PR must be approved and merged by EOD June 25, 2021

Thank you!

enj

comment created time in 2 hours

release Clivern/observability-php-sdk

2.0.5

released time in 2 hours

issue comment kubernetes/enhancements

Generic data populators

@bswartz Have you had a chance to open a docs PR against the dev-1.22 branch in the k/website repo, as @PI-Victor suggested above? For now it can just be a placeholder to track the progress of any docs work. Let me know if you need any help with the process or the writing.

bswartz

comment created time in 2 hours

created repository kairen/opa-labs

Open Policy Agent Labs

created time in 2 hours

fork jimangel/falco-diagrams

Diagrams to visually learn Falco and its eBPF probe

fork in 3 hours

release kubernetes/git-sync

v3.3.3

released time in 3 hours

started xing/kubernetes-oom-event-generator

started time in 3 hours

started Zulko/moviepy

started time in 3 hours

started microsoft/Web-Dev-For-Beginners

started time in 3 hours

started nektos/act

started time in 3 hours

fork n4j/kansible

Kansible lets you orchestrate operating system processes on Windows or any Unix in the same way as you orchestrate your Docker containers with Kubernetes, by using Ansible to provision the software onto hosts and Kubernetes to orchestrate the processes

fork in 3 hours

fork yasker/kcp

kcp is a prototype of a Kubernetes API server that is not a Kubernetes cluster - a place to create, update, and maintain Kube-like APIs with controllers above or without clusters.

fork in 3 hours

created repository navidshaikh/ghactions

test github actions

created time in 3 hours

release Clivern/observability-php-sdk

2.0.4

released time in 4 hours

started simdjson/simdjson

started time in 4 hours

fork micahhausler/hegel

The gRPC/http metadata service for Tinkerbell.

https://tinkerbell.org

fork in 4 hours

started whoshuu/cpr

started time in 4 hours