Tim Bannister (sftim) · @scalefactory · https://tinyurl.com/sftim · Consultant at The Scale Factory

scalefactory/s3audit-ts (★ 100): CLI tool for auditing S3 buckets

cncf/hugo-netlify-starter (★ 19): Static website template for CNCF projects

scalefactory/s3audit-rs (★ 1): Tool for auditing AWS S3 buckets

sftim/bottlerocket (★ 1): An operating system designed for hosting containers

sftim/action-setup-waypoint (★ 0): A GitHub action for setting up Waypoint

sftim/amazon-s3-developer-guide (★ 0): The open source version of the Amazon S3 docs. You can submit feedback & requests for changes by submitting issues in this repo or by making proposed changes & submitting a pull request.

sftim/amazon-ssm-agent (★ 0): Agent to enable remote management of your Amazon EC2 instance configuration.

sftim/amazon-vpc-cni-k8s (★ 0): Networking plugin repository for pod networking in Kubernetes using Elastic Network Interfaces on AWS

sftim/appshield (★ 0): Security configuration checks for popular cloud native applications and infrastructure.

Pull request review comment: kubernetes/website

Add doc for Recovery from expansion failure

 different Kubernetes components.
 | `ProxyTerminatingEndpoints` | `false` | Alpha | 1.22 | |
 | `QOSReserved` | `false` | Alpha | 1.11 | |
 | `ReadWriteOncePod` | `false` | Alpha | 1.22 | |
+| `RecoverVolumeExpansionFailure` | `false` | Alpha | 1.23 | |

I don't see the extra details. This should be in the form of an edit to the page section headed “List of feature gates”.
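For reference, the added row would then sit in the main table under that heading, roughly like this (column headings assumed from the existing feature gates page; shown only as an illustration):

```
| Feature | Default | Stage | Since | Until |
|---------|---------|-------|-------|-------|
| `ReadWriteOncePod` | `false` | Alpha | 1.22 | |
| `RecoverVolumeExpansionFailure` | `false` | Alpha | 1.23 | |
```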

gnufied

comment created time in 2 hours

Pull request review comment: kubernetes/website

Add doc for Recovery from expansion failure

 subresource of the referenced *owner* can change it.
 This admission controller implements additional validations for checking incoming `PersistentVolumeClaim` resize requests.
 
 {{< note >}}
-Support for volume resizing is available as an alpha feature. Admins must set the feature gate `ExpandPersistentVolumes`
+Support for volume resizing is available as an beta feature. Admins must set the feature gate `ExpandPersistentVolumes`
Support for volume resizing is available as a beta feature. As a cluster administrator,
you must ensure that the feature gate `ExpandPersistentVolumes` is set
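For context (an illustrative sketch, not part of the suggested wording): feature gates such as `ExpandPersistentVolumes` are normally set with the `--feature-gates` command line flag on the relevant control plane components; the exact invocation depends on how the cluster is deployed.

```bash
# Illustration only: enabling the gate on the API server and controller manager.
kube-apiserver --feature-gates=ExpandPersistentVolumes=true ...
kube-controller-manager --feature-gates=ExpandPersistentVolumes=true ...
```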
gnufied

comment created time in 2 hours

Pull request review comment: kubernetes/website

Add doc for Recovery from expansion failure

 If expanding underlying storage fails, the cluster administrator can manually re
 4. Re-create the PVC with smaller size than PV and set `volumeName` field of the PVC to the name of the PV. This should bind new PVC to existing PV.
 5. Don't forget to restore the reclaim policy of the PV.
 
+##### User recovery expansion failure
+
+{{< feature-state for_k8s_version="v1.23" state="alpha" >}}
+
+{{< note >}}
+Recovery from expanding PVCs by users is available as an alpha feature since Kubernetes 1.23. The `RecoverVolumeExpansionFailure` feature must be enabled for this feature to work. Refer to the [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) documentation for more information.
+{{< /note >}}
+
+In addition to above manual steps Kubernetes if feature gate `RecoverVolumeExpansionFailure` is enabled in the cluster, the user of PVC can retry expansion with smaller size than previously requested value by editing `pvc.spec.resources`.
If the feature gates `ExpandPersistentVolumes` and `RecoverVolumeExpansionFailure` are both
enabled in your cluster, and expansion has failed for a PVC, you can retry expansion with a
smaller size than the previously requested value. To request a new expansion attempt with a
smaller proposed size, edit `.spec.resources` for that PVC and choose a value that is less than the
value you previously tried.
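A minimal sketch of what that retry could look like (PVC name and sizes are hypothetical):

```bash
# Suppose expansion to 100Gi failed; retry with a smaller target that still
# exceeds the PVC's current .status.capacity.
kubectl patch pvc example-pvc --type merge \
  -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
```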
gnufied

comment created time in 2 hours

Pull request review comment: kubernetes/website

Add doc for Recovery from expansion failure

 If expanding underlying storage fails, the cluster administrator can manually re
 4. Re-create the PVC with smaller size than PV and set `volumeName` field of the PVC to the name of the PV. This should bind new PVC to existing PV.
 5. Don't forget to restore the reclaim policy of the PV.
 
+##### User recovery expansion failure

It looks like there are two ways to attempt recovery when expansion fails.

I would use https://kubernetes.io/docs/contribute/style/hugo-shortcodes/#tabs to show the two choices.
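Sketching what that might look like (tab names are only placeholders):

```
{{< tabs name="recovery_from_expansion_failure" >}}
{{% tab name="Cluster administrator" %}}
<!-- the existing manual recovery steps -->
{{% /tab %}}
{{% tab name="Requesting a smaller size" %}}
<!-- the new RecoverVolumeExpansionFailure flow -->
{{% /tab %}}
{{< /tabs >}}
```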

gnufied

comment created time in 2 hours

Pull request review comment: kubernetes/website

Add doc for Recovery from expansion failure

 If expanding underlying storage fails, the cluster administrator can manually re
 4. Re-create the PVC with smaller size than PV and set `volumeName` field of the PVC to the name of the PV. This should bind new PVC to existing PV.
 5. Don't forget to restore the reclaim policy of the PV.
 
+##### User recovery expansion failure
+
+{{< feature-state for_k8s_version="v1.23" state="alpha" >}}
+
+{{< note >}}
+Recovery from expanding PVCs by users is available as an alpha feature since Kubernetes 1.23. The `RecoverVolumeExpansionFailure` feature must be enabled for this feature to work. Refer to the [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) documentation for more information.
+{{< /note >}}
+
+In addition to above manual steps Kubernetes if feature gate `RecoverVolumeExpansionFailure` is enabled in the cluster, the user of PVC can retry expansion with smaller size than previously requested value by editing `pvc.spec.resources`.
+This is useful if expansion to a higher value can not succeed because of capacity constraints, in which case user of PVC can retry expansion by specifying a size that is within the capacity limits of underlying storage provider. It should be noted that, while users can specify lower value than what was requested previously in `pvc.spec.resources` but new value must still be higher than `pvc.status.capacity`.
This is useful if expansion to a higher value did not succeed because of a capacity constraint.
If that has happened, or you suspect that it might have, you can retry expansion by specifying a
size that is within the capacity limits of underlying storage provider.

Note that,
although you can specify a lower amount of storage than what was requested previously,
the new value must still be higher than `.status.capacity`.
Kubernetes does not support shrinking a PVC to less than its current size.
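For example, you can compare the two values before picking a new size (PVC name is hypothetical):

```bash
# .spec.resources.requests.storage is the requested size;
# .status.capacity.storage is what the volume currently provides.
kubectl get pvc example-pvc -o jsonpath='requested: {.spec.resources.requests.storage}{"\n"}actual: {.status.capacity.storage}{"\n"}'
```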
gnufied

comment created time in 2 hours


pull request comment: kubernetes/website

Add doc for Recovery from expansion failure

/sig storage

gnufied

comment created time in 2 hours

pull request comment: kubernetes/website

Add recommendation for Deployment when HPA is enabled

/lgtm

I strongly recommend squashing commits here down to as few as makes sense (probably: just 1).

jtslear

comment created time in 4 hours

pull request comment: kubernetes/website

[docs]: Promote STS minReadySeconds to beta

LGTM for SIG Docs too

ravisantoshgudimetla

comment created time in 5 hours

pull request comment: kubernetes/website

Add information to API Evictions from Safely Drain a Node

@kubernetes/sig-node-pr-reviews and @endocrimes any further thoughts on https://github.com/kubernetes/website/pull/28290#discussion_r758289076 and my response? How does this mechanism actually work?

shannonxtreme

comment created time in 6 hours

Pull request review comment: kubernetes/website

Add information to API Evictions from Safely Drain a Node

 weight: 70
 
 {{< glossary_definition term_id="api-eviction" length="short" >}} </br>
 
-You can request eviction by directly calling the Eviction API
-using a client of the kube-apiserver, like the `kubectl drain` command.
-This creates an `Eviction` object, which causes the API server to terminate the Pod.
+You can request eviction by calling the Eviction API directly, or programmatically
+using a client of the {{<glossary_tooltip term_id="kube-apiserver" text="API server">}}, like the `kubectl drain` command. This
+creates an `Eviction` object, which causes the API server to terminate the Pod.
 
 API-initiated evictions respect your configured [`PodDisruptionBudgets`](/docs/tasks/run-application/configure-pdb/)
 and [`terminationGracePeriodSeconds`](/docs/concepts/workloads/pods/pod-lifecycle#pod-termination).
 
+Using the API to create an Eviction object for a Pod is like performing a
+policy-controlled [`DELETE` operation](/docs/reference/kubernetes-api/workload-resources/pod-v1/#delete-delete-a-pod)
+on the Pod.
+
+## Calling the Eviction API
+
+You can use a [Kubernetes language client](/docs/tasks/administer-cluster/access-cluster-api/#programmatic-access-to-the-api)
+to access the Kubernetes API and create an `Eviction` object. To do this, you
+POST the attempted operation, similar to the following example:
+
+{{< tabs name="Eviction_example" >}}
+{{% tab name="policy/v1" %}}
+{{< note >}}
+`policy/v1` Eviction is available in v1.22+. Use `policy/v1beta1` with prior releases.
+{{< /note >}}
+
+```json
+{
+  "apiVersion": "policy/v1",
+  "kind": "Eviction",
+  "metadata": {
+    "name": "quux",
+    "namespace": "default"
+  }
+}
+```
+{{% /tab %}}
+{{% tab name="policy/v1beta1" %}}
+{{< note >}}
+Deprecated in v1.22 in favor of `policy/v1`
+{{< /note >}}
+
+```json
+{
+  "apiVersion": "policy/v1beta1",
+  "kind": "Eviction",
+  "metadata": {
+    "name": "quux",
+    "namespace": "default"
+  }
+}
+```
+{{% /tab %}}
+{{< /tabs >}}
+
+Alternatively, you can attempt an eviction operation by accessing the API using
+`curl` or `wget`, similar to the following example:
+
+```bash
+curl -v -H 'Content-type: application/json' https://your-cluster-api-endpoint.example/api/v1/namespaces/default/pods/quux/eviction -d @eviction.json
+```
+
+## How API-initiated eviction works
+
+When you request an eviction using the API, the API server performs admission
+checks and responds in one of the following ways:
+
+* `200 OK`: the eviction is allowed, the `Eviction` subresource is created, and
+  the Pod is deleted, similar to sending a `DELETE` request to the Pod URL.
+* `429 Too Many Requests`: the eviction is not currently allowed because of the
+  configured {{<glossary_tooltip term_id="pod-disruption-budget" text="PodDisruptionBudget">}}.
+  You may be able to attempt the eviction again later. You might also see this
+  response because of API rate limiting.
+* `500 Internal Server Error`: the eviction is not allowed because there is a
+  misconfiguration, like if multiple PodDisruptionBudgets reference the same Pod.
+
+If the Pod you want to evict isn't part of a workload that has a
+PodDisruptionBudget, the API server always returns `200 OK` and allows the
+eviction.
+
+If the API server allows the eviction, the Pod is deleted as follows:
+
+1. The `Pod` resource in the API server is updated with a deletion timestamp,
+   after which the API server considers the `Pod` resource to be terminated. The
+   `Pod` resource is also marked with the configured grace period.
+1. The {{<glossary_tooltip term_id="kubelet" text="kubelet">}} on the node where the local Pod is running notices that the `Pod`
+   resource is marked for termination and starts to gracefully shut down the
+   local Pod.
+1. While the kubelet is shutting the Pod down, the control plane removes the Pod
+   from {{<glossary_tooltip term_id="endpoint" text="Endpoint">}} and
+   {{<glossary_tooltip term_id="endpoint-slice" text="EndpointSlice">}}
+   objects. As a result, controllers no longer consider the Pod as a valid object.
+1. After the grace period for the Pod expires, the kubelet forcefully terminates
+   the local Pod.
+1. The kubelet tells the API server to remove the `Pod` resource.

I'm not sure whether the kubelet or some control plane component sends the DELETE request / performs the equivalent operation.

It's possible that API-initiated eviction lists the kubelet as a finalizer, and the action that the kubelet takes is to remove its finalizer once the Pod has terminated (gracefully or otherwise). In that case, it's the API server that spots there is a resource pending deletion with no finalizer remaining, and triggers cleanup.

SIG Node are likely to be the best folks to confirm this, as they look after both the kubelet code and (most of) the Pod API.
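One way to gather evidence for this (a sketch; the Pod name `quux` is reused from the example above) is to inspect the Pod's metadata while it is terminating after an eviction, and see whether a deletion timestamp, grace period, and any finalizers are present:

```bash
kubectl get pod quux -o jsonpath='{.metadata.deletionTimestamp}{"\n"}{.metadata.deletionGracePeriodSeconds}{"\n"}{.metadata.finalizers}{"\n"}'
```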

shannonxtreme

comment created time in 6 hours


pull request comment: kubernetes/website

[docs]: Promote STS minReadySeconds to beta

/sig apps

@kubernetes/sig-apps-pr-reviews is this revised documentation technically accurate?

ravisantoshgudimetla

comment created time in 6 hours

Pull request review comment: kubernetes/website

[docs]: Promote STS minReadySeconds to beta

 In the above example:
 
 The name of a StatefulSet object must be a valid [DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
+
+### Minimum ready seconds

Let's not cover volumeClaimTemplates in this PR.

ravisantoshgudimetla

comment created time in 6 hours


issue comment: kubernetes/website

List All Container Images Running in a Cluster

/triage needs-information
/language en

I don't (yet) see anything to fix.

amalic

comment created time in 6 hours

pull request comment: kubernetes/website

Added Base64 Online Tool

See issue https://github.com/kubernetes/website/issues/17984 and PR https://github.com/kubernetes/website/pull/18002 for the existing discussion.

rahulkumarsingh73690

comment created time in 6 hours

pull request comment: kubernetes/website

Added Base64 Online Tool

/sig security

rahulkumarsingh73690

comment created time in 7 hours


pull request comment: kubernetes/website

Updated deb sources location

I haven't reviewed the code (CLA not signed) but with https://github.com/kubernetes/website/pull/30651#issuecomment-980635392 in mind:

/hold

paulgreenbank

comment created time in 7 hours


Pull request review comment: kubernetes/website

kubeadm: add instructions about rebalancing CoreDNS Pods after joining more nodes

 Run 'kubectl get nodes' on control-plane to see this machine join.
 
 A few seconds later, you should notice this node in the output from `kubectl get nodes` when run on the control-plane node.
 
+{{< note >}}
+As the cluster nodes are usually initialized sequentially, the CoreDNS Pods are likely to
+all run on the first control-plane node. If you install a cluster with more than one node,
+we recommend that you rebalance the CoreDNS Pods with `kubectl -n kube-system rollout restart deployment coredns`.

nit (avoid using “we”, and timing for when to do this):

rebalance the CoreDNS Pods with `kubectl -n kube-system rollout restart deployment coredns`
after you first add nodes that are not part of the control plane.

How about that?
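For what it's worth, a quick way to check whether the CoreDNS Pods actually spread out after that restart (assuming the usual `k8s-app=kube-dns` label on the kubeadm-managed CoreDNS Deployment):

```bash
kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide
```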

SataQiu

comment created time in 7 hours

pull request comment: kubernetes/website

kubeadm: add instructions about rebalancing CoreDNS Pods after joining more nodes

/sig cluster-lifecycle

SataQiu

comment created time in 7 hours

Pull request review comment: kubernetes/website

Use tee to write to /etc/bash_completion.d/kubectl

 You now need to ensure that the kubectl completion script gets sourced in all yo
 - Add the completion script to the `/etc/bash_completion.d` directory:
 
    ```bash
-   kubectl completion bash >/etc/bash_completion.d/kubectl
+   echo -e "$(kubectl completion bash)" | sudo tee /etc/bash_completion.d/kubectl > /dev/null

This is better:

   kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl > /dev/null

Ideally: use https://kubernetes.io/docs/contribute/style/hugo-shortcodes/#tabs to cover the two cases:

  1. per user: kubectl completion bash >>~/.bashrc
  2. system-wide change, editing /etc/bash_completion.d/kubectl

and also mention that this is specific to bash; other shells exist.
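A rough sketch of the two cases, using the commands already mentioned above:

```bash
# Per user: append the completion script to your own ~/.bashrc
kubectl completion bash >> ~/.bashrc

# System-wide: write the script where bash-completion loads it from
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl > /dev/null
```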

CarlosDomingues

comment created time in 7 hours


Pull request review comment: kubernetes/enhancements

KEP-3031: Add release artifact signing KEP

+title: Signing release artifacts
+kep-number: 3031
+authors:
+  - "@saschagrunert"
+owning-sig: sig-release
+participating-sigs:
+  - sig-security
+status: provisional
+creation-date: 2021-11-02
+reviewers:
+  - TBD
+approvers:
+  - TBD
+
+##### WARNING !!! ######
+# prr-approvers has been moved to its own location
+# You should create your own in keps/prod-readiness
+# Please make a copy of keps/prod-readiness/template/nnnn.yaml
+# to keps/prod-readiness/sig-xxxxx/00000.yaml (replace with kep number)
+#prr-approvers:
+
+# see-also:
+#   - "/keps/sig-aaa/1234-we-heard-you-like-keps"
+#   - "/keps/sig-bbb/2345-everyone-gets-a-kep"
+# replaces:
+#   - "/keps/sig-ccc/3456-replaced-kep"
+
+# The target maturity stage in the current dev cycle for this KEP.
+stage: alpha
+
+# The most recent milestone for which work toward delivery of this KEP has been
+# done. This can be the current (upcoming) milestone, if it is being actively
+# worked on.
+latest-milestone: "v1.23"
+
+# The milestone at which this feature was, or is targeted to be, at each stage.
+milestone:
+  alpha: "v1.23"
+  # beta: "v1.20"
+  # stable: "v1.22"
+
+# The following PRR answers are required at alpha release
+# List the feature gate name and the components for which it must be enabled
+# feature-gates:
+#   - name: MyFeature
+#     components:
+#       - kube-apiserver
+#       - kube-controller-manager
+# disable-supported: true

I'd comment this out unless we know we'll have a feature gate.

saschagrunert

comment created time in 10 hours


Pull request review comment: kubernetes/enhancements

KEP-3031: Add release artifact signing KEP

+# KEP-3031: Signing release artifacts
+
+<!-- toc -->
+- [Release Signoff Checklist](#release-signoff-checklist)
+- [Summary](#summary)
+- [Motivation](#motivation)
+  - [Goals](#goals)
+  - [Non-Goals](#non-goals)
+- [Proposal](#proposal)
+  - [User Stories (Optional)](#user-stories-optional)
+  - [Risks and Mitigations](#risks-and-mitigations)
+  - [Graduation Criteria](#graduation-criteria)
+    - [Alpha](#alpha)
+    - [Beta](#beta)
+    - [GA](#ga)
+- [Drawbacks](#drawbacks)
+- [Alternatives](#alternatives)
+- [Implementation History](#implementation-history)
+<!-- /toc -->
+
+## Release Signoff Checklist
+
+<!--
+**ACTION REQUIRED:** In order to merge code into a release, there must be an
+issue in [kubernetes/enhancements] referencing this KEP and targeting a release
+milestone **before the [Enhancement Freeze](https://git.k8s.io/sig-release/releases)
+of the targeted release**.
+
+For enhancements that make changes to code or processes/procedures in core
+Kubernetes—i.e., [kubernetes/kubernetes], we require the following Release
+Signoff checklist to be completed.
+
+Check these off as they are completed for the Release Team to track. These
+checklist items _must_ be updated for the enhancement to be released.
+-->
+
+Items marked with (R) are required _prior to targeting to a milestone / release_.
+
+- [ ] (R) Enhancement issue in release milestone, which links to KEP dir in [kubernetes/enhancements] (not the initial KEP PR)
+- [ ] (R) KEP approvers have approved the KEP status as `implementable`
+- [ ] (R) Design details are appropriately documented
+- [ ] (R) Test plan is in place, giving consideration to SIG Architecture and SIG Testing input (including test refactors)
+  - [ ] e2e Tests for all Beta API Operations (endpoints)
+  - [ ] (R) Ensure GA e2e tests for meet requirements for [Conformance Tests](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md)
+  - [ ] (R) Minimum Two Week Window for GA e2e tests to prove flake free
+- [ ] (R) Graduation criteria is in place
+  - [ ] (R) [all GA Endpoints](https://github.com/kubernetes/community/pull/1806) must be hit by [Conformance Tests](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md)
+- [ ] (R) Production readiness review completed
+- [ ] (R) Production readiness review approved
+- [ ] "Implementation History" section is up-to-date for milestone
+- [ ] User-facing documentation has been created in [kubernetes/website], for publication to [kubernetes.io]
+- [ ] Supporting documentation—e.g., additional design documents, links to mailing list discussions/SIG meetings, relevant PRs/issues, release notes
+
+<!--
+**Note:** This checklist is iterative and should be reviewed and updated every time this enhancement is being considered for a milestone.
+-->
+
+[kubernetes.io]: https://kubernetes.io/
+[kubernetes/enhancements]: https://git.k8s.io/enhancements
+[kubernetes/kubernetes]: https://git.k8s.io/kubernetes
+[kubernetes/website]: https://git.k8s.io/website
+
+## Summary
+
+Target of this enhancement is to define which technology the Kubernetes
+community is using to signs release artifacts.
+
+## Motivation
+
+Signing artifacts provides end users a chance to verify the integrity of the
+downloaded resource. It allows to mitigate man-in-the-middle attacks directly on
+the client side and therefore ensures the trustfulness of the remote serving the
+artifacts.
+
+### Goals
+
+- Defining the used tooling for signing all Kubernetes related artifacts
+- Providing a standard signing process for related projects (like k/release)
+
+### Non-Goals
+
+- Discussing not user-facing internal technical implementation details
+
+## Proposal
+
+Every Kubernetes release produces a set of artifacts. We define artifacts as
+something consumable by end users. Artifacts can be binaries, container images,
+checksum files, documentation or provenance data produced by using a bill of
+materials. None of those end-user resources are signed right now.
+
+The overall goal of SIG Release is to unify the way how to sign artifacts. This
+will be done by relying on the tools of the Linux Foundations digital signing
+project [sigstore](https://www.sigstore.dev). This goal aligns with the
+[Roadmap and Vision](https://github.com/kubernetes/sig-release/blob/f62149/roadmap.md)
+of SIG Release to provide a secure software supply chain for Kubernetes. It also
+joins the effort of gaining full SLSA Compliance in the Kubernetes Release
+Process ([KEP-3027](https://github.com/kubernetes/enhancements/issues/3027)).
+Because of that, the future [SLSA](https://slsa.dev) compliance of artifacts
+produced by SIG release will require signing artifacts starting from level 2.
+
+[cosign](https://github.com/sigstore/cosign) will be the tool of our choice when
+speaking about the technical aspects of the solution. How we integrate the
+projects into our build process in k/release is out of scope of this KEP and
+will be discussed in the Release Engineering subproject of SIG Release. A
+pre-evaluation of the tool has been done already to ensure that it meets the
+requirements.
+
+An [ongoing discussion](https://github.com/kubernetes/release/issues/2227) about
+using cosign already exists in k/release. This issue contains technical
+discussions about how to utilize the existing Google infrastructure as well as
+consider utilizing keyless signing via workload identities. Nevertheless, this
+KEP focuses more on the "What" aspects rather than the "How".
+
+### User Stories (Optional)
+
+- As an end user, I would like to be able to verify the Kubernetes release
+  artifacts, so that I can mitigate possible resource modifications by the
+  network.
+
+### Risks and Mitigations
+
+- **Risk:** Unauthorized access to the signing key or its infrastructure
+
+  **Mitigation:** Storing the credentials in a secure Google Cloud Project with
+  limited access for SIG Release
+
+### Graduation Criteria
+
+#### Alpha
+
+- Outline and integrate an example process for signing Kubernetes release
+  artifacts.
+
+#### Beta
+
+- Standard Kubernetes release artifacts (binaries and container images) are
+  signed.
+
+#### GA
+
+- All Kubernetes artifacts are signed

We might want to list some non-goals for clarity. For example, if https://github.com/kubernetes-sigs/kui is producing unsigned artifacts for its releases, does that block graduation to GA? (Just picking kui as an example.)
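On the end-user verification story, a rough sketch of what consuming the signatures might eventually look like with cosign (image name and key distribution are placeholders; the KEP deliberately leaves the "how" open):

```bash
# Hypothetical: verify a release image against a published Kubernetes signing key
cosign verify --key kubernetes-release.pub k8s.gcr.io/kube-apiserver:v1.23.0
```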

saschagrunert

comment created time in 10 hours
