Benjamin Elder BenTheElder @Google Sunnyvale, CA https://elder.dev/ maintaining @kubernetes things, Kubernetes SIG Testing Chair, sigs.k8s.io/kind creator / maintainer

BenTheElder/creaturebox 3

golang/gomobile evolutionary avoidance simulation

BenTheElder/color-schemes 1

sublime text themes

BenTheElder/api 0

The canonical location of the Kubernetes API definition.

BenTheElder/autoscaler 0

Autoscaling components for Kubernetes

BenTheElder/bluebliss-atom 0

bluebliss atom syntax theme

BenTheElder/cadvisor 0

Analyzes resource usage and performance characteristics of running containers.

BenTheElder/cluster-api 0

Home for the Cluster Management API work, a subproject of sig-cluster-lifecycle

BenTheElder/complx 0

Complx the LC-3 Simulator used in CS2110 managed by Brandon

issue comment kubernetes-sigs/kind

Question: Why was only one node created?

  1. what kind version?
  2. how did you pass the config?

FWIW: unless you have a specific use case for multi-node, you generally want single-node. The nodes are not isolated and only add overhead. They're useful for specific kinds of testing for kubernetes itself related to multi-node behavior.

Thor-wl

comment created time in 7 hours

Pull request review comment kubernetes/enhancements

[WIP] Removing dockershim from kubelet

+# KEP-1985: Removing dockershim from kubelet
+
+<!-- toc -->
+- [Release Signoff Checklist](#release-signoff-checklist)
+- [Terms](#terms)
+- [Summary](#summary)
+- [Motivation](#motivation)
+  - [Pros](#pros)
+  - [Cons](#cons)
+  - [Goals](#goals)
+  - [Non-Goals](#non-goals)
+- [Proposal](#proposal)
+  - [Dockershim removal criteria](#dockershim-removal-criteria)
+  - [Dockershim removal plan](#dockershim-removal-plan)
+  - [Risks and Mitigations](#risks-and-mitigations)
+  - [Test Plan](#test-plan)
+  - [Graduation Criteria](#graduation-criteria)
+  - [Upgrade / Downgrade Strategy](#upgrade--downgrade-strategy)
+  - [Version Skew Strategy](#version-skew-strategy)
+- [Production Readiness Review Questionnaire](#production-readiness-review-questionnaire)
+  - [Feature Enablement and Rollback](#feature-enablement-and-rollback)
+  - [Rollout, Upgrade and Rollback Planning](#rollout-upgrade-and-rollback-planning)
+  - [Monitoring Requirements](#monitoring-requirements)
+  - [Dependencies](#dependencies)
+  - [Scalability](#scalability)
+  - [Troubleshooting](#troubleshooting)
+- [Implementation History](#implementation-history)
+- [Drawbacks](#drawbacks)
+- [Alternatives](#alternatives)
+- [Infrastructure Needed (Optional)](#infrastructure-needed-optional)
+<!-- /toc -->
+
+## Release Signoff Checklist
+
+Items marked with (R) are required *prior to targeting to a milestone / release*.
+
+- [ ] (R) Enhancement issue in release milestone, which links to KEP dir in [kubernetes/enhancements] (not the initial KEP PR)
+- [ ] (R) KEP approvers have approved the KEP status as `implementable`
+- [ ] (R) Design details are appropriately documented
+- [ ] (R) Test plan is in place, giving consideration to SIG Architecture and SIG Testing input
+- [ ] (R) Graduation criteria is in place
+- [ ] (R) Production readiness review completed
+- [ ] Production readiness review approved
+- [ ] "Implementation History" section is up-to-date for milestone
+- [ ] User-facing documentation has been created in [kubernetes/website], for publication to [kubernetes.io]
+- [ ] Supporting documentation—e.g., additional design documents, links to mailing list discussions/SIG meetings, relevant PRs/issues, release notes
+
+## Terms
+
+- **CRI:** Container Runtime Interface – a plugin interface which enables kubelet to use a wide variety of container runtimes, without the need to recompile.
+
+## Summary
+
+CRI for docker (i.e. dockershim) is currently a built-in container runtime in the kubelet code base. This proposal aims at deprecation and subsequent removal of dockershim from kubelet.
+
+## Motivation
+
+In Kubernetes, CRI is used as the "default" container runtime interface, while currently the CRI implementation for docker (a.k.a. dockershim) is part of kubelet code and coupled with kubelet's lifecycle.
+
+This is not ideal, as kubelet then has a dependency on a specific container runtime, which leads to maintenance burden not only for developers in sig-node, but also for cluster administrators when critical issues (e.g. a runc CVE) happen to container runtimes. The pros of removing dockershim are straightforward:
+
+### Pros
+- Docker is not special and should be just a CRI implementation, like every other CRI implementation in our ecosystem.
+- Currently, dockershim "enjoys" some backdoors for various reasons (see [legacyLogProvider](https://cs.k8s.io/?q=legacyLogProvider&i=nope&files=&repos=kubernetes/kubernetes) for example). Removing these "features" should eliminate maintenance burden on kubelet.
+- A cri-dockerd can be maintained independently by folks who are interested in keeping this functionality.
+- Over time we can remove vendored docker dependencies in kubelet.
+
+Having said that, the cons of removing the built-in dockershim require lots of attention:
+
+### Cons
+- Deployment pain with a new binary in addition to kubelet.
+  - An additional component may aggravate the complexity currently. It may be relieved with docker version evolutions.
+- The number of affected users may be large.
+  - Users must change their existing experience when using Kubernetes and docker.
+  - Users have to change their existing workflows to adapt to these new changes.
+  - And other unrecorded stuff.
+- Updating all the eco-system tools to avoid dependencies on docker.
+- Many people use the built-in dockershim for in-cluster image build. While that may not be something we recommend for a variety of reasons, it will be a breaking change for these users.

They're not using dockershim for build. They're using dockerd directly. They could continue to run dockerd and use it outside of kubernetes. This usage isn't really relevant..?

dims

comment created time in 13 hours


pull request comment kubernetes-sigs/kubetest2

[Clusterloader2] Implement a clusterloader2 tester

/lgtm
/approve
/hold

some nits

amwat

comment created time in 14 hours

Pull request review comment kubernetes-sigs/kubetest2

[Clusterloader2] Implement a clusterloader2 tester

+package clusterloader2
+
+import (
+	"flag"
+	"fmt"
+	"os"
+	"path/filepath"
+	"strings"
+
+	"github.com/octago/sflags/gen/gpflag"
+	"k8s.io/klog"
+
+	"sigs.k8s.io/kubetest2/pkg/exec"
+	suite "sigs.k8s.io/kubetest2/pkg/testers/clusterloader2/suite"
+)
+
+type Tester struct {
+	Suites        string `desc:"Comma separated list of standard scale testing suites e.g. load, density"`
+	TestOverrides string `desc:"Comma separated list of paths to the config override files. The latter overrides take precedence over changes in former files."`
+	TestConfigs   string `desc:"Comma separated list of paths to test config files."`
+	Provider      string `desc:"The type of cluster provider used (e.g gke, gce, skeleton)"`
+	KubeConfig    string `desc:"Path to kubeconfig"`

the usage text should probably clarify that this is an override?

amwat

comment created time in 14 hours


Pull request review comment kubernetes-sigs/kubetest2

[Clusterloader2] Implement a clusterloader2 tester

 type deployer struct {
 	verbosity      int    // --verbosity for kind
 }
+func (d *deployer) Kubeconfig() (string, error) {

~HOME is also more complex on windows, off the top of my head.

I think this deployer should probably have its own stable computed paths and use those consistently (like $HOME/.kube/kubetest2-kind-$cluster)

amwat

comment created time in 14 hours


Pull request review comment kubernetes-sigs/kubetest2

[Clusterloader2] Implement a clusterloader2 tester

 func bindFlags(d *deployer) *pflag.FlagSet {
 	return flags
 }
-// assert that deployer implements types.Deployer
-var _ types.Deployer = &deployer{}
+// assert that deployer implements types.DeployerWithKubeconfig

I think we should make this a standard part of the API for deployer, a requirement for kubetest2, but that can be a follow-up.
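For context on the diff above: the `var _ = ...` line is Go's standard compile-time interface assertion. A minimal sketch with stand-in names (not the real kubetest2 types):

```go
package main

import "fmt"

// DeployerWithKubeconfig stands in for the kubetest2 types interface.
type DeployerWithKubeconfig interface {
	Kubeconfig() (string, error)
}

type deployer struct{}

func (d *deployer) Kubeconfig() (string, error) { return "/tmp/kubeconfig", nil }

// Compile-time assertion: the build fails here if *deployer ever stops
// satisfying DeployerWithKubeconfig; there is no runtime cost.
var _ DeployerWithKubeconfig = &deployer{}

func main() {
	var d DeployerWithKubeconfig = &deployer{}
	p, _ := d.Kubeconfig()
	fmt.Println(p)
}
```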

amwat

comment created time in 14 hours

issue comment kubernetes/kubernetes

[Flaky Test] verify.govet passes entire source tree resulting in high memory usage

It's entirely possible, though I'd expect it to be transient.

hasheddan

comment created time in 16 hours

delete branch BenTheElder/test-infra

delete branch : testgrid-confromance-update

delete time in 16 hours

issue comment kubernetes-sigs/kind

avoid excessive search lines in

This gets a little confusing to talk about. Reminding anyone in the thread that the layering is:

[GKE Node] -- runs a kubelet, has GKE managed DNS config

[[GKE Pod / ProwJob Pod]] -- docker in docker runs here. We could manage this config ourselves in the same script that sets up docker.

[[[kind "node" container]]] -- container created by kind to be a "node". We want this to respect the "host" DNS locally using docker embedded DNS. "host" here is the prowjob pod. We should improve the prowjob pod to be a better environment for running KIND.

[[[[kind cluster pods]]]] -- not actually relevant here, the problem in https://github.com/kubernetes/test-infra/issues/19080#issuecomment-685141853 was with kubelet, at the level above (kind "node" container)

my point is that this is not only a KIND problem :) , it is possible that you don't want to have your host search domains in your pods, and AFAIK the possibilities that we have now are

My point was that it's not a KIND problem, it's a "kind inside kubernetes" problem which is not something reasonable to optimize for in Kubernetes.

that seems overkill if we can just indicate the kubelet to not copy the search domains from the host .

In this case we need the GKE kubelet to do that, which we're not going to be able to configure regardless of whatever upstream options are available :upside_down_face:, it's managed. Upstream options to customize DNS for the kubelet already exist though.

Just for the GKE cluster pods in which we run kind, we want to reduce the searches, we don't need them in the inner cluster nodes.


We should reduce ndots and searches at the prowjob pod level to look more like a typical host. We're already hacking around Kubernetes in abnormal ways to do the docker-in-docker bit, so tweaking DNS there is not a big deal.
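As a hedged illustration of that kind of prowjob-level DNS cleanup (the real fix would live in the pod spec or a shell entrypoint; the helper name here is hypothetical), stripping inherited search and ndots lines from a resolv.conf might look like:

```go
package main

import (
	"fmt"
	"strings"
)

// stripSearchDomains removes "search" and "options ndots" lines that were
// inherited from the outer cluster, keeping nameserver lines intact.
func stripSearchDomains(resolvConf string) string {
	var out []string
	for _, line := range strings.Split(resolvConf, "\n") {
		trimmed := strings.TrimSpace(line)
		if strings.HasPrefix(trimmed, "search ") || strings.HasPrefix(trimmed, "options ndots") {
			continue
		}
		out = append(out, line)
	}
	return strings.Join(out, "\n")
}

func main() {
	in := "nameserver 10.0.0.10\nsearch default.svc.cluster.local svc.cluster.local\noptions ndots:5\n"
	fmt.Print(stripSearchDomains(in))
}
```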

BenTheElder

comment created time in 16 hours

PR opened kubernetes/test-infra

quick update to testgrid conformance GCS instructions

this came up today in slack.

/cc @dims @spiffxp @amwat

+1 -2

0 comment

1 changed file

pr created time in 17 hours

create branch BenTheElder/test-infra

branch : testgrid-confromance-update

created branch time in 17 hours

issue comment kubernetes/kubernetes

test/e2e/upgrade tests are not running

@vinayakankugoyal I have a longer term goal to look at how we do upgrade-type testing of Kubernetes strategically, but we're not planning on investing in this test tooling further, and it's the cluster upgrade/deployment mechanism that is broken currently, not the tests.

The lack of keeping after the signal for this / responsive ownership of failures caused the release team to refuse to monitor it further, leading us to the current state where it doesn't run at all. For starters someone will need to mitigate the dependency on deprecated binary release hashes in the older release branches.

I don't see that this has been assigned to anyone. PR #93455 has been blocked on this for 15 days.

We don't forcibly assign things to people in this project, people are either volunteering their time or take on work their employer is staffing. Nobody is volunteering for this and nobody has been prioritizing this cluster tooling.

I will chip in on this when I have time and EngProd @ $employer has been helping maintain the cluster tooling as a stop-gap but it's not reasonable for us to primarily own this. If you need this ASAP please help fix it and help prioritize maintaining this tooling.

liggitt

comment created time in 18 hours

push event BenTheElder/test-infra

Jakub Przychodzeń

commit sha f77606e1bf7ebbb905af3b885bd70429df5a33a9

[scalability][periodic] Add resources requests/limits

view details

David Ashpole

commit sha a45db458aab3b76b97246c2381a2332246b1c1e4

move dashpole to emeritus for node

view details

Adhityaa Chandrasekar

commit sha 443e3e481cbab72fa06f07296a13d3312e2d8e72

kubemark: add 5000 node scheduler job Signed-off-by: Adhityaa Chandrasekar <adtac@google.com>

view details

Kubernetes Prow Robot

commit sha b3d001c393ca18137ddbb1f7ac2f9a058134690e

Update prow to v20200917-8a0841f9bc, and other images as necessary.

view details

Harshal Patil

commit sha 04ca22278abeff98303b07d3df7b16751c9e2b24

Set ssh username with KUBE_SSH_USER if available Signed-off-by: Harshal Patil <harpatil@redhat.com>

view details

Kubernetes Prow Robot

commit sha 8d5600d24ac4a92d7ec11c697f154e88884bd08f

Merge pull request #19258 from dashpole/dashpole_emeritus Move dashpole to emeritus for node

view details

Kubernetes Prow Robot

commit sha fb7dcd3faa8011162f6a3356459fdcbfd1d5bd35

Merge pull request #19253 from k8s-ci-robot/autobump Update prow to v20200917-8a0841f9bc, and other images as necessary.

view details

Kubernetes Prow Robot

commit sha 58bdf8d2ee46a95eb0e8ec346567c8e70afee0e0

Merge pull request #19273 from harche/fix_ssh_user Set ssh username with KUBE_SSH_USER if available

view details

Alex Pavel

commit sha c2df92890e99b76369904e09370a7703a2a3938c

bugzilla plugin: add StateAfterClosed config This PR adds a StateAfterClosed field to the bugzilla plugin config that sets the state that a bug should be moved to if all external GitHub PRs for the bug are closed.

view details

Ciprian Hacman

commit sha 0c74b97c355527b5b2e198ab697cd0423068023b

Set KOPS_RUN_TOO_NEW_VERSION for pull-kubernetes-e2e-kops-aws

view details

Kubernetes Prow Robot

commit sha a74aedde2f081d253e26eb4f679e9a2de30b52e7

Merge pull request #19279 from hakman/k8s-pull-too-new-ver Set KOPS_RUN_TOO_NEW_VERSION for pull-kubernetes-e2e-kops-aws

view details

Michelle Au

commit sha 7cab2bb83a94d9f0890b17c2c931230deeecfc35

Remove 1.16 csi jobs and add 1.19 optional jobs

view details

Kubernetes Prow Robot

commit sha 3df99b60cc2c649ef0e3a6309c371240db6fa2ac

Merge pull request #19278 from AlexNPavel/bugzilla-reset-state bugzilla plugin: add StateAfterClosed config

view details

Ciprian Hacman

commit sha 703d9cd675eeedda021217793c83510ec8271338

Use the "ubuntu" SSH user for pull-kubernetes-e2e-kops-aws

view details

Kubernetes Prow Robot

commit sha 20f7dabe9b3cb064a0f52ae79f6fa2c65c4013ca

Merge pull request #19281 from hakman/k8s-pull-too-new-ver Use the "ubuntu" SSH user for pull-kubernetes-e2e-kops-aws

view details

Ciprian Hacman

commit sha ab4932cb6dfbffeb0008998824b279ef34bee0d0

Use the "ubuntu" SSH user for pull-kubernetes-e2e-kops-aws (2)

view details

Kubernetes Prow Robot

commit sha ac46f7d53e620c537220c27907c09b5ec97c5df3

Merge pull request #19282 from hakman/k8s-pull-too-new-ver Use the "ubuntu" SSH user for pull-kubernetes-e2e-kops-aws (2)

view details

Ciprian Hacman

commit sha e242bdbafb97c3a2b3bb2f3a62daf55ba65d94fd

Use the "ubuntu" SSH user for pull-kubernetes-e2e-kops-aws (3)

view details

Kubernetes Prow Robot

commit sha d5e4b327efe750d6b0b2053238247a6c2c5960af

Merge pull request #19285 from hakman/kops-ssh-user Use the "ubuntu" SSH user for pull-kubernetes-e2e-kops-aws (3)

view details

Kubernetes Prow Robot

commit sha 48917591c7c614c6d3413344c2c5b4eb20fce85a

Merge pull request #19152 from jprzychodzen/periodics [scalability][periodic] Add resources requests/limits

view details

push time in 20 hours

issue comment kubernetes-sigs/kind

avoid excessive search lines in

We can do it without a KEP in this case; we are able to mutate the file from the entrypoint as mentioned above so that the node's DNS config is reasonable. Kubelet also already has the upstream mechanisms to do this IIRC: kubelet can use a DNS config file that is not the host global one. But as I said, we don't want kind to have special CI-only behavior here.

The problem is that the CI environment's DNS config is unsuitable. The nested cluster is behaving correctly.

On Sun, Sep 20, 2020 at 4:52 AM Antonio Ojea notifications@github.com wrote:

🤔 I'm not sure a new flag will require a KEP, because it is not changing current behavior and just adding a new configuration option.

@thockin https://github.com/thockin what do you think? what will be the best way to avoid (globally) appending the host resolv.conf additional search domains to pods?

— You are receiving this because you were assigned. Reply to this email directly, view it on GitHub https://github.com/kubernetes-sigs/kind/issues/1860#issuecomment-695778388, or unsubscribe https://github.com/notifications/unsubscribe-auth/AAHADK2H5G56SIOMIHHJD7LSGXUI7ANCNFSM4RSQMN3Q .

BenTheElder

comment created time in 2 days

issue comment kubernetes-sigs/kind

kind port-forward

Use extraPortMappings, but we have to know the port even before starting the cluster.

You need to know how many ports you want to have available for mapping. You can still run some further forwarding action (e.g. creating a nodePort object or a hostPort container) at runtime.

Search for the container running the service (there can be more than one), find the port on it, and try to access using the container's IP address.

We don't discuss this one much because it only works on linux.

Use kubectl port-forward and somehow make it work in the background.

kubectl port-forward ... & ...? Backgrounding tasks is a pretty standard shell operation for example.

KinD would:

  1. Find what's the container running that service

How do we know it's pinned to a node and won't move...?

  1. Find what's the port on the container running that service (it can be different than the pod's port)

Pods do not have ports? Containers have ports. Services can target ports (or have a nodePort).

  1. Expose the container's port on the host "permanently" I know that VSCode Remote - Containers extension is able to expose ports from a running container to the host machine without stopping it, but I don't know their procedure.

It's not possible to directly expose ports from a running container. They're running a process somewhere to do this.

In your CI are you unable to use kubectl port-forward ... & (background the task yourself?)
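If the CI harness happens to be Go rather than shell, the same backgrounding trick can be sketched with `os/exec` (the service name and ports are hypothetical):

```go
package main

import (
	"fmt"
	"os/exec"
)

// startBackground launches a command without waiting for it to exit,
// mirroring `kubectl port-forward ... &` in a shell.
func startBackground(name string, args ...string) (*exec.Cmd, error) {
	cmd := exec.Command(name, args...)
	if err := cmd.Start(); err != nil {
		return nil, err
	}
	return cmd, nil
}

func main() {
	// Hypothetical service/port: `kubectl port-forward svc/my-svc 8080:80 &`.
	cmd, err := startBackground("kubectl", "port-forward", "svc/my-svc", "8080:80")
	if err != nil {
		fmt.Println("failed to start port-forward:", err)
		return
	}
	defer func() { _ = cmd.Process.Kill() }() // tear down the forward when done
	// ... run tests against localhost:8080 here ...
}
```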

felipecrs

comment created time in 2 days

issue comment kubernetes-sigs/kind

avoid excessive search lines in

At the kind node level ...? Kubelet doesn't talk to coreDNS ...

On Sat, Sep 19, 2020, 14:26 Antonio Ojea notifications@github.com wrote:

does not forwarding the queries to the upstream dns server solve the problem?


BenTheElder

comment created time in 2 days

issue comment kubernetes-sigs/kind

avoid excessive search lines in

@aojea if you check the context link the issue we had was in the kubelet sooo ...

BenTheElder

comment created time in 3 days

issue comment kubernetes/test-infra

(ci|pull)-kubernetes-kind-e2e.* jobs are failing

https://github.com/kubernetes-sigs/kind/issues/1860 to track mitigating in the future

spiffxp

comment created time in 4 days

issue opened kubernetes-sigs/kind

avoid excessive search lines in

There's not a good way to manage this in KIND, it's reasonable that we expect the host to have sane DNS and it's intentional that we use upstream DNS pointed at the host DNS resolution, but in CI we have additional search paths by way of running inside another cluster. We're already using FQDN for interacting with services in that cluster outside of KIND (namely the bazel build cache), so we shouldn't need search paths at all.

Basically we should mitigate this up front: https://github.com/kubernetes/test-infra/issues/19080#issuecomment-685141853

/assign
/priority important-soon

xref: #303

created time in 4 days

pull request comment kubernetes-sigs/kubetest2

[GKE] Less strict version checking

@amwat it might be time to start tagging versions even if they are 0.0.X, so as to have clean pins & change notes

amwat

comment created time in 4 days

issue comment kubernetes-sigs/kind

Enable Simulation of automatically provisioned ReadWriteMany PVs

1.19.0 isn't a latest image (please see the kind release notes as usual), and all of the images that are current were built with the same version; there were no changes to the base image or node image build process between those builds and tagging the release.

joshatcaper

comment created time in 4 days

issue comment kubernetes/test-infra

Can not merge PR when use github actions

https://github.com/kubernetes/test-infra/issues/15611

Rustin-Liu

comment created time in 4 days

pull request comment kubernetes-sigs/kind

bump debian iptables base image

/retest

aojea

comment created time in 4 days

pull request comment kubernetes/kubernetes

Always set relevant variables for cross compiling

/cc @mkumatag @dims

bnrjee

comment created time in 4 days

Pull request review comment kubernetes/kubernetes

Always set relevant variables for cross compiling

 kube::golang::set_platform_envs() {
   export GOOS=${platform%/*}
   export GOARCH=${platform##*/}
-
-  # Do not set CC when building natively on a platform, only if cross-compiling from linux/amd64
-  if [[ $(kube::golang::host_platform) == "linux/amd64" ]]; then
-    # Dynamic CGO linking for other server architectures than linux/amd64 goes here
-    # If you want to include support for more server platforms than these, add arch-specific gcc names here
-    case "${platform}" in
-      "linux/arm")
-        export CGO_ENABLED=1
-        export CC=arm-linux-gnueabihf-gcc
-        ;;
-      "linux/arm64")
-        export CGO_ENABLED=1
-        export CC=aarch64-linux-gnu-gcc
-        ;;
-      "linux/ppc64le")
-        export CGO_ENABLED=1
-        export CC=powerpc64le-linux-gnu-gcc
-        ;;
-      "linux/s390x")
-        export CGO_ENABLED=1
-        export CC=s390x-linux-gnu-gcc
-        ;;
-    esac
+  # Apply standard values for CGO_ENABLED and CC unless KUBE_BUILD_PLATFORMS is set.
+  if [ -z "${KUBE_BUILD_PLATFORMS:-}" ] ; then
+      export CGO_ENABLED=0
+      export CC=gcc
+      return
   fi
+  # Dynamic CGO linking for other server architectures goes here
+  # If you want to include support for more server platforms than these, add arch-specific gcc names here

this is no longer guarded by being on a supported build host.

bnrjee

comment created time in 4 days

Pull request review comment kubernetes/kubernetes

Always set relevant variables for cross compiling

 kube::golang::set_platform_envs() {
   export GOOS=${platform%/*}
   export GOARCH=${platform##*/}
-
-  # Do not set CC when building natively on a platform, only if cross-compiling from linux/amd64
-  if [[ $(kube::golang::host_platform) == "linux/amd64" ]]; then
-    # Dynamic CGO linking for other server architectures than linux/amd64 goes here
-    # If you want to include support for more server platforms than these, add arch-specific gcc names here
-    case "${platform}" in
-      "linux/arm")
-        export CGO_ENABLED=1
-        export CC=arm-linux-gnueabihf-gcc
-        ;;
-      "linux/arm64")
-        export CGO_ENABLED=1
-        export CC=aarch64-linux-gnu-gcc
-        ;;
-      "linux/ppc64le")
-        export CGO_ENABLED=1
-        export CC=powerpc64le-linux-gnu-gcc
-        ;;
-      "linux/s390x")
-        export CGO_ENABLED=1
-        export CC=s390x-linux-gnu-gcc
-        ;;
-    esac
+  # Apply standard values for CGO_ENABLED and CC unless KUBE_BUILD_PLATFORMS is set.

why?

bnrjee

comment created time in 4 days


issue comment kubernetes/kubernetes

Support customized gcc and cross compilation of kubernetes on platforms other than amd64

Also, cross compilation should be supported on platforms other than amd64.

Sure, that will require cross building the build toolset itself, which nobody has taken up yet.

bnrjee

comment created time in 4 days

pull request comment kubernetes-sigs/kind

bump debian iptables base image

/lgtm
/approve

aojea

comment created time in 4 days

push event aojea/kind

Benjamin Elder

commit sha 1d2c89f568ec8ece6ebd33dc3e35a218c92b6352

bump kindnetd

view details

push time in 4 days

push event BenTheElder/kind

Kannan Manickam

commit sha 77e7b478ec95665a4fe6bc967f41ae2247bcd878

Update ingress example to use networking.k8s.io

extensions api group is deprecated in newer versions of Kubernetes.

view details

Benjamin Elder

commit sha 3baa8682dd76de481ba38f38946ba5507befb4eb

Merge pull request #1856 from arangamani/patch-1 Update ingress example to use networking.k8s.io

view details

push time in 4 days

pull request comment kubernetes/kubernetes

Build non-static Kubernetes binaries as position-independent executables (PIE)

Correct me if I'm wrong, but I believe bazel already supports building with pie using the -linkmode=pie attribute. This works in the same way as the go -buildmode flag and accepts "pie" as a valid argument value.

having support is great! can we please enable it in this PR so as to avoid skew between the build outputs?

abhay-krishna

comment created time in 4 days

push event BenTheElder/test-infra

google-oss-robot

commit sha 270ba6435445a8057a65acb3f47b4ff10768ffb2

Update TestGrid for GoogleCloudPlatform

view details

Cole Wagner

commit sha 583f2ac4cc09968519e258df62642872528c5d6b

Improve SLO dashboard layout and labeling.

view details

Derek McGowan

commit sha 1dada7a0307f271851d7ce2bb14340bcdd31d18d

Add containerd to managed_webhooks Use managed webhooks for containerd/containerd Signed-off-by: Derek McGowan <derek@mcg.dev>

view details

Kubernetes Prow Robot

commit sha 5aaed051bfed4e883d54c9433350108fe8b13d99

Merge pull request #19266 from cjwagner/slo-dashboard-tweaks Improve SLO dashboard layout and labeling.

view details

Stephen Augustus

commit sha 77cd71acc80ce692320ee544ca1d488833e60a29

releng: Update k/release CI image to not use bazel Signed-off-by: Stephen Augustus <saugustus@vmware.com>

view details

Kubernetes Prow Robot

commit sha feccbf60454db394d955e00977a9e2829712f050

Merge pull request #19268 from dmcgowan/add-containerd-managed-webhook Add containerd to managed_webhooks

view details

Stephen Augustus

commit sha 65814509185b90e614d091dc14606c5da3092a21

releng: Update k/release presubmits - Don't use bazel - Add a verify test Signed-off-by: Stephen Augustus <saugustus@vmware.com>

view details

Kubernetes Prow Robot

commit sha 9c48810f9271f6cfe81dd6fedb565e3ab527181c

Merge pull request #19269 from justaugustus/releng-ci releng: Update k/release CI image to not use bazel and update presubmits

view details

Kubernetes Prow Robot

commit sha 6b5cd0510f3230e54374bb602ddd613cf2b178e3

Merge pull request #19265 from google-oss-robot/transfigure-branch Update TestGrid for GoogleCloudPlatform

view details

push time in 4 days

pull request comment kubernetes/kubernetes

Automated cherry pick of #94287: Update default etcd server to 3.4.13

images in that registry are managed here https://github.com/kubernetes/k8s.io/tree/master/k8s.gcr.io

I don't think arch specific names are being pushed currently, only multi-arch.

https://github.com/kubernetes/k8s.io/blob/ba15ca822cdc857394102fdef0838df08c913eca/k8s.gcr.io/images/k8s-staging-etcd/images.yaml

On Fri, Sep 18, 2020 at 12:30 AM Takuhiro Yoshida notifications@github.com wrote:

@jingyih https://github.com/jingyih hi, we use k8s.gcr.io/etcd-amd64, but 3.4.13 or 3.4.13-0 is not pushed yet. Is it also necessary to cherry-pick #94260 https://github.com/kubernetes/kubernetes/pull/94260 ?


jingyih

comment created time in 4 days

pull request comment kubernetes-sigs/kind

Lint Configuration via YAML

/approve

agree with @neolit123, going ahead and approving but leaving LGTM for the comments.

kensipe

comment created time in 4 days

push event kubernetes-sigs/kind

Kannan Manickam

commit sha 77e7b478ec95665a4fe6bc967f41ae2247bcd878

Update ingress example to use networking.k8s.io

extensions api group is deprecated in newer versions of Kubernetes.

view details

Benjamin Elder

commit sha 3baa8682dd76de481ba38f38946ba5507befb4eb

Merge pull request #1856 from arangamani/patch-1 Update ingress example to use networking.k8s.io

view details

push time in 4 days

PR merged kubernetes-sigs/kind

Update ingress example to use networking.k8s.io

Labels: approved, cncf-cla: yes, kind/documentation, lgtm, ok-to-test, size/XS

extensions api group is deprecated in newer versions of Kubernetes.

+1 -1

9 comments

1 changed file

arangamani

pr closed time in 4 days

Pull request review comment kubernetes/kubernetes

Crio node e2e test fix

 func GetHostnameOrIP(hostname string) string {
 		host = ip
 	}
-	if *sshUser == "" {
+	// Do not ignore the KUBE_SSH_USER if it's set explicitly
+	if os.Getenv("KUBE_SSH_USER") != "" {

I agree that if --ssh-user is set we should respect that. It also seems like https://github.com/kubernetes/test-infra/blob/feccbf60454db394d955e00977a9e2829712f050/kubetest/e2e.go#L575 could handle KUBE_SSH_USER overriding USER

harche

comment created time in 5 days


push eventBenTheElder/test-infra

Grant Mccloskey

commit sha cfd473877a6b5b4318a2488d909b10406a4bed0d

Add more jobs to kettle's garbage job list

view details

Grant Mccloskey

commit sha 42248a98172db95fe31f8ace8aef7f12538fb3ff

Fix pylint garbage list to long

view details

Grant Mccloskey

commit sha 318255f20aebc9fbf3e1512f04301b5de2941741

Remove jobs that no longer exist from garbage They are extra garbage now

view details

Alvaro Aleman

commit sha 3797e394d2cdc39d94f9bee8b5c1dd50e713c44e

Use prow-controller-manager/plank v2

view details

Markus Lehtonen

commit sha ce851f90f82d7f6c43f6cafd0c2cc9d91638c5f5

Add build presubmit job for node-feature-discovery Verify non-container build of the binaries.

view details

Jing Xu

commit sha 8da2697adc5f0511475e44714d21f984d9c86697

Add config job for running gce pd driver on win1909 Add CI and presubmit jobs for running gce pd driver on win1909 nodes

view details

RobertKielty

commit sha fb90f21d95319fac2cf46de86746c4be760249f2

removes canary job that tested migration changes The configuration changes were ... migrating pull-kubernetes-e2e-gce-network-proxy-http-connect onto the k8s-infra-prow-build cluster removing container startup flag --gcp-project=k8s-network-proxy-e2e

view details

google-oss-robot

commit sha dd541dd1800225e4b5ace0ba97cb8b19e4941d74

Update TestGrid for GoogleCloudPlatform

view details

Kubernetes Prow Robot

commit sha 3d1bda602a66861d481c35cff2b44b1b88d0df10

Merge pull request #19163 from google-oss-robot/transfigure-branch Update TestGrid for GoogleCloudPlatform

view details

Aaron Crickenberger

commit sha 5be27c8f6f19a1ab33b0ae4a1f0ffb2ea57e88af

Migrate pull-kubernetes-bazel-test to k8s-infra-prow-build

view details

Cole Wagner

commit sha dab5bcad669f3e5ffc44c2dbc5630f4acc281cf6

Fix Crier handling of reporters that report multiple PJs.

view details

Kubernetes Prow Robot

commit sha b20d089fe6a564aa9c664e14d72557b2dff5b4b8

Merge pull request #19171 from cjwagner/crier-lister-wait-fix Fix Crier handling of reporters that report multiple PJs.

view details

Kubernetes Prow Robot

commit sha b6504f470674abea7f94cbcd9426b7b1a000c997

Merge pull request #19170 from spiffxp/migrate-pull-bazel-test-to-k8s-infra Migrate pull-kubernetes-bazel-test to k8s-infra-prow-build


Andy Zhang

commit sha 6b8506b702525e139952098168ca46cd12ef05d2

Update csi-driver-smb-config.yaml


Kubernetes Prow Robot

commit sha 5322fbfaacd1c443d5cbc3ca2e6f0e7a85b1be5f

Merge pull request #19174 from andyzhangx/patch-2 Update csi-driver-smb-config.yaml


istio-testing

commit sha 17aed478f07ae6e6929fa68b676cc51c9d91fb91

Update TestGrid for istio


Stephen Augustus

commit sha dfaa92159624f6f805d185b7762be09aa31f5736

releng(kubekins, krte): Update Golang versions to go1.15.2 Signed-off-by: Stephen Augustus <saugustus@vmware.com>


Stephen Augustus

commit sha 384dcf1505d52f1d9a42eb28eb6fe2b3e1545dc6

releng: Update build directory for kube-cross images We're taking advantage of some scripts that are above the kube-cross subdirectory, so we need to pull in the repo root as the build directory for image building now. Signed-off-by: Stephen Augustus <saugustus@vmware.com>


Kubernetes Prow Robot

commit sha 5dac5d5a7f8c562db0c2af67982a700745a4f6fd

Merge pull request #19176 from justaugustus/kube-cross-image-push releng: Update build directory for kube-cross images


Markus Lehtonen

commit sha 09ee61a5aefacf2effcf0a7f1bcb9ef8490eabe7

Fix a typo in node-feature-discovery postsubmit job config


push time in 5 days

started kubernetes-sigs/kind

started time in 5 days

issue comment kubernetes/test-infra

Retain LGTM through squashes

no, but that's not really comparable, because you get a naive squash commit, as opposed to a local squash where you can actually do a fixup / edit the commit message to make sense and drop any noise like "commit 2 == fix the thing"

On Wed, Sep 16, 2020 at 10:47 PM Grant McCloskey notifications@github.com wrote:

Does the tide/merge-method-squash label also remove lgtm?

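The local fixup workflow described above can be sketched end to end (a self-contained demo in a throwaway repo; names and messages are illustrative):

```shell
# Sketch: why a local squash beats GitHub's naive squash -- follow-up
# commits are recorded as fixups and folded away, keeping only the real
# commit message in the history.
set -e
repo="$(mktemp -d)" && cd "$repo" && git init -q
git config user.email demo@example.com && git config user.name demo
echo base > f && git add f && git commit -qm "base"
echo feat >> f && git add f && git commit -qm "add feature"
# The noisy "commit 2 == fix the thing" follow-up, recorded as a fixup:
echo fix >> f && git add f && git commit -q --fixup=HEAD
# Fold fixups into their targets non-interactively:
GIT_SEQUENCE_EDITOR=: git rebase -q -i --autosquash HEAD~2
git log --format=%s   # "add feature", then "base": the noise is gone
```

The `--fixup` / `--autosquash` pair is what makes this cheap to do before pushing.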

sftim

comment created time in 5 days

pull request comment kubernetes-sigs/kubetest2

[GKE] Less strict version checking

/lgtm /approve

amwat

comment created time in 5 days

push event BenTheElder/kind

Benjamin Elder

commit sha f9db81c462abccb2c3a527cdf54160e681579554

remove unnecessary usage of CombinedOutputLines we only want stdout for these

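The distinction matters when output is parsed. A minimal Go illustration (not kind's actual exec helpers) of capturing stdout only, so stderr noise can't leak into parsed lines:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// outputLines returns the command's stdout split into lines. Using
// cmd.Output() instead of cmd.CombinedOutput() keeps stderr out of
// anything we later parse. (Illustrative sketch only.)
func outputLines(name string, args ...string) ([]string, error) {
	out, err := exec.Command(name, args...).Output()
	if err != nil {
		return nil, err
	}
	return strings.Split(strings.TrimSuffix(string(out), "\n"), "\n"), nil
}

func main() {
	lines, err := outputLines("sh", "-c", "echo stdout; echo stderr >&2")
	if err != nil {
		panic(err)
	}
	fmt.Println(lines) // only the stdout line; stderr never appears
}
```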

Kubernetes Prow Robot

commit sha 1b4b217cfd74fc2181ce0d9888a80ed1a4cd0161

Merge pull request #1851 from BenTheElder/outputlines remove unnecessary usage of CombinedOutputLines


Ken Sipe

commit sha 88935152c645c4bacdf06af3806448441280c10d

export logs kind version Signed-off-by: Ken Sipe <kensipe@gmail.com>


Kubernetes Prow Robot

commit sha 4c9c99bdf94a92ce3c7b21e10b92e3cd284d52c0

Merge pull request #1854 from kensipe/kind-version-log export logs kind version


push time in 5 days

issue comment kubernetes-sigs/kind

Enable Simulation of automatically provisioned ReadWriteMany PVs

that's unfortunate. we're shipping the latest available in the distro at the moment (ubuntu 20.10); if it's fixed in ubuntu, we'll pick it up in a future kind image.

joshatcaper

comment created time in 5 days

pull request comment kubernetes/kubernetes

build: Migrate pause image building to k/release

+1 on moving the pause image code out of k/k being strange. This isn't a base / dependency image; it's a critical component image.

justaugustus

comment created time in 5 days

issue comment kubernetes/kubernetes

Env var to set location of .kube directory

This isn't going in without someone driving agreement with the owners that this should exist. Please go discuss with SIG CLI (mailing list, slack, or their zoom meeting). When I brought it up before the response was negative.

OJFord

comment created time in 5 days

issue closed kubernetes-sigs/kind

Inline Create Cluster Config

What would you like to be added:

Ability to use inline config upon cluster creation.

Why is this needed:

No files are needed on the file system. Easier to manage when KinD is used in automations. You'll be able to pass the config via just an environment variable.

closed time in 5 days

Xtigyro

issue comment kubernetes-sigs/kind

Inline Create Cluster Config

You can already pass a config file via stdin with --config=-, which gives you various options for making it inline.
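For example, a minimal inline invocation via a heredoc (assumes kind is on your PATH; the config shown is an illustrative single-node cluster):

```shell
# Pass the cluster config on stdin -- no file ever touches disk.
cat <<EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
EOF
```

Since the config arrives on stdin, it can just as easily come from an environment variable: `echo "$KIND_CONFIG" | kind create cluster --config=-`.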

Xtigyro

comment created time in 5 days

issue comment aws/aws-controllers-k8s

Test controllers architectural compatiblity

I think long term that's the right, most powerful answer, xref: https://github.com/kubernetes/kubernetes/issues/88553 there's some discussion of removing it.

I'd like pre-built multi-arch images in kind eventually, but constraints on the rather non-standard node image build are going to make that a bit of a mess, and even if we landed that you'd be stuck with whatever versions we'd pre-published.

Getting the make build to work on not-amd64 will be a win for others beyond kind as well. I reached out in #sig-release (kubernetes slack) and received this pointer to the upstream tracking for that: https://github.com/kubernetes/release/blob/0816bc88c00d682b8a05fe374fe52c7c764d05d8/images/build/cross/Makefile#L27-L31

A-Hilaly

comment created time in 6 days

delete branch BenTheElder/kind

delete branch : update-docs

delete time in 6 days

pull request comment kubernetes-sigs/kind

Update ingress example to use networking.k8s.io

/lgtm /approve thanks!

arangamani

comment created time in 6 days

pull request comment kubernetes-sigs/kind

export logs kind version

/lgtm /approve thank you!

kensipe

comment created time in 6 days

pull request comment kubernetes-sigs/kind

export logs kind version

@aojea we probably want to switch to smoke-test style jobs for older kubernetes releases in the future, still thinking about how best to do that. I asked @amwat to write a proposal for version support policy first. /override pull-kind-conformance-parallel-1-15

kensipe

comment created time in 6 days

pull request comment kubernetes-sigs/kind

add config command

Hmm, I'm not sure if we want to add this, but if we did it would still be something like kind [verb] [noun] for consistency. In the future it's generally preferable to open an issue for feature additions first, so we can hash that sort of thing out up front :sweat_smile:

yylt

comment created time in 6 days

Pull request review comment kubernetes-sigs/kind

Appeasing the Angry Linter

 package docker  import ( 	"bytes"-	"crypto/sha1"+	"crypto/sha1" //nolint:gosec // used as a seed to subnet generator

I think we should probably just disable this linter wholesale; I'm not aware of it catching much that's useful, and it complains about a lot of intentional, well-meant behavior

kensipe

comment created time in 6 days

Pull request review comment kubernetes-sigs/kind

Appeasing the Angry Linter

 func (c *buildContext) prePullImages(bits kube.Bits, dir, containerID string) ([ 	return importer.ListImported() } +func pullImageFns(requiredImages []string, builtImages sets.String, imagesDir string, pulledImages chan string, logger log.Logger) []func() error {

I'm -1 on this; it loses a ton of context, and this method takes way too many parameters.

This whole file does need refactoring, but I'd rather not do just enough to silence some linter. We're going to continue restructuring build to solve some open issues as-is.

kensipe

comment created time in 6 days

Pull request review comment kubernetes-sigs/kind

Appeasing the Angry Linter

 https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands - If $KUBECONFIG environment variable is set, then it is used as a list of paths (normal path delimiting rules for your system). These paths are merged. When a value is modified, it is modified in the file that defines the stanza. When a value is created, it is created in the first file that exists. - If no files in the chain exist, then it creates the last file in the list.  - Otherwise, ${HOME}/.kube/config is used and no merging takes place.-*/+*/ // nolint:lll

this linter should probably just be disabled. We're not using 80-character-width hardware terminals in 2020.

kensipe

comment created time in 6 days


pull request comment kubernetes-sigs/kind

Appeasing the Angry Linter

hmm, the linters should be running and passing in CI already?

kensipe

comment created time in 6 days

Pull request review comment kubernetes/release

Add links to rapture RPM/Debian lists

 The entire build process takes several hours. Once you are ready to begin, the d  ### Validating packages -Now that `rapture` has successfully complete, we need to verify the packages that were just created. This validation can be done on any instance where Kubernetes is not already installed. (Ideally, you would want to spin up a new VM to test.)+Now that `rapture` has successfully complete, we need to verify the packages that were just published and created correctly.++To check for publish success, see the [Debian package list][deb-package-list] and [RPM package list][rpm-package-list] for the versions that were just uploaded. Or curl on cmdline for Debian and RPM respectively:+```

can we wrap this up into a script command that takes the version instead? the less manual hackery the better.

MushuEE

comment created time in 6 days


pull request comment kubernetes-sigs/kubetest2

[GKE deployer] Enable building from source

this seems like a solid starting point let's iterate from here /lgtm /approve

amwat

comment created time in 6 days

pull request comment kubernetes-sigs/kind

export logs kind version

There's no kind binary on the node to call, and even if there was we want to know the kind version on the host doing the log dumping.

This import is fine. Please squash these commits and remove the identical TODO from the other (podman) location.

kensipe

comment created time in 6 days

issue comment kubernetes-sigs/kind

export logs: add kind version

this actually already had a // TODO comment, +1, it's past time to get to that. https://github.com/kubernetes-sigs/kind/blob/1b4b217cfd74fc2181ce0d9888a80ed1a4cd0161/pkg/cluster/internal/providers/docker/provider.go#L249

howardjohn

comment created time in 6 days

pull request comment kubernetes-sigs/kind

export logs kind version

we should drop the TODO line and see if it was copied into the podman code and drop it there too if so

kensipe

comment created time in 6 days

pull request comment kubernetes-sigs/kind

export logs kind version

That seems like a good place to put it 👍 I think this TODO has just been carried over from before the provider abstraction existed.

kensipe

comment created time in 6 days

pull request comment kubernetes-sigs/kind

export logs kind version

Thanks for the PR!

This should be implemented somewhere else; we want it to happen for all of the node providers (docker, podman, ...)

kensipe

comment created time in 7 days

issue comment kubernetes-sigs/kind

network kind is ambiguous

FYI @howardjohn @JeremyOT it should be safe to do concurrent multi-cluster bringup in CI in v0.9.0 without any workarounds.
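A sketch of what that enables in a CI script (assumes kind v0.9.0+ on PATH; cluster names are illustrative):

```shell
# Bring up multiple clusters concurrently -- no serialization workaround
# needed in v0.9.0+.
for i in 1 2; do
  kind create cluster --name "ci-${i}" &   # each bringup in the background
done
wait   # block until every bringup finishes
```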

jdef

comment created time in 7 days

delete branch BenTheElder/kind

delete branch : outputlines

delete time in 7 days

issue comment kubernetes-sigs/kind

Kind fails kubadm command: unknown flag --skip-phases

This one is really unfortunate and surprising, I will have to dig later to find out what the thinking was upstream :disappointed:

I think we may be able to re-introduce 1.13 support in the future, but it will require a somewhat involved rewrite of bootstrapping to use the phases directly. It's something on our radar but we have some other large changes to tackle first. We also need to speak to the kubeadm team more before moving on this as it alters the upstream test coverage.

FraBle

comment created time in 7 days

issue closed kubernetes-sigs/kind

Kind fails kubadm command: unknown flag --skip-phases

<!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks!-->

What happened:

We've upgraded the local tooling on macOS from kind 0.8.1 to 0.9.0. Creating clusters now breaks with a kubeadm error.

ERROR: failed to create cluster: failed to join node with kubeadm: command "docker exec --privileged skynet-worker kubeadm join --config /kind/kubeadm.conf --skip-phases=preflight --v=6" failed with error: exit status 1
Command Output: Error: unknown flag: --skip-phases

What you expected to happen:

Cluster coming up healthy.

How to reproduce it (as minimally and precisely as possible):

kind.json:

{
    "kind": "Cluster",
    "apiVersion": "kind.x-k8s.io/v1alpha4",
    "nodes": [
        {
            "role": "control-plane",
            "image": "$KUBE_NODE_IMAGE",
            "extraMounts": [
                {
                    "hostPath": "/Users/$USER/.docker/config.json",
                    "containerPath": "/var/lib/kubelet/config.json"
                }
            ]
        },
        {
            "role": "worker",
            "image": "$KUBE_NODE_IMAGE",
            "extraMounts": [
                {
                    "hostPath": "/Users/$USER/.docker/config.json",
                    "containerPath": "/var/lib/kubelet/config.json"
                }
            ]
        }
    ]
}

setup.sh:

#!/bin/bash
set -e

export KUBE_NODE_IMAGE="kindest/node:v1.13.12@sha256:214476f1514e47fe3f6f54d0f9e24cfb1e4cda449529791286c7161b7f9c08e7"
config="$(cat kind.json | envsubst)"
kind create cluster --name skynet --config=<(echo "$config")

Terminal output

./setup.sh
Creating cluster "skynet" ...
 ✓ Ensuring node image (kindest/node:v1.13.12) 🖼
 ✓ Preparing nodes 📦 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
 ✗ Joining worker nodes 🚜
ERROR: failed to create cluster: failed to join node with kubeadm: command "docker exec --privileged skynet-worker kubeadm join --config /kind/kubeadm.conf --skip-phases=preflight --v=6" failed with error: exit status 1
Command Output: Error: unknown flag: --skip-phases
Usage:
  kubeadm join [flags]

Flags:
      --apiserver-advertise-address string            If the node should host a new control plane instance, the IP address the API Server will advertise it's listening on.
      --apiserver-bind-port int32                     If the node should host a new control plane instance, the port for the API Server to bind to. (default 6443)
      --config string                                 Path to kubeadm config file.
      --cri-socket string                             Specify the CRI socket to connect to. (default "/var/run/dockershim.sock")
      --discovery-file string                         A file or URL from which to load cluster information.
      --discovery-token string                        A token used to validate cluster information fetched from the API server.
      --discovery-token-ca-cert-hash strings          For token-based discovery, validate that the root CA public key matches this hash (format: "<type>:<value>").
      --discovery-token-unsafe-skip-ca-verification   For token-based discovery, allow joining without --discovery-token-ca-cert-hash pinning.
      --experimental-control-plane                    Create a new control plane instance on this node
  -h, --help                                          help for join
      --ignore-preflight-errors strings               A list of checks whose errors will be shown as warnings. Example: 'IsPrivilegedUser,Swap'. Value 'all' ignores errors from all checks.
      --node-name string                              Specify the node name.
      --token string                                  Use this token for both discovery-token and tls-bootstrap-token when those values are not provided.

Global Flags:
      --log-file string   If non-empty, use this log file
      --rootfs string     [EXPERIMENTAL] The path to the 'real' host root filesystem.
      --skip-headers      If true, avoid header prefixes in the log messages
  -v, --v Level           log level for V logs

error: unknown flag: --skip-phases

Anything else we need to know?:

  • N/A

Environment:

  • kind version: (use kind version): kind v0.9.0 go1.15.2 darwin/amd64
  • Kubernetes version: (use kubectl version):
    Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.12", GitCommit:"a8b52209ee172232b6db7a6e0ce2adc77458829f", GitTreeState:"clean", BuildDate:"2019-10-15T12:12:15Z", GoVersion:"go1.11.13", Compiler:"gc", Platform:"darwin/amd64"}
    
  • Docker version: (use docker info):
    Server Version: 19.03.12
    Kernel Version: 4.19.76-linuxkit
    Operating System: Docker Desktop
    
  • OS (e.g. from /etc/os-release): macOS Catalina 10.15.6 (19G2021)

closed time in 7 days

FraBle

issue comment kubernetes-sigs/kind

Kind fails kubadm command: unknown flag --skip-phases

See existing discussion: https://github.com/kubernetes-sigs/kind/pull/1744#issuecomment-692905480

The release notes have been updated. 1.14+ will be required for multi-node clusters for now.

FraBle

comment created time in 7 days

pull request comment kubernetes/test-infra

kind-conformance-image: use latest kind like the rest of CI

thanks @mikedanese :+1:

mikedanese

comment created time in 7 days

pull request comment kubernetes/test-infra

kind-conformance-image: use latest kind like the rest of CI

/lgtm /approve

mikedanese

comment created time in 7 days

Pull request review comment kubernetes/test-infra

kind-conformance-image: use latest kind like the rest of CI

 cleanup() { install_kind() {     # install `kind` to tempdir     TMP_DIR=$(mktemp -d)-    curl -sLo "${TMP_DIR}/kind" https://github.com/kubernetes-sigs/kind/releases/download/${STABLE_KIND_VERSION}/kind-linux-amd64+    curl -sLo "${TMP_DIR}/kind" https://kind.sigs.k8s.io/dl/latest/kind-linux-amd64

Actually, it appears the CI version of this job is not even in release informing currently, let alone blocking.

mikedanese

comment created time in 7 days


Pull request review comment kubernetes/test-infra

kind-conformance-image: use latest kind like the rest of CI

 cleanup() { install_kind() {     # install `kind` to tempdir     TMP_DIR=$(mktemp -d)-    curl -sLo "${TMP_DIR}/kind" https://github.com/kubernetes-sigs/kind/releases/download/${STABLE_KIND_VERSION}/kind-linux-amd64+    curl -sLo "${TMP_DIR}/kind" https://kind.sigs.k8s.io/dl/latest/kind-linux-amd64

There was a new release tagged last night, but switching to latest here is reasonable.

I think this job should be demoted out of release blocking and presubmit given the lack of maintainership; it's frequently failing. https://testgrid.k8s.io/conformance-all#conformance-image,%20master%20(dev)

For some additional context:

The KIND maintainers actively support all of the other jobs using this latest binary, and we block kind PRs on running mirrors of those jobs.

The latest (nightly / continuous) binary is only supported for upstream k/k testing where we need to move quicker to work around upstream breaking changes like this.

This image is really only consumed by sonobuoy, which is not upstream.

mikedanese

comment created time in 7 days


pull request comment kubernetes/test-infra

kind-conformance-image: use latest kind like the rest of CI

FYI this job is not blocking for a reason: it's something the sonobuoy folks more or less set up to get coverage for the e2e.test wrapper image, but it's not something anyone has been looking after properly. Upstream CI does not otherwise use the e2e wrapper image, for various reasons.

I would be in favor of demoting this to CI given the limited need and the lack of upkeep.

mikedanese

comment created time in 7 days

pull request comment kubernetes-sigs/kind

skip preflight and remove Kubernetes < 1.13 workarounds / code

Sorry about that; we should have tested this. Shoring up the CI and policy around older versions is something we're looking at this quarter. I've updated the release notes.

Officially the Kubernetes project will not support a release this old; kind currently supports older releases best-effort, but only 1.17+ are open for patches / support upstream. https://kubernetes.io/docs/setup/release/version-skew-policy/

BenTheElder

comment created time in 7 days

pull request comment kubernetes-sigs/kind

skip preflight and remove Kubernetes < 1.13 workarounds / code

We're currently dropping support for Kubernetes versions without this functionality upstream. It seems this works in single-node clusters but not multi-node in 1.13, which was an oversight in the release notes.

It appears that while 1.13 has this flag, it only has it in kubeadm init and not kubeadm join, whereas 1.14+ has it in both. I will add a breaking change note to the release notes for now. In the future we may have an alternative workaround.

BenTheElder

comment created time in 7 days

issue comment kubernetes-sigs/kind

CoreDNS metrics vary in node version 1.19.1

A note of dns variance in change notes is also fine!

The first breaking change entry at the top of the release notes states:

The default node image is a Kubernetes v1.19.1 image: kindest/node:v1.19.1@sha256:98cf5288864662e37115e362b23e4369c8c4a408f99cbc06e58ac30ddc721600

https://github.com/kubernetes-sigs/kind/releases/tag/v0.9.0#breaking-changes

If we check the kubernetes 1.19 release notes:

https://kubernetes.io/docs/setup/release/notes/#feature

ACTION REQUIRED : In CoreDNS v1.7.0, metrics names have been changed which will be backward incompatible with existing reporting formulas that use the old metrics' names. Adjust your formulas to the new names before upgrading.

Kubeadm now includes CoreDNS version v1.7.0. Some of the major changes include:

  • Fixed a bug that could cause CoreDNS to stop updating service records.
  • Fixed a bug in the forward plugin where only the first upstream server is always selected no matter which policy is set.
  • Remove already deprecated options resyncperiod and upstream in the Kubernetes plugin.
  • Includes Prometheus metrics name changes (to bring them in line with standard Prometheus metrics naming convention). They will be backward incompatible with existing reporting formulas that use the old metrics' names.
  • The federation plugin (allows for v1 Kubernetes federation) has been removed. More details are available in https://coredns.io/2020/06/15/coredns-1.7.0-release/ (#92651, @rajansandeep) [SIG API Machinery, CLI, Cloud Provider, Cluster Lifecycle and Instrumentation]
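For dashboards affected by this, the rename is along these lines (illustrative example only; consult the CoreDNS 1.7.0 release notes for the authoritative mapping):

```
# before (CoreDNS <= 1.6.x)
sum(rate(coredns_dns_request_count_total[5m]))
# after (CoreDNS 1.7.0+, as shipped with Kubernetes 1.19)
sum(rate(coredns_dns_requests_total[5m]))
```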
sjmiller609

comment created time in 7 days

issue comment kubernetes-sigs/kind

CoreDNS metrics vary in node version 1.19.1

I expected to find the same metrics between kubernetes versions on core dns in Kind.

kind doesn't do anything special to CoreDNS metrics; however, different Kubernetes versions ship different versions of CoreDNS in the kubeadm internal addon.

We should check if this changed in CoreDNS.

sjmiller609

comment created time in 7 days

pull request comment kubernetes-sigs/kind

remove unnecessary usage of CombinedOutputLines

/retest

BenTheElder

comment created time in 7 days

PR opened kubernetes-sigs/kind

remove unnecessary usage of CombinedOutputLines

we only want stdout for these

there's two remaining usages, where we're only going to debug the output, not parse it (kubeadm init / kubeadm join)

+12 -13

0 comment

11 changed files

pr created time in 7 days

create branch BenTheElder/kind

branch : outputlines

created branch time in 7 days

push eventBenTheElder/kind

Benjamin Elder

commit sha 3dedc7ffe66102558f3136883dc8dae0c7951cea

update install docs for v0.9.0


Benjamin Elder

commit sha 3355a40018848073f42be4409ca6234001439723

Merge pull request #1849 from BenTheElder/update-docs update install docs for v0.9.0


push time in 7 days

issue comment kubernetes/kubernetes

systemd specs should be in-repo

This is still true: for anyone not using the debs/rpms, you either have to reinvent these or do quite a bit of digging to turn up the upstream ones. Consumers are not likely to be aware of kubernetes/release to begin with.
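What consumers end up reinventing is on the order of the following (an illustrative minimal unit in the spirit of the kubernetes/release one, not the canonical file; check kubernetes/release for the real spec and drop-ins):

```
[Unit]
Description=kubelet: The Kubernetes Node Agent
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/usr/bin/kubelet
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```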

BenTheElder

comment created time in 7 days

push event kubernetes-sigs/kind

Benjamin Elder

commit sha 3dedc7ffe66102558f3136883dc8dae0c7951cea

update install docs for v0.9.0


Benjamin Elder

commit sha 3355a40018848073f42be4409ca6234001439723

Merge pull request #1849 from BenTheElder/update-docs update install docs for v0.9.0


push time in 8 days

PR merged kubernetes-sigs/kind

update install docs for v0.9.0 (approved, cncf-cla: yes, size/S)

everything looks to be in order, bumping the references in the docs

+9 -9

1 comment

3 changed files

BenTheElder

pr closed time in 8 days

Pull request review comment kubernetes-sigs/kubetest2

[Clusterloader2] Implement a clusterloader2 tester

 func bindFlags(d *deployer) *pflag.FlagSet { 	return flags } -// assert that deployer implements types.Deployer-var _ types.Deployer = &deployer{}+// assert that deployer implements types.DeployerWithKubeconfig

Is there any deployer that doesn't have kubeconfig?

amwat

comment created time in 8 days


Pull request review comment kubernetes-sigs/kubetest2

[Clusterloader2] Implement a clusterloader2 tester

+package clusterloader2++import (+	"flag"+	"fmt"+	"os"+	"path/filepath"+	"strings"++	"github.com/octago/sflags/gen/gpflag"+	"k8s.io/klog"++	"sigs.k8s.io/kubetest2/pkg/exec"+	suite "sigs.k8s.io/kubetest2/pkg/testers/clusterloader2/suite"+)++type Tester struct {+	Suites        string `desc:"Comma separated list of standard scale testing suites e.g. load, density"`+	TestOverrides string `desc:"Comma separated list of paths to the config override files. The latter overrides take precedence over changes in former files."`+	TestConfigs   string `desc:"Comma separated list of paths to test config files."`+	Provider      string `desc:"The type of cluster provider used (e.g gke, gce, skeleton)"`+	KubeConfig    string `desc:"Path to kubeconfig"`

(this also allows us to do multi-file for multi-cluster)

amwat

comment created time in 8 days
