If you are wondering where the data on this site comes from, please visit https://api.github.com/users/rajula96reddy/events. GitMemory does not store any data; it only uses NGINX to cache data for a period of time. The idea behind GitMemory is simply to give users a better reading experience.
Rajula Vineet Reddy (rajula96reddy) · Geneva, Switzerland · http://rajula96reddy.github.io/
Linux! Pharo! Kubernetes! IT Fellow @CERN, IIITB Alum '19, CERN Openlab Student '19, OMP Intern '18, GSoC '17

openmainframeproject-internship/LinuxOne_Kubernetes_SLES_Deployment_Documentation 2

Documentation on setting up Kubernetes under SLES on s390x. This work is a part of Open Mainframe Summer Internship 2018.

ClementMastin/clap-st 1

Command-line argument parsing for Pharo

nmdesai92/Projector_pi 1

Controlling Projector using Rpi

Alakazam03/ELK-Tutorial 0

Short setup for a demo visualization of apache-logs using logstash, kibana and elastic search

keerthivutla/ITDomains_Project 0

The Fun Begins !!!

openmainframeproject-internship/LinuxOne_Kubernetes_Canonical_Deployment_Documentation 0

Documentation on setting up Kubernetes under Ubuntu Linux 18.04 on s390x. This work is a part of Open Mainframe Summer Internship 2018.

rajula96reddy/2018-ESD_813-Real_Time_Operating_Systems 0

Assignments & other experimental work as a part of 2018-ESD 813-Real Time Operating Systems Course

rajula96reddy/Awesome-Linux-Software 0

🐧 A list of awesome applications, software, tools and other materials for Linux distros.

issue opened kubernetes/kubernetes

[Failing Job] gce-cos-master-alpha-features

Which jobs are failing?

  • gce-cos-master-alpha-features

Which tests are failing?

  • kubetest.Up (ci-kubernetes-e2e-gci-gce-alpha-features)

Since when has it been failing?

20/10/2021

Testgrid link

https://testgrid.k8s.io/sig-release-master-blocking#gce-cos-master-alpha-features

Reason for failure (if possible)

  • error during ./hack/e2e-internal/e2e-up.sh: exit status 1

  • W1020 14:42:16.077] Specify --start=52725 in the next get-serial-port-output invocation to get only the new output starting from here.
    W1020 14:42:19.592] scp: /var/log/cluster-autoscaler.log*: No such file or directory
    W1020 14:42:19.748] scp: /var/log/fluentd.log*: No such file or directory
    W1020 14:42:19.748] scp: /var/log/kubelet.cov*: No such file or directory
    W1020 14:42:19.749] scp: /var/log/startupscript.log*: No such file or directory
    W1020 14:42:19.753] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1]
    

Anything else we need to know?

No response

Relevant SIG(s)

/sig cloud-provider

created time in 3 days

issue opened kubernetes/kubernetes

[Failing Job] periodic-conformance-main-k8s-main

Which jobs are failing?

  • periodic-conformance-main-k8s-main

Which tests are failing?

  • capa-e2e-conformance.[unmanaged] [conformance] tests conformance
  • periodic-cluster-api-provider-aws-e2e-conformance-with-k8s-ci-artifacts

Since when has it been failing?

07/10/2021

Testgrid link

https://testgrid.k8s.io/sig-release-master-informing#gce-master-scale-correctness

Reason for failure (if possible)

 Failure [2259.300 seconds]
[1] [unmanaged] [conformance] tests
[1] /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/conformance/conformance_test.go:39
[1]   conformance [Measurement]
[1]   /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/conformance/conformance_test.go:56 

Anything else we need to know?

No response

Relevant SIG(s)

/sig cluster-lifecycle

created time in 14 days

issue closed kubernetes/kubernetes

[Failing test][sig-node] ci-kubernetes-node-kubelet-serial-containerd


Which jobs are failing:

node-kubelet-serial-containerd

Which test(s) are failing:

  • kubetest.Node Tests

Since when has it been failing:

20.09.2021 15:40 PDT

Testgrid link:

Testgrid: One of the failed jobs

Reason for failure:

JUnit

  • E2eNode Suite: [sig-node] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] [Slow] update Node.Spec.ConfigSource: state transitions: status and events should match expectations 
    _output/local/go/src/k8s.io/kubernetes/test/e2e_node/dynamic_kubelet_config_test.go:123
    Timed out after 60.000s.
    Expected
        <*errors.errorString | 0xc0016592d0>: {
            s: "checkConfigMetrics: case: reset via nil config source: expect metrics (metrics.KubeletMetrics)map[kubelet_node_config_last_known_good:[<*>kubelet_node_config_last_known_good{node_config_kubelet_key=\"\", node_config_resource_version=\"\", node_config_source=\"local\", node_config_uid=\"\"} => 1 @[0]] kubelet_node_config_error:[<*>kubelet_node_config_error => 0 @[0]] kubelet_node_config_assigned:[<*>kubelet_node_config_assigned{node_config_kubelet_key=\"\", node_config_resource_version=\"\", node_config_source=\"local\", node_config_uid=\"\"} => 1 @[0]] kubelet_node_config_active:[<*>kubelet_node_config_active{node_config_kubelet_key=\"\", node_config_resource_version=\"\", node_config_source=\"local\", node_config_uid=\"\"} => 1 @[0]]] but got (metrics.KubeletMetrics)map[kubelet_node_config_error:[<*>kubelet_node_config_error => 0 @[0]]]",
        }
    to be nil
    _output/local/go/src/k8s.io/kubernetes/test/e2e_node/dynamic_kubelet_config_test.go:1192
    
  • E2eNode Suite: [sig-node] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] [Slow] update Node.Spec.ConfigSource: non-nil last-known-good to a new non-nil last-known-good status and events should match expectations | 2m0s
    _output/local/go/src/k8s.io/kubernetes/test/e2e_node/dynamic_kubelet_config_test.go:82 Timed out after 60.000s. Expected     <*errors.errorString \| 0xc000d64880>: {         s: "checkConfigMetrics: case: reset via nil config source: expect metrics (metrics.KubeletMetrics)map[kubelet_node_config_assigned:[<*>kubelet_node_config_assigned{node_config_kubelet_key=\"\", node_config_resource_version=\"\", node_config_source=\"local\", node_config_uid=\"\"} => 1 @[0]] kubelet_node_config_active:[<*>kubelet_node_config_active{node_config_kubelet_key=\"\", node_config_resource_version=\"\", node_config_source=\"local\", node_config_uid=\"\"} => 1 @[0]] kubelet_node_config_last_known_good:[<*>kubelet_node_config_last_known_good{node_config_kubelet_key=\"\", node_config_resource_version=\"\", node_config_source=\"local\", node_config_uid=\"\"} => 1 @[0]] kubelet_node_config_error:[<*>kubelet_node_config_error => 0 @[0]]] but got (metrics.KubeletMetrics)map[kubelet_node_config_error:[<*>kubelet_node_config_error => 0 @[0]]]",     } to be nil _output/local/go/src/k8s.io/kubernetes/test/e2e_node/dynamic_kubelet_config_test.go:1192
    
  • e2e.go: Node Tests | 2h57m10s
    error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=k8s-jkns-gke-ubuntu-as --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=1 --focus="\[Serial\]" --skip="\[Flaky\]\|\[Benchmark\]\|\[NodeSpecialFeature:.+\]\|\[NodeSpecialFeature\]\|\[NodeAlphaFeature:.+\]\|\[NodeAlphaFeature\]\|\[NodeFeature:Eviction\]" --test_args=--feature-gates=DynamicKubeletConfig=true,LocalStorageCapacityIsolation=true --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --container-runtime-process-name=/usr/bin/containerd --container-runtime-pid-file= --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/containerd.service" --extra-log="{\"name\": \"containerd.log\", \"journalctl\": [\"-u\", \"containerd*\"]}" --test-timeout=4h0m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/containerd/image-config-serial.yaml: exit status 1
    

Build Log


W0922 05:21:48.711] 
W0922 05:21:48.712] [Fail] [sig-node] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive]  [BeforeEach] [Slow] update Node.Spec.ConfigSource: non-nil last-known-good to a new non-nil last-known-good status and events should match expectations 
W0922 05:21:48.712] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/dynamic_kubelet_config_test.go:1192
W0922 05:21:48.712] 
W0922 05:21:48.712] Ran 39 of 355 Specs in 8719.853 seconds
W0922 05:21:48.712] FAIL! -- 37 Passed | 2 Failed | 1 Pending | 315 Skipped
W0922 05:21:48.712] --- FAIL: TestE2eNode (8719.96s)
W0922 05:21:48.712] FAIL
W0922 05:21:48.712] 
W0922 05:21:48.712] Ginkgo ran 1 suite in 2h25m20.061010454s
W0922 05:21:48.713] Test Suite Failed
W0922 05:21:48.713] , err: exit status 1
W0922 05:21:48.713] I0922 05:21:48.610179    6524 remote.go:198] Test failed unexpectedly. Attempting to retrieving system logs (only works for nodes with journald)
I0922 05:21:58.878] 
I0922 05:21:58.878] [Fail] [sig-node] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive]  [BeforeEach] [Slow] update Node.Spec.ConfigSource: non-nil last-known-good to a new non-nil last-known-good status and events should match expectations 
I0922 05:21:58.878] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/dynamic_kubelet_config_test.go:1192
I0922 05:21:58.879] 
I0922 05:21:58.879] Ran 39 of 355 Specs in 8719.853 seconds
I0922 05:21:58.879] FAIL! -- 37 Passed | 2 Failed | 1 Pending | 315 Skipped
I0922 05:21:58.879] --- FAIL: TestE2eNode (8719.96s)
I0922 05:21:58.879] FAIL
I0922 05:21:58.879] 
I0922 05:21:58.879] Ginkgo ran 1 suite in 2h25m20.061010454s
I0922 05:21:58.879] Test Suite Failed
I0922 05:21:58.879] 
I0922 05:21:58.879] Failure Finished Test Suite on Host n1-standard-2-cos-89-16108-534-2-eda1b091
I0922 05:42:15.050] Network Project: k8s-jkns-gke-ubuntu-as
I0922 05:42:15.050] Zone: us-west1-b
I0922 05:42:15.062] Dumping logs from master locally to '/workspace/_artifacts'
W0922 05:42:15.163] Trying to find master named 'bootstrap-e2e-master'
W0922 05:42:15.163] Looking for address 'bootstrap-e2e-master-ip'
W0922 05:42:16.656] ERROR: (gcloud.compute.addresses.describe) Could not fetch resource:
W0922 05:42:16.657]  - The resource 'projects/k8s-jkns-gke-ubuntu-as/regions/us-west1/addresses/bootstrap-e2e-master-ip' was not found
W0922 05:42:16.657] 
W0922 05:42:16.888] Could not detect Kubernetes master node.  Make sure you've launched a cluster with 'kube-up.sh'
I0922 05:42:16.988] Master not detected. Is the cluster up?
I0922 05:42:16.989] Dumping logs from nodes locally to '/workspace/_artifacts'
W0922 05:42:22.375]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0922 05:42:22.375]     subprocess.check_call(cmd, env=env)
W0922 05:42:22.375]   File "/usr/lib/python2.7/subprocess.py", line 190, in check_call
W0922 05:42:22.375]     raise CalledProcessError(retcode, cmd)
W0922 05:42:22.376] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--up', '--down', '--test', '--deployment=node', '--provider=gce', '--cluster=bootstrap-e2e', '--gcp-network=bootstrap-e2e', '--gcp-project-type=node-e2e-project', '--gcp-zone=us-west1-b', '--node-args=--image-config-file=/workspace/test-infra/jobs/e2e_node/containerd/image-config-serial.yaml', '--node-test-args=--feature-gates=DynamicKubeletConfig=true,LocalStorageCapacityIsolation=true --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --container-runtime-process-name=/usr/bin/containerd --container-runtime-pid-file= --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/containerd.service" --extra-log="{\\"name\\": \\"containerd.log\\", \\"journalctl\\": [\\"-u\\", \\"containerd*\\"]}"', '--node-tests=true', '--test_args=--nodes=1 --focus="\\[Serial\\]" --skip="\\[Flaky\\]|\\[Benchmark\\]|\\[NodeSpecialFeature:.+\\]|\\[NodeSpecialFeature\\]|\\[NodeAlphaFeature:.+\\]|\\[NodeAlphaFeature\\]|\\[NodeFeature:Eviction\\]"', '--timeout=240m')' returned non-zero exit status 1
E0922 05:42:22.404] Command failed
I0922 05:42:22.405] process 316 exited with code 1 after 177.4m
E0922 05:42:22.405] FAIL: ci-kubernetes-node-kubelet-serial-containerd
I0922 05:42:22.407] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0922 05:42:23.126] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0922 05:42:23.280] process 57042 exited with code 0 after 0.0m
I0922 05:42:23.280] Call:  gcloud config get-value account
I0922 05:42:24.040] process 57055 exited with code 0 after 0.0m
I0922 05:42:24.040] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0922 05:42:24.040] Upload result and artifacts...
I0922 05:42:24.041] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial-containerd/1440507048503545856
I0922 05:42:24.042] Call:  gsutil ls gs://kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial-containerd/1440507048503545856/artifacts
W0922 05:42:25.333] CommandException: One or more URLs matched no objects.
E0922 05:42:25.579] Command failed
I0922 05:42:25.580] process 57068 exited with code 1 after 0.0m
W0922 05:42:25.580] Remote dir gs://kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial-containerd/1440507048503545856/artifacts not exist yet
I0922 05:42:25.580] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial-containerd/1440507048503545856/artifacts
I0922 05:42:29.938] process 57207 exited with code 0 after 0.1m
I0922 05:42:29.939] Call:  git rev-parse HEAD 

/sig node

closed time in a month

rajula96reddy

issue comment kubernetes/kubernetes

[Failing test][sig-node] ci-kubernetes-node-kubelet-serial-containerd

This is already tracked in https://github.com/kubernetes/kubernetes/issues/104754, hence closing this.

rajula96reddy

comment created time in a month

issue opened kubernetes/kubernetes

[Failing test][sig-node] ci-kubernetes-node-kubelet-serial-containerd


Which jobs are failing:

node-kubelet-serial-containerd

Which test(s) are failing:

  • kubetest.Node Tests

Since when has it been failing:

20.09.2021 15:40 PDT

Testgrid link:

Testgrid: One of the failed jobs


/sig node

created time in a month

push event rajula96reddy/website

Rajula Vineet Reddy

commit sha e6add63933b99cdca2996b1fc2169fa6e7345303

1.22 feature blog for memory qos support

view details

push time in 2 months

pull request comment kubernetes/website

Add seccomp default feature blog post

All the nits are resolved now. Can we apply lgtm on this?

saschagrunert

comment created time in 2 months

push event rajula96reddy/website

Rajula Vineet Reddy

commit sha 9e36d88caac7ad389a5dd394f734a09f7d73baf3

Update publish date to 1st September 2021

view details

push time in 2 months

pull request comment kubernetes/website

Add memory manager moves to beta feature blog post 1.22

@rajula96reddy please update this publishing date to August 11. Thanks!

Updated!

rajula96reddy

comment created time in 2 months

push event rajula96reddy/website

Rajula Vineet Reddy

commit sha 629ac0e387d199ff6df96659973a98d1d2af9a7a

Add memory manager feature blog post Co-authored-by: Artyom Lukianov <alukiano@redhat.com> Co-authored-by: Cezary Zukowski <c.zukowski@samsung.com>

view details

push time in 2 months

Pull request review comment kubernetes/website

Add memory manager moves to beta feature blog post 1.22

+---
+layout: blog
+title: "Kubernetes Memory manager moves to beta"
+date: 2021-08-09
+slug: kubernetes-1-22-feature-memory-manager-moves-to-beta
+---
+
+**Authors:** Artyom Lukianov (Red Hat), Cezary Zukowski (Samsung)
+
+# Memory manager moves to beta
+
+The blog post describes the internals of the **Memory manager**, a beta feature of Kubernetes 1.22. The **Memory Manager** is a component in the **kubelet** ecosystem proposed to enable the feature of guaranteed memory (and hugepages) allocation for pods in the [Guaranteed QoS class](https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/#qos-classes).
+
+This blog post covers:
+
+1. Why do you need it?
+2. The internal details of how the **MemoryManager** works
+3. Current limitations of the **MemoryManager**
+4. Future work for the **MemoryManager**

Done!

rajula96reddy

comment created time in 2 months
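The review excerpt above concerns the Memory Manager's guarantees for pods in the Guaranteed QoS class. As background not spelled out in the excerpt: a pod lands in that class only when every container's CPU and memory requests equal its limits. A minimal illustrative manifest, with placeholder names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: guaranteed-demo        # hypothetical name
spec:
  containers:
    - name: app
      image: nginx             # any image; nginx is only an example
      resources:
        requests:
          cpu: "500m"
          memory: "256Mi"
        limits:
          cpu: "500m"          # requests == limits for every resource
          memory: "256Mi"      # => the pod is classified as Guaranteed
```

If either limit were raised above its request, the pod would drop to the Burstable class and the guarantees the post describes would not apply.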

PullRequestReviewEvent

push event rajula96reddy/website

Rajula Vineet Reddy

commit sha ef816e118105d2a78710f66040af3bc4956928a7

Add memory manager feature blog post Co-authored-by: Artyom Lukianov <alukiano@redhat.com> Co-authored-by: Cezary Zukowski <c.zukowski@samsung.com>

view details

push time in 2 months

MemberEvent

issue opened kubernetes-sigs/contributor-tweets

1.23 Release Shadow application tweet

Want to be part of the K8s 1.23 release team 🔨 ? Shadow application ends on 13th August. Apply ASAP 📝! https://forms.gle/7As7hacvMhxBQaox8

created time in 3 months

PullRequestReviewEvent

push event rajula96reddy/website

Rajula Vineet Reddy

commit sha aebdd79e0cd55675d3cf426cedbc04da1f65abfc

Add memory manager feature blog post Signed-off-by: Artyom Lukianov <alukiano@redhat.com>

view details

push time in 3 months

push event rajula96reddy/website

Rajula Vineet Reddy

commit sha 0c643c00f9102937d593989699a8dbe9f309aaa5

1.22 feature blog for memory qos support

view details

push time in 3 months

Pull request review comment kubernetes/website

1.22 feature blog for alpha swap support

+---
+layout: blog
+title: 'New in Kubernetes v1.22: alpha support for using swap memory'
+date: 2021-07-19

There were some minor changes, and the 1.21 comms team decided to move the date to 18th August. @ehashman will update the same here.

ehashman

comment created time in 3 months
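For context on the feature this blog PR covers: in Kubernetes v1.22, swap support on nodes is alpha, gated behind the NodeSwap feature gate, and the kubelet must additionally be allowed to start on a node with swap enabled. A sketch of the relevant KubeletConfiguration fields, with illustrative values:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false             # let the kubelet start even though swap is on
featureGates:
  NodeSwap: true              # alpha feature gate in v1.22
memorySwap:
  swapBehavior: LimitedSwap   # or UnlimitedSwap
```

Without `failSwapOn: false`, the kubelet refuses to start when swap is detected, regardless of the feature gate.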

PullRequestReviewEvent
MemberEvent

pull request comment kubernetes/website

[WIP] 1.22 feature blog for Node alpha swap support

Closing this, since we are tracking the blog in a new PR: https://github.com/kubernetes/website/pull/29060

rajula96reddy

comment created time in 3 months

PR closed kubernetes/website

Reviewers
[WIP] 1.22 feature blog for Node alpha swap support
Labels: area/blog, cncf-cla: yes, language/en, sig/docs, size/XS

Feature blog post for "Node Alpha swap support" (https://github.com/kubernetes/enhancements/issues/2400). This is part of the Kubernetes 1.22 release feature blogs.

Tentative publish date is 9th August 2021.

cc: @ehashman

Todo

  • [ ] Adjust publishing date
+6 -0

3 comments

1 changed file

rajula96reddy

pr closed time in 3 months

started iamadamdev/bypass-paywalls-firefox

started time in 3 months