If you are wondering where the data on this site comes from, please visit https://api.github.com/users/brendanburns/events. GitMemory does not store any data; it only uses NGINX to cache data for a period of time. The idea behind GitMemory is simply to give users a better reading experience.

brendanburns/kubernetes 2

Container Cluster Manager

brendanburns/droiddraw 0

Automatically exported from code.google.com/p/droiddraw

pull request comment kubernetes/kubernetes

Upgrade kustomize-in-kubectl to kustomize@v4.0.2

@monopole: The following tests failed, say /retest to rerun all failed tests:

Test name Commit Details Rerun command
pull-kubernetes-e2e-azure-file-windows afcbc5ca7dc2976ba6479bf2804a69c5194b9ec2 link /test pull-kubernetes-e2e-azure-file-windows
pull-kubernetes-e2e-azure-disk-windows afcbc5ca7dc2976ba6479bf2804a69c5194b9ec2 link /test pull-kubernetes-e2e-azure-disk-windows
pull-kubernetes-e2e-kind 9e0c94856934de24aee393683cc693446634849e link /test pull-kubernetes-e2e-kind
pull-kubernetes-conformance-kind-ga-only-parallel 9e0c94856934de24aee393683cc693446634849e link /test pull-kubernetes-conformance-kind-ga-only-parallel
pull-kubernetes-e2e-kind-ipv6 9e0c94856934de24aee393683cc693446634849e link /test pull-kubernetes-e2e-kind-ipv6
pull-kubernetes-e2e-gce-100-performance 9e0c94856934de24aee393683cc693446634849e link /test pull-kubernetes-e2e-gce-100-performance
pull-kubernetes-e2e-gce-ubuntu-containerd 9e0c94856934de24aee393683cc693446634849e link /test pull-kubernetes-e2e-gce-ubuntu-containerd
pull-kubernetes-bazel-test 9e0c94856934de24aee393683cc693446634849e link /test pull-kubernetes-bazel-test
pull-kubernetes-verify 9e0c94856934de24aee393683cc693446634849e link /test pull-kubernetes-verify

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

monopole

comment created time in 11 minutes

Pull request review comment kubernetes/kubernetes

Fix fs windows tests to pass on windows machine

 func TestDiskUsage(t *testing.T) {
 	if err != nil {
 		t.Fatalf("TestDiskUsage failed: %s", err.Error())
 	}
-	if size.Cmp(used) != 1 {
-		t.Fatalf("TestDiskUsage failed: %s", err.Error())
+	if size.Cmp(used) != 0 {
+		t.Fatalf("TestDiskUsage failed: expected 1, got %d", size.Cmp(used))

Mismatch: the failure message says "expected 1", but the new condition checks size.Cmp(used) != 0.
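For reference, a minimal standalone sketch of how Quantity.Cmp behaves (illustrative values, not the PR's test data): Cmp returns -1, 0, or 1, so a check whose message claims "expected 1" would normally compare against 1, not 0.

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Illustrative quantities; the real test obtains these from the filesystem.
	size := resource.MustParse("10Gi")
	used := resource.MustParse("4Gi")

	// Quantity.Cmp returns -1, 0, or 1 (like bytes.Compare), so
	// "size is strictly greater than used" is Cmp(...) == 1.
	if size.Cmp(used) != 1 {
		fmt.Printf("expected size > used, got size=%s used=%s\n", size.String(), used.String())
		return
	}
	fmt.Println("size > used")
}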

jsturtevant

comment created time in 15 minutes

Pull request review comment kubernetes/kubernetes

migrate volume/csi/csi-client.go logs to structured logging

 func (c *csiDriverClient) NodeGetVolumeStats(ctx context.Context, volID string,
 			metrics.Inodes = resource.NewQuantity(usage.GetTotal(), resource.BinarySI)
 			metrics.InodesUsed = resource.NewQuantity(usage.GetUsed(), resource.BinarySI)
 		default:
-			klog.Errorf("unknown key %s in usage", unit.String())
+			klog.ErrorS(nil, "unknown key in usage", "unitKey", unit.String())

Your suggestion is better; I will update it.
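For readers unfamiliar with the structured form, a minimal standalone sketch of the klog.ErrorS call pattern used in the suggestion (the values here are stand-ins, not the CSI client's real data):

package main

import (
	"flag"

	"k8s.io/klog/v2"
)

func main() {
	klog.InitFlags(nil)
	flag.Parse()
	defer klog.Flush()

	unit := "unknown-unit" // stand-in for unit.String() in the real code

	// Structured form: a nil error, a constant message, then key/value pairs,
	// instead of interpolating values into a format string.
	klog.ErrorS(nil, "unknown key in usage", "unitKey", unit)
}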

CKchen0726

comment created time in 16 minutes

pull request comment kubernetes/kubernetes

typo fixed for terminatation

/triage accepted

lala123912

comment created time in 22 minutes

pull request comment kubernetes/kubernetes

Replace top-level ginkgo.Describe with SIGDescribe

/triage accepted

lala123912

comment created time in 23 minutes

pull request comment kubernetes/kubernetes

Avoid creation of the same storageclass in e2e tests

/assign @jsafrane

mauriciopoppe

comment created time in 24 minutes

Pull request review comment kubernetes/kubernetes

Avoid creation of the same storageclass in e2e tests

 func (p *provisioningTestSuite) DefineTests(driver storageframework.TestDriver,
 				myTestConfig := testConfig
 				myTestConfig.Prefix = fmt.Sprintf("%s-%d", myTestConfig.Prefix, i)
 
-				// Each go routine must have its own testCase copy to store their claim
-				myTestCase := *l.testCase
-				myTestCase.Claim = myTestCase.Claim.DeepCopy()
-				myTestCase.Class = nil // Do not create/delete the storage class in TestDynamicProvisioning, it already exists.
-				myTestCase.PvCheck = func(claim *v1.PersistentVolumeClaim) {
+				t := *l.testCase
+				t.PvCheck = func(claim *v1.PersistentVolumeClaim) {

why remove Claim.DeepCopy()?
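For context on this question, a minimal sketch (hypothetical Claim/TestCase stand-ins, not the e2e types) of why the per-goroutine deep copy mattered: copying the struct alone still shares the claim pointer across goroutines.

package main

import (
	"fmt"
	"sync"
)

// Hypothetical stand-ins for the test types: Claim is a pointer field, so a
// plain struct copy still shares the pointed-to object between goroutines.
type Claim struct{ Name string }
type TestCase struct{ Claim *Claim }

func main() {
	base := &TestCase{Claim: &Claim{Name: "base"}}

	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			myCase := *base            // copies the struct, but not what Claim points to
			claimCopy := *myCase.Claim // the per-goroutine deep copy of the claim
			claimCopy.Name = fmt.Sprintf("claim-%d", i)
			myCase.Claim = &claimCopy // without this, all goroutines mutate one shared Claim
			fmt.Println(myCase.Claim.Name)
		}(i)
	}
	wg.Wait()
}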

mauriciopoppe

comment created time in 31 minutes

Pull request review comment kubernetes/kubernetes

Avoid creation of the same storageclass in e2e tests

 func (p *provisioningTestSuite) DefineTests(driver storageframework.TestDriver,
 				myTestConfig := testConfig
 				myTestConfig.Prefix = fmt.Sprintf("%s-%d", myTestConfig.Prefix, i)
 
-				// Each go routine must have its own testCase copy to store their claim
-				myTestCase := *l.testCase
-				myTestCase.Claim = myTestCase.Claim.DeepCopy()
-				myTestCase.Class = nil // Do not create/delete the storage class in TestDynamicProvisioning, it already exists.
-				myTestCase.PvCheck = func(claim *v1.PersistentVolumeClaim) {
+				t := *l.testCase
+				t.PvCheck = func(claim *v1.PersistentVolumeClaim) {

Also, I don't know why Class was set to nil before. The author @jsafrane specifically left a comment in https://github.com/kubernetes/kubernetes/commit/1f9f2390cb57832f5fbd02e53d39023b81500cc2 about the storage class already existing. Is it trying to test default storage class behavior?

mauriciopoppe

comment created time in 25 minutes

pull request comment kubernetes/kubernetes

Upgrade kustomize-in-kubectl to kustomize@v4.0.2

@monopole: The following tests failed, say /retest to rerun all failed tests:

Test name Commit Details Rerun command
pull-kubernetes-e2e-azure-file-windows afcbc5ca7dc2976ba6479bf2804a69c5194b9ec2 link /test pull-kubernetes-e2e-azure-file-windows
pull-kubernetes-e2e-azure-disk-windows afcbc5ca7dc2976ba6479bf2804a69c5194b9ec2 link /test pull-kubernetes-e2e-azure-disk-windows
pull-kubernetes-e2e-kind 9e0c94856934de24aee393683cc693446634849e link /test pull-kubernetes-e2e-kind
pull-kubernetes-conformance-kind-ga-only-parallel 9e0c94856934de24aee393683cc693446634849e link /test pull-kubernetes-conformance-kind-ga-only-parallel
pull-kubernetes-e2e-kind-ipv6 9e0c94856934de24aee393683cc693446634849e link /test pull-kubernetes-e2e-kind-ipv6
pull-kubernetes-e2e-gce-100-performance 9e0c94856934de24aee393683cc693446634849e link /test pull-kubernetes-e2e-gce-100-performance
pull-kubernetes-e2e-gce-ubuntu-containerd 9e0c94856934de24aee393683cc693446634849e link /test pull-kubernetes-e2e-gce-ubuntu-containerd
pull-kubernetes-bazel-test 9e0c94856934de24aee393683cc693446634849e link /test pull-kubernetes-bazel-test

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

monopole

comment created time in 25 minutes

pull request comment kubernetes/kubernetes

ContainerGC cleanup duplicated running containers

/assign @derekwaynecarr

chenyw1990

comment created time in 27 minutes

pull request comment kubernetes/kubernetes

Speed up pkg/volume/csi unit tests

@wzshiming: The following test failed, say /retest to rerun all failed tests:

Test name Commit Details Rerun command
pull-kubernetes-e2e-kind-ipv6 bc3d9252bc46b6fa4e988527079b67db58f5af29 link /test pull-kubernetes-e2e-kind-ipv6

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

wzshiming

comment created time in 30 minutes

Pull request review comment kubernetes/kubernetes

Avoid creation of the same storageclass in e2e tests

 func (t StorageClassTest) TestDynamicProvisioning() *v1.PersistentVolume {
 		}
 	}()
 
-	if class == nil {
-		// StorageClass is nil, so the default one will be used
-		scName, err := e2epv.GetDefaultStorageClassName(client)
-		framework.ExpectNoError(err)
-		defaultSC, err := client.StorageV1().StorageClasses().Get(context.TODO(), scName, metav1.GetOptions{})
+	// ensure that the claim refers to the provisioned StorageClass
+	framework.ExpectEqual(*claim.Spec.StorageClassName, class.Name)
+
+	// if late binding is configured, create and delete a pod to provision the volume
+	if *class.VolumeBindingMode == storagev1.VolumeBindingWaitForFirstConsumer {

@msau42 the conversation was dropped after I did the rebase fix, sorry about that.

After debugging the test I saw that the Pod created in this if statement was failing with the following error:

 Warning  FailedMount  7s    kubelet, e2e-test-mauriciopoppe-minion-group-pnq9  Unable to attach or mount volumes: unmounted volumes=[volume1], unattached volumes=[volume1 default-token-b9jmr]: volume volume1 has volumeMode Block, but is specified in volumeMounts

After some debugging I found out that the test is failing because the claim dynamically provisions a PV with volumeMode = Block, while this pod is created with volumeMounts instead of volumeDevices in all cases. If the objective of this test is to also work with a block device, then I can create either volumeMounts (the default) or volumeDevices inside e2e.CreatePod; we could also skip this test for block devices. What do you think?
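To illustrate the distinction being discussed, a minimal sketch using the core/v1 types (the container names, image, and paths are illustrative, not what e2e.CreatePod produces):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// Filesystem-mode claims are consumed through volumeMounts ...
	fsContainer := v1.Container{
		Name:  "writer",
		Image: "busybox",
		VolumeMounts: []v1.VolumeMount{
			{Name: "volume1", MountPath: "/mnt/volume1"},
		},
	}

	// ... while volumeMode: Block claims must be consumed through volumeDevices,
	// otherwise the kubelet reports the "has volumeMode Block, but is specified
	// in volumeMounts" error quoted above.
	blockContainer := v1.Container{
		Name:  "writer",
		Image: "busybox",
		VolumeDevices: []v1.VolumeDevice{
			{Name: "volume1", DevicePath: "/dev/block-volume1"},
		},
	}

	fmt.Printf("fs container mounts: %d, block container devices: %d\n",
		len(fsContainer.VolumeMounts), len(blockContainer.VolumeDevices))
}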

mauriciopoppe

comment created time in 33 minutes

pull request comment kubernetes/kubernetes

Replace top-level ginkgo.Describe with SIGDescribe

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: lala123912 (author self-approved), spiffxp (LGTM)

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment. Approvers can cancel approval by writing /approve cancel in a comment.

lala123912

comment created time in 34 minutes

pull request comment kubernetes/kubernetes

Replace top-level ginkgo.Describe with SIGDescribe

/ok-to-test

lala123912

comment created time in 37 minutes

pull request comment kubernetes/kubernetes

Replace top-level ginkgo.Describe with SIGDescribe

@spiffxp

lala123912

comment created time in 37 minutes

pull request comment kubernetes/kubernetes

Replace top-level ginkgo.Describe with SIGDescribe

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: lala123912 (author self-approved). To complete the pull request process, please assign deads2k after the PR has been reviewed. You can assign the PR to them by writing /assign @deads2k in a comment when ready.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment. Approvers can cancel approval by writing /approve cancel in a comment.

lala123912

comment created time in 38 minutes

pull request comment kubernetes/kubernetes

Replace top-level ginkgo.Describe with SIGDescribe

@lala123912: This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

lala123912

comment created time in 39 minutes

PR opened kubernetes/kubernetes

Replace top-level ginkgo.Describe with SIGDescribe


What type of PR is this?

/kind cleanup

What this PR does / why we need it:

Ref #98326

Which issue(s) this PR fixes:

Fixes #

Special notes for your reviewer:

Does this PR introduce a user-facing change?


NONE

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:



+1 -1

0 comment

1 changed file

pr created time in 39 minutes

pull request comment kubernetes/kubernetes

Upgrade kustomize-in-kubectl to kustomize@v4.0.2

@monopole: The following tests failed, say /retest to rerun all failed tests:

Test name Commit Details Rerun command
pull-kubernetes-e2e-azure-file-windows afcbc5ca7dc2976ba6479bf2804a69c5194b9ec2 link /test pull-kubernetes-e2e-azure-file-windows
pull-kubernetes-e2e-azure-disk-windows afcbc5ca7dc2976ba6479bf2804a69c5194b9ec2 link /test pull-kubernetes-e2e-azure-disk-windows
pull-kubernetes-bazel-test 3960bf9ab631a0cf7d656d28b5b29efe2033c0fa link /test pull-kubernetes-bazel-test
pull-kubernetes-e2e-kind 9e0c94856934de24aee393683cc693446634849e link /test pull-kubernetes-e2e-kind
pull-kubernetes-conformance-kind-ga-only-parallel 9e0c94856934de24aee393683cc693446634849e link /test pull-kubernetes-conformance-kind-ga-only-parallel
pull-kubernetes-e2e-kind-ipv6 9e0c94856934de24aee393683cc693446634849e link /test pull-kubernetes-e2e-kind-ipv6
pull-kubernetes-e2e-gce-100-performance 9e0c94856934de24aee393683cc693446634849e link /test pull-kubernetes-e2e-gce-100-performance
pull-kubernetes-e2e-gce-ubuntu-containerd 9e0c94856934de24aee393683cc693446634849e link /test pull-kubernetes-e2e-gce-ubuntu-containerd

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

monopole

comment created time in 39 minutes

pull request comment kubernetes/kubernetes

Upgrade kustomize-in-kubectl to kustomize@v4.0.2

@monopole: The following tests failed, say /retest to rerun all failed tests:

Test name Commit Details Rerun command
pull-kubernetes-e2e-azure-file-windows afcbc5ca7dc2976ba6479bf2804a69c5194b9ec2 link /test pull-kubernetes-e2e-azure-file-windows
pull-kubernetes-e2e-azure-disk-windows afcbc5ca7dc2976ba6479bf2804a69c5194b9ec2 link /test pull-kubernetes-e2e-azure-disk-windows
pull-kubernetes-bazel-test 3960bf9ab631a0cf7d656d28b5b29efe2033c0fa link /test pull-kubernetes-bazel-test
pull-kubernetes-e2e-kind 9e0c94856934de24aee393683cc693446634849e link /test pull-kubernetes-e2e-kind
pull-kubernetes-conformance-kind-ga-only-parallel 9e0c94856934de24aee393683cc693446634849e link /test pull-kubernetes-conformance-kind-ga-only-parallel
pull-kubernetes-e2e-kind-ipv6 9e0c94856934de24aee393683cc693446634849e link /test pull-kubernetes-e2e-kind-ipv6
pull-kubernetes-e2e-gce-100-performance 9e0c94856934de24aee393683cc693446634849e link /test pull-kubernetes-e2e-gce-100-performance

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

monopole

comment created time in 40 minutes

pull request comment kubernetes/kubernetes

Upgrade kustomize-in-kubectl to kustomize@v4.0.2

@monopole: The following tests failed, say /retest to rerun all failed tests:

Test name Commit Details Rerun command
pull-kubernetes-e2e-azure-file-windows afcbc5ca7dc2976ba6479bf2804a69c5194b9ec2 link /test pull-kubernetes-e2e-azure-file-windows
pull-kubernetes-e2e-azure-disk-windows afcbc5ca7dc2976ba6479bf2804a69c5194b9ec2 link /test pull-kubernetes-e2e-azure-disk-windows
pull-kubernetes-bazel-test 3960bf9ab631a0cf7d656d28b5b29efe2033c0fa link /test pull-kubernetes-bazel-test
pull-kubernetes-e2e-kind 9e0c94856934de24aee393683cc693446634849e link /test pull-kubernetes-e2e-kind
pull-kubernetes-conformance-kind-ga-only-parallel 9e0c94856934de24aee393683cc693446634849e link /test pull-kubernetes-conformance-kind-ga-only-parallel
pull-kubernetes-e2e-kind-ipv6 9e0c94856934de24aee393683cc693446634849e link /test pull-kubernetes-e2e-kind-ipv6

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

monopole

comment created time in 41 minutes

pull request comment kubernetes/kubernetes

Speed up pkg/volume/csi unit tests

Rebased and resolved conflicts.

wzshiming

comment created time in 41 minutes

pull request comment kubernetes/kubernetes

Upgrade kustomize-in-kubectl to kustomize@v4.0.2

@monopole: The following tests failed, say /retest to rerun all failed tests:

Test name Commit Details Rerun command
pull-kubernetes-e2e-azure-file-windows afcbc5ca7dc2976ba6479bf2804a69c5194b9ec2 link /test pull-kubernetes-e2e-azure-file-windows
pull-kubernetes-e2e-azure-disk-windows afcbc5ca7dc2976ba6479bf2804a69c5194b9ec2 link /test pull-kubernetes-e2e-azure-disk-windows
pull-kubernetes-e2e-kind-ipv6 3960bf9ab631a0cf7d656d28b5b29efe2033c0fa link /test pull-kubernetes-e2e-kind-ipv6
pull-kubernetes-bazel-test 3960bf9ab631a0cf7d656d28b5b29efe2033c0fa link /test pull-kubernetes-bazel-test
pull-kubernetes-e2e-kind 9e0c94856934de24aee393683cc693446634849e link /test pull-kubernetes-e2e-kind
pull-kubernetes-conformance-kind-ga-only-parallel 9e0c94856934de24aee393683cc693446634849e link /test pull-kubernetes-conformance-kind-ga-only-parallel

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

monopole

comment created time in 41 minutes

pull request comment kubernetes/kubernetes

Upgrade kustomize-in-kubectl to kustomize@v4.0.2

@monopole: The following tests failed, say /retest to rerun all failed tests:

Test name Commit Details Rerun command
pull-kubernetes-e2e-azure-file-windows afcbc5ca7dc2976ba6479bf2804a69c5194b9ec2 link /test pull-kubernetes-e2e-azure-file-windows
pull-kubernetes-e2e-azure-disk-windows afcbc5ca7dc2976ba6479bf2804a69c5194b9ec2 link /test pull-kubernetes-e2e-azure-disk-windows
pull-kubernetes-e2e-kind-ipv6 3960bf9ab631a0cf7d656d28b5b29efe2033c0fa link /test pull-kubernetes-e2e-kind-ipv6
pull-kubernetes-bazel-test 3960bf9ab631a0cf7d656d28b5b29efe2033c0fa link /test pull-kubernetes-bazel-test
pull-kubernetes-e2e-kind 9e0c94856934de24aee393683cc693446634849e link /test pull-kubernetes-e2e-kind

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

monopole

comment created time in 42 minutes

Pull request review comment kubernetes/kubernetes

Speed up pkg/controller/volume/scheduling unit tests

 func TestBindPodVolumes(t *testing.T) {
 		claimsToProvision := []*v1.PersistentVolumeClaim{}
 		if !scenario.bindingsNil {
 			if scenario.binding != nil {
-				bindings = []*BindingInfo{scenario.binding}
+				bindings = []*BindingInfo{makeBinding(scenario.binding.pvc, scenario.binding.pv)}

If this is not added, it fails when run under stress; a sketch of the shared-fixture issue follows the race trace below.

5s: 0 runs so far, 0 failures
10s: 0 runs so far, 0 failures
15s: 0 runs so far, 0 failures
20s: 0 runs so far, 0 failures

/var/folders/6v/7stmg2756wlfk9c_qnv1hnbm0000gn/T/go-stress-20210227T142520-109974642
==================
WARNING: DATA RACE
Read at 0x00c000424cc8 by goroutine 223:
  k8s.io/kubernetes/pkg/controller/volume/scheduling.(*testEnv).assumeVolumes()
      /Users/zsm/go/src/k8s.io/kubernetes/pkg/controller/volume/scheduling/scheduler_binder_test.go:401 +0x11a
  k8s.io/kubernetes/pkg/controller/volume/scheduling.TestBindPodVolumes.func6()
      /Users/zsm/go/src/k8s.io/kubernetes/pkg/controller/volume/scheduling/scheduler_binder_test.go:2023 +0xbaa
  k8s.io/kubernetes/pkg/controller/volume/scheduling.TestBindPodVolumes.func7()
      /Users/zsm/go/src/k8s.io/kubernetes/pkg/controller/volume/scheduling/scheduler_binder_test.go:2068 +0x101
  testing.tRunner()
      /usr/local/Cellar/go/1.15.6/libexec/src/testing/testing.go:1123 +0x202

Previous write at 0x00c000424cc8 by goroutine 81:
  k8s.io/kubernetes/pkg/controller/volume/scheduling.(*volumeBinder).bindAPIUpdate()
      /Users/zsm/go/src/k8s.io/kubernetes/pkg/controller/volume/scheduling/scheduler_binder.go:507 +0x8c5
  k8s.io/kubernetes/pkg/controller/volume/scheduling.(*volumeBinder).BindPodVolumes()
      /Users/zsm/go/src/k8s.io/kubernetes/pkg/controller/volume/scheduling/scheduler_binder.go:442 +0x376
  k8s.io/kubernetes/pkg/controller/volume/scheduling.TestBindPodVolumes.func6()
      /Users/zsm/go/src/k8s.io/kubernetes/pkg/controller/volume/scheduling/scheduler_binder_test.go:2054 +0x301
  k8s.io/kubernetes/pkg/controller/volume/scheduling.TestBindPodVolumes.func7()
      /Users/zsm/go/src/k8s.io/kubernetes/pkg/controller/volume/scheduling/scheduler_binder_test.go:2068 +0x101
  testing.tRunner()
      /usr/local/Cellar/go/1.15.6/libexec/src/testing/testing.go:1123 +0x202

Goroutine 223 (running) created at:
  testing.(*T).Run()
      /usr/local/Cellar/go/1.15.6/libexec/src/testing/testing.go:1168 +0x5bb
  k8s.io/kubernetes/pkg/controller/volume/scheduling.TestBindPodVolumes()
      /Users/zsm/go/src/k8s.io/kubernetes/pkg/controller/volume/scheduling/scheduler_binder_test.go:2066 +0x19d3
  testing.tRunner()
      /usr/local/Cellar/go/1.15.6/libexe
…
25s: 3 runs so far, 1 failures (33.33%)
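As a generic illustration of the trace above (hypothetical types, not the scheduler binder code): reusing one fixture object across concurrent runs means one run's write races with another's access, while a constructor like makeBinding hands each run its own object.

package main

import (
	"fmt"
	"sync"
)

// Hypothetical stand-in for the fixture type: a shared object that the code
// under test later mutates.
type BindingInfo struct{ Bound bool }

// makeBinding-style constructor: every call returns a fresh object.
func makeBinding() *BindingInfo { return &BindingInfo{} }

func main() {
	shared := &BindingInfo{} // one object reused by every run, as in the pre-fix test

	var wg sync.WaitGroup
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()

			// Racy under `go run -race`: both goroutines write the same field.
			shared.Bound = true

			// Race-free: each run works on its own freshly constructed binding.
			own := makeBinding()
			own.Bound = true
		}()
	}
	wg.Wait()
	fmt.Println("done")
}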
wzshiming

comment created time in 43 minutes

Pull request review comment kubernetes/kubernetes

Avoid creation of the same storageclass in e2e tests

 func (p *provisioningTestSuite) DefineTests(driver storageframework.TestDriver,
 					}
 					e2evolume.TestVolumeClientSlow(f, myTestConfig, nil, "", tests)
 				}
-				myTestCase.TestDynamicProvisioning()
+				t.TestDynamicProvisioning()
 			}(i)
 		}
 		wg.Wait()
 	})
 }
 
+// ProvisionStorageClass provisions a StorageClass from a spec, if the StorageClass already exists
+// then it returns it as a `computed` StorageClass, if it doesn't exist then it's created first
+// and then returned, if the spec is nil then we return the `default` StorageClass
+func ProvisionStorageClass(
+	client clientset.Interface,
+	class *storagev1.StorageClass,
+) (*storagev1.StorageClass, func()) {
+	gomega.Expect(client).NotTo(gomega.BeNil(), "ProvisionStorageClass.client is required")
+
+	var err error
+	var computedStorageClass *storagev1.StorageClass
+	var clearComputedStorageClass = func() {}
+	if class != nil {
+
+		computedStorageClass, err = client.StorageV1().StorageClasses().Get(context.TODO(), class.Name, metav1.GetOptions{})
+		if err == nil {
+			// skip storageclass creation if it already exists
+			ginkgo.By("Storage class " + computedStorageClass.Name + " is already created, skipping creation.")
+		} else {
+			ginkgo.By("Creating a StorageClass " + class.Name)
+			_, err = client.StorageV1().StorageClasses().Create(context.TODO(), class, metav1.CreateOptions{})

One of the cases where the storage class already exists when it enters this method is the v1beta1 case: the storage class is created outside this method, and when this method is called it's fetched from the cluster, but as a v1 object. Do you think that in the beta case I shouldn't use this method?
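A minimal sketch of the situation described (the helper name is hypothetical, and wiring a real clientset is omitted): an object created through the storage.k8s.io/v1beta1 API can be read back through the v1 API, because both versions serve the same resource.

package main

import (
	"context"

	storagev1 "k8s.io/api/storage/v1"
	storagev1beta1 "k8s.io/api/storage/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	clientset "k8s.io/client-go/kubernetes"
)

// createBetaReadV1 is a hypothetical helper: it writes the class via the
// v1beta1 client and reads the same object back as a v1 StorageClass.
func createBetaReadV1(client clientset.Interface, name, provisioner string) (*storagev1.StorageClass, error) {
	beta := &storagev1beta1.StorageClass{
		ObjectMeta:  metav1.ObjectMeta{Name: name},
		Provisioner: provisioner,
	}
	if _, err := client.StorageV1beta1().StorageClasses().Create(context.TODO(), beta, metav1.CreateOptions{}); err != nil {
		return nil, err
	}
	return client.StorageV1().StorageClasses().Get(context.TODO(), name, metav1.GetOptions{})
}

func main() {
	_ = createBetaReadV1 // building a real clientset needs a kubeconfig; omitted here
}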

mauriciopoppe

comment created time in an hour

pull request comment kubernetes/kubernetes

reduce configmap and secret watch of kubelet

/retest

chenyw1990

comment created time in an hour

pull request comment kubernetes/kubernetes

Speed up pkg/volume/csi unit tests

New changes are detected. LGTM label has been removed.

wzshiming

comment created time in an hour

Pull request review comment kubernetes/kubernetes

Avoid creation of the same storageclass in e2e tests

 func (p *provisioningTestSuite) DefineTests(driver storageframework.TestDriver,
 					}
 					e2evolume.TestVolumeClientSlow(f, myTestConfig, nil, "", tests)
 				}
-				myTestCase.TestDynamicProvisioning()
+				t.TestDynamicProvisioning()
 			}(i)
 		}
 		wg.Wait()
 	})
 }
 
+// ProvisionStorageClass provisions a StorageClass from a spec, if the StorageClass already exists
+// then it returns it as a `computed` StorageClass, if it doesn't exist then it's created first
+// and then returned, if the spec is nil then we return the `default` StorageClass
+func ProvisionStorageClass(
+	client clientset.Interface,
+	class *storagev1.StorageClass,
+) (*storagev1.StorageClass, func()) {
+	gomega.Expect(client).NotTo(gomega.BeNil(), "ProvisionStorageClass.client is required")
+
+	var err error
+	var computedStorageClass *storagev1.StorageClass
+	var clearComputedStorageClass = func() {}
+	if class != nil {
+
+		computedStorageClass, err = client.StorageV1().StorageClasses().Get(context.TODO(), class.Name, metav1.GetOptions{})
+		if err == nil {
+			// skip storageclass creation if it already exists
+			ginkgo.By("Storage class " + computedStorageClass.Name + " is already created, skipping creation.")
+		} else {
+			ginkgo.By("Creating a StorageClass " + class.Name)
+			_, err = client.StorageV1().StorageClasses().Create(context.TODO(), class, metav1.CreateOptions{})

That's correct: if the storage class already exists then the test could fail. I can change this to report an error instead.
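A minimal sketch of the "report an error instead" option (a hypothetical helper, not the PR's code), using the same client calls as the diff above plus apierrors.IsNotFound:

package main

import (
	"context"
	"fmt"

	storagev1 "k8s.io/api/storage/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	clientset "k8s.io/client-go/kubernetes"
)

// createStorageClassStrict is a hypothetical helper: instead of silently
// reusing a pre-existing class, it surfaces an error so the test can fail.
func createStorageClassStrict(client clientset.Interface, class *storagev1.StorageClass) (*storagev1.StorageClass, error) {
	_, err := client.StorageV1().StorageClasses().Get(context.TODO(), class.Name, metav1.GetOptions{})
	if err == nil {
		return nil, fmt.Errorf("storage class %q already exists", class.Name)
	}
	if !apierrors.IsNotFound(err) {
		return nil, err
	}
	return client.StorageV1().StorageClasses().Create(context.TODO(), class, metav1.CreateOptions{})
}

func main() {
	_ = createStorageClassStrict // wiring a clientset against a live cluster is omitted
}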

mauriciopoppe

comment created time in an hour

pull request comment kubernetes/kubernetes

Scheduler: remove outdated TODO in node_affinity.go

/assign @alculquicondor

gavinfish

comment created time in an hour