
moby/buildkit 2702

concurrent, cache-efficient, and Dockerfile-agnostic builder toolkit

chendave/hadoop-TCP 1

This repository is dedicated to the TCP feature in Hadoop

chendave/initrepo 1

POCs and examples of popular open-source projects

chendave/blog 0

This is the source for my personal blog (www.jungler.cn)

chendave/buildkit 0

concurrent, cache-efficient, and Dockerfile-agnostic builder toolkit

chendave/ceph 0

Ceph is a distributed object, block, and file storage platform

chendave/community 0

Kubernetes community content

pull request comment kubernetes/kubernetes

Fix the nits found in the testcases of `PodTopologySpread`

/test pull-kubernetes-integration

chendave

comment created time in 14 hours

pull request comment kubernetes/kubernetes

Fix the nits found in the testcases of `PodTopologySpread`

/retest

chendave

comment created time in 16 hours

pull request comment kubernetes/kubernetes

Change the exception to avoid the cost of preemption

/retest

chendave

comment created time in 18 hours

pull request comment kubernetes/kubernetes

Cut off the cost to run filter plugins when no victim pods are found

/hold

Need to update the testcase as well.

chendave

comment created time in 21 hours

push event chendave/kubernetes

Dave Chen

commit sha 0c8859c0ec5229f9206199372a2a2c315f4fb81a

Cut off the cost to run filter plugins when no victim pods are found

If no potential victims can be found, there is no room that could be released for the "pod" to be scheduled. It's safe to return early and thus prevent the scheduler from running the filter plugins again.

Signed-off-by: Dave Chen <dave.chen@arm.com>
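The early return described in this commit can be sketched with a minimal, stdlib-only illustration; `pod`, `selectVictims`, and `dryRunPreemption` are hypothetical stand-ins for the scheduler's real types and helpers, not the actual Kubernetes code:

```go
package main

import "fmt"

// pod is a simplified stand-in for a Kubernetes Pod.
type pod struct{ name string }

// selectVictims is a hypothetical stand-in for the scheduler's victim
// search; the real logic picks lower-priority pods whose removal would
// make the incoming pod schedulable. Here we assume none qualify.
func selectVictims(podsOnNode []pod) []pod {
	return nil
}

// dryRunPreemption sketches the early return: if no victims exist,
// no room can be freed, so the filter plugins are never re-run.
func dryRunPreemption(podsOnNode []pod) bool {
	victims := selectVictims(podsOnNode)
	if len(victims) == 0 {
		return false // early return: skip running the filter plugins again
	}
	// ...otherwise, remove the victims and re-run the filter plugins.
	return true
}

func main() {
	fmt.Println(dryRunPreemption(nil)) // no victims: preemption cannot help
}
```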

view details

push time in 21 hours

pull request comment kubernetes/kubernetes

Change the exception to avoid the cost of preemption

/hold cancel

@Huang-Wei PTAL?

chendave

comment created time in 21 hours

push event chendave/kubernetes

Dave Chen

commit sha 3e65fe4378674da2bfc0047dd92476985f7bb3e0

Change the exception to avoid the cost of preemption

A node whose labels don't contain the required topologyKeys in `Constraints` cannot be resolved by preempting the pods on that node. One use case that easily reproduces the issue:

- set `alwaysCheckAllPredicates` to true.
- one node contains all the required topologyKeys but fails a predicate such as 'taint'.
- another node doesn't hold all the required topologyKeys, and thus returns an `Unschedulable` status code.
- the scheduler will try to preempt the lower-priority pods on that node.

Signed-off-by: Dave Chen <dave.chen@arm.com>
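The status-code change above can be illustrated with a small sketch; `Code`, `statusForNode`, and the constants are simplified stand-ins for the scheduler framework's real types, not the actual Kubernetes API:

```go
package main

import "fmt"

// Code is a simplified status code mirroring the scheduler framework.
type Code int

const (
	Unschedulable Code = iota
	UnschedulableAndUnresolvable
)

// statusForNode sketches the fix: if the node is missing a required
// topology key label, preempting pods on it cannot help, so the status
// is UnschedulableAndUnresolvable rather than plain Unschedulable.
func statusForNode(nodeLabels map[string]string, topologyKeys []string) Code {
	for _, key := range topologyKeys {
		if _, ok := nodeLabels[key]; !ok {
			return UnschedulableAndUnresolvable
		}
	}
	return Unschedulable
}

func main() {
	labels := map[string]string{"zone": "zone1"} // node lacks the "node" key
	fmt.Println(statusForNode(labels, []string{"zone", "node"}) == UnschedulableAndUnresolvable) // true
}
```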

view details

push time in 21 hours

push event chendave/kubernetes

Patrick Ohly

commit sha c20721aa9f0a69f267f899ca387fdc489cfe6f24

storage: enhance test for ValidateCSIDriverUpdate

This revised test changes one field at a time in various ways. This ensures that one failure reason doesn't mask the other. An incorrect comment also gets fixed.

Suggested in https://github.com/kubernetes/kubernetes/pull/80568#pullrequestreview-272836101.

view details

Claudiu Belu

commit sha 31ea600b284515dd3470b244f9c6e1548408ecfe

images: Adds GOARM to images' Makefiles

In order to build the image binaries, the GOARM variable is required, but not all Makefiles have it defined, causing the make to fail on those images. This adds the required GOARM variable to the Makefiles in question.

view details

Alena Prokharchyk

commit sha d634ed3850ff1980f21590eee051d4428acf4601

Removed unnecessary not nil check in node registration process

view details

Manuel Rüger

commit sha eb6c7169276a1978b851deafb25b507caf696ac4

PodTolerationRestriction: Mention Whitelist Scope in Error

Currently it's not clear if the issue came from the namespace whitelist or if the namespace whitelist was not applied at all (i.e. via a misspelled annotation). This makes the error more explicit if the pod tolerations caused a conflict with the cluster-level or namespace-level whitelist.

Signed-off-by: Manuel Rüger <manuel@rueg.eu>

view details

Kenjiro Nakayama

commit sha 18856dace935db46d3ba84374ce23438922e272b

Add DNS1123Label validation to IsFullyQualifiedDomainName func

This patch adds IsDNS1123Label validation to the IsFullyQualifiedDomainName func. Even when one label is longer than 64 characters, the current validation does not catch it. Hence this patch adds the label check and does not allow invalid domains.

view details

Gaurav Singh

commit sha 862c30a2284a54a8c91778828b41f157b6a506ec

[Provider/Azure] optimize mutex locks

view details

Seth Pollack

commit sha 75af2fca6125516dff42e9825ceea89367986f78

add labels to diff command

view details

Paulo Gomes

commit sha e7ced21235820139afc8dbb2e99314b9b69ec7fa

Invert error validation

view details

junxu

commit sha c58959c8ba7819345d5cc2b17ce4e95ccbc92d5b

Fix code style

view details

liuxu

commit sha 2367569f138ddb35385aed5e7e485eed425c73a9

fix: emptyDir's sizeLimit doesn't work if the ephemeral-storage limit is not set

view details

SataQiu

commit sha 17f3cd48a54483b4c6b7dc1d742194a1f41daf0a

add '--logging-format' flag to kube-controller-manager

Signed-off-by: SataQiu <1527062125@qq.com>

view details

Marcin Maciaszczyk

commit sha e5af792ad29fdbd8f401b5d0d78e8fb0d64c1fe6

Bump Dashboard to v2.0.1

view details

lo24

commit sha cda593e822d2e03f621167f007c183faf5b1d910

fix TestValidateNodeIPParam: break was being used instead of continue

view details

Dan Winship

commit sha ddebbfd806b5813cc0f6d67ae9608be393729922

update for APIs being moved to utilnet

Several of the functions in pkg/registry/core/service/ipallocator were moved to k8s.io/utils/net, but then the original code was never updated to use the vendored versions. (utilnet's version of RangeSize does not have the IPv6 special case that the original code did, so we need to move that to NewAllocatorCIDRRange now.)

view details

Dan Winship

commit sha 4a7c86c105f49972a5d7b8150cdba59eafb8a0fd

make test a bit more generic

view details

Dan Winship

commit sha f6dcc1c07e0a2d3c583cb90e1cdb7ec4718625ce

Minor tweak to IPv6 service IP allocation

The service allocator skips the "broadcast address" in the service CIDR, but that concept only applies to IPv4 addressing.

view details

Lubomir I. Ivanov

commit sha 7ddd966ed2038133dce3f93a062a9da1cb809088

kubeadm: mark --experimental-kustomize as deprecated

view details

Kaivalya Shah

commit sha 9fc229012cf8b4be99466701c220519cfb6a7897

gce_instances: Add check for multiple interface addresses in nodeAddresses and nodeAddressesFromInstance

view details

SataQiu

commit sha 800dd19fc23f14f9e83f09ff40155395af9f7cb0

increase robustness for kubeadm etcd operations

Signed-off-by: SataQiu <1527062125@qq.com>

view details

David Eads

commit sha 7f172286342c4238d4b636c113d35835ff9299ee

prevent panic in azure cloud provider from killing the process

view details

push time in 21 hours

pull request comment kubernetes/kubernetes

No need run filter plugins when no pods could be removed

/retest

chendave

comment created time in 21 hours

pull request comment kubernetes/kubernetes

Change the exception to avoid the cost of preemption

/hold for rebase

chendave

comment created time in a day

Pull request review comment kubernetes/kubernetes

Change the exception to avoid the cost of preemption

```diff
 func TestNodesWherePreemptionMightHelp(t *testing.T) {
 			},
 			expected: map[string]bool{"machine1": true, "machine3": true},
 		},
+		{
+			name: "ErrReasonNodeLabelNotMatch should not be tried as it indicates that the pod is unschedulable due to node doesn't have the required label",
+			nodesStatuses: framework.NodeToStatusMap{
+				"machine2": framework.NewStatus(framework.UnschedulableAndUnresolvable, podtopologyspread.ErrReasonNodeLabelNotMatch),
```

will do, thanks for the reminder.

chendave

comment created time in a day

pull request comment kubernetes/kubernetes

No need run filter plugins when no pods could be removed

/sig scheduling

chendave

comment created time in 2 days

PR opened kubernetes/kubernetes

No need run filter plugins when no pods could be removed

If no potential victims can be found, there is no room that could be released for the "pod" to be scheduled.

It's safe to return early and thus prevent the scheduler from running the filter plugins again.

Signed-off-by: Dave Chen dave.chen@arm.com

<!-- Thanks for sending a pull request! Here are some tips for you:

  1. If this is your first time, please read our contributor guidelines: https://git.k8s.io/community/contributors/guide/first-contribution.md#your-first-contribution and developer guide https://git.k8s.io/community/contributors/devel/development.md#development-guide
  2. Please label this pull request according to what type of issue you are addressing, especially if this is a release targeted pull request. For reference on required PR/issue labels, read here: https://git.k8s.io/community/contributors/devel/sig-release/release.md#issuepr-kind-label
  3. Ensure you have added or ran the appropriate tests for your PR: https://git.k8s.io/community/contributors/devel/sig-testing/testing.md
  4. If you want faster PR reviews, read how: https://git.k8s.io/community/contributors/guide/pull-requests.md#best-practices-for-faster-reviews
  5. If the PR is unfinished, see how to mark it: https://git.k8s.io/community/contributors/guide/pull-requests.md#marking-unfinished-pull-requests -->

What type of PR is this?

Uncomment only one /kind <> line, hit enter to put that in a new line, and remove leading whitespace from that line:

/kind bug

What this PR does / why we need it:

Which issue(s) this PR fixes: <!-- Automatically closes linked issue when PR is merged. Usage: Fixes #<issue number>, or Fixes (paste link of issue). If PR is about failing-tests or flakes, please post the related issues/tests in a comment and do not use Fixes --> Fixes #

Special notes for your reviewer:

Does this PR introduce a user-facing change?: <!-- If no, just write "NONE" in the release-note block below. If yes, a release note is required: Enter your extended release note in the block below. If the PR requires additional action from users switching to the new release, include the string "action required".

For more information on release notes see: https://git.k8s.io/community/contributors/guide/release-notes.md -->

NONE

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:

<!-- This section can be blank if this pull request does not require a release note.

When adding links which point to resources within git repositories, like KEPs or supporting documentation, please reference a specific commit and avoid linking directly to the master branch. This ensures that links reference a specific point in time, rather than a document that may change over time.

See here for guidance on getting permanent links to files: https://help.github.com/en/articles/getting-permanent-links-to-files

Please use the following format for linking documentation:

-->


+6 -0

0 comment

1 changed file

pr created time in 2 days

create branch chendave/kubernetes

branch : skip_preemption

created branch time in 2 days

Pull request review comment kubernetes/kubernetes

Fix the nits found in the testcases of `PodTopologySpread`

```diff
 func TestMultipleConstraints(t *testing.T) {
 		},
 		{
 			// 1. to fulfil "zone" constraint, incoming pod can be placed on zone2 (node-x or node-y)
-			// 2. to fulfil "node" constraint, incoming pod can be placed on node-b or node-x
+			// 2. to fulfil "node" constraint, incoming pod can be placed on node-a, node-b or node-x
 			// intersection of (1) and (2) returns node-x
-			name: "Constraints hold different labelSelectors, spreads = [1/0, 1/0/0/1]",
+			name: "Constraints hold different labelSelectors, spreads = [1/0, 0/0/0/1]",
```

Done

chendave

comment created time in 2 days

push event chendave/kubernetes

Dave Chen

commit sha 41fd19760ee985b355727deeff1741655151b9a7

Fix the nits found in the testcases of `PodTopologySpread` Signed-off-by: Dave Chen <dave.chen@arm.com>

view details

push time in 2 days

Pull request review comment kubernetes/kubernetes

Fix the nits found in the testcases of `PodTopologySpread`

```diff
 func TestMultipleConstraints(t *testing.T) {
 		},
 		{
 			// 1. to fulfil "zone" constraint, incoming pod can be placed on zone2 (node-x or node-y)
-			// 2. to fulfil "node" constraint, incoming pod can be placed on node-b or node-x
+			// 2. to fulfil "node" constraint, incoming pod can be placed on node-a, node-b or node-x
 			// intersection of (1) and (2) returns node-x
-			name: "Constraints hold different labelSelectors, spreads = [1/0, 1/0/0/1]",
+			name: "Constraints hold different labelSelectors, spreads = [1/0, 0/0/0/1]",
```

okay, I will revert this line back.

chendave

comment created time in 2 days

Pull request review comment kubernetes/kubernetes

Fix the nits found in the testcases of `PodTopologySpread`

```diff
 func TestMultipleConstraints(t *testing.T) {
 		},
 		{
 			// 1. to fulfil "zone" constraint, incoming pod can be placed on zone2 (node-x or node-y)
-			// 2. to fulfil "node" constraint, incoming pod can be placed on node-b or node-x
+			// 2. to fulfil "node" constraint, incoming pod can be placed on node-a, node-b or node-x
 			// intersection of (1) and (2) returns node-x
-			name: "Constraints hold different labelSelectors, spreads = [1/0, 1/0/0/1]",
+			name: "Constraints hold different labelSelectors, spreads = [1/0, 0/0/0/1]",
```

Because the constraint is defined like this,

SpreadConstraint(1, "node", v1.DoNotSchedule, st.MakeLabelSelector().Exists("bar").Obj()).

and the pod is defined like this,

st.MakePod().Name("p-a1").Node("node-a").Label("foo", "").Obj(),

the constraint's selector doesn't match the pod's label.

This was double-confirmed with dlv:

```
(dlv) p s.TpPairToMatchNum
map[k8s.io/kubernetes/pkg/scheduler/framework/plugins/podtopologyspread.topologyPair]*int32 [
        {key: "zone", value: "zone2"}: *0,
        {key: "node", value: "node-y"}: *1,
        {key: "zone", value: "zone1"}: *1,
        {key: "node", value: "node-b"}: *0,
        {key: "node", value: "node-x"}: *0,
        {key: "node", value: "node-a"}: *0,
]
```
chendave

comment created time in 2 days
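The mismatch explained in the review above (an `Exists("bar")` selector against a pod labeled only `foo`) boils down to a key-presence check. This is an illustrative stdlib-only sketch, not the real `labels.Selector` implementation:

```go
package main

import "fmt"

// matchesExists sketches an `Exists` label selector: it matches only
// when the pod carries the given label key, regardless of value.
func matchesExists(podLabels map[string]string, key string) bool {
	_, ok := podLabels[key]
	return ok
}

func main() {
	podLabels := map[string]string{"foo": ""} // pod p-a1 carries only label "foo"
	fmt.Println(matchesExists(podLabels, "bar")) // false: Exists("bar") doesn't match
}
```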

pull request comment kubernetes/kubernetes

Fix the nits found in the testcases of `PodTopologySpread`

/retest

chendave

comment created time in 2 days

Pull request review comment kubernetes/kubernetes

Preemption plugin to fetch pod from informer cache

```diff
 func (pl *DefaultPreemption) PostFilter(ctx context.Context, state *framework.Cy
 // using the nominated resources and the nominated pod could take a long time
 // before it is retried after many other pending pods.
 func preempt(ctx context.Context, fh framework.FrameworkHandle, state *framework.CycleState, pod *v1.Pod, m framework.NodeToStatusMap) (string, error) {
-	cs := fh.ClientSet()
-	// TODO(Huang-Wei): get pod from informer cache instead of API server.
-	pod, err := util.GetUpdatedPod(cs, pod)
+	// It's safe to directly fetch pod here. Because the informer cache has already been
+	// initialized when creating the Scheduler obj, i.e., factory.go#MakeDefaultErrorFunc().
+	// However, tests may need to manually initialize the shared pod informer.
+	pod, err := fh.SharedInformerFactory().Core().V1().Pods().Lister().Pods(pod.Namespace).Get(pod.Name)
```

Yeah, how do we make sure the cache has the latest version of the pod?

Huang-Wei

comment created time in 3 days

Pull request review comment kubernetes/kubernetes

Preemption plugin to fetch pod from informer cache

```diff
 func TestPreempt(t *testing.T) {
 	labelKeys := []string{"hostname", "zone", "region"}
 	for _, test := range tests {
 		t.Run(test.name, func(t *testing.T) {
-			apiObjs := mergeObjs(test.pod, test.pods)
-			client := clientsetfake.NewSimpleClientset(apiObjs...)
+			client := clientsetfake.NewSimpleClientset()
+			informerFactory := informers.NewSharedInformerFactory(client, 0)
+			podInformer := informerFactory.Core().V1().Pods().Informer()
+			test.pod.Namespace = v1.NamespaceDefault
```

Sounds like a bug in client-go.

Huang-Wei

comment created time in 3 days

pull request comment kubernetes/kubernetes

Preemption plugin to fetch pod from informer cache

/retest

Huang-Wei

comment created time in 3 days

Pull request review comment kubernetes/kubernetes

no need to check nominated pod exist or not as we always delete it first

```diff
 func (npm *nominatedPodMap) add(p *v1.Pod, nodeName string) {
 		}
 	}
 	npm.nominatedPodToNode[p.UID] = nnn
-	for _, np := range npm.nominatedPods[nnn] {
```

There is a possibility that the pod was nominated to a node other than the nodeName passed to the add function here.

You can see the map is set here:

npm.nominatedPodToNode[p.UID] = nnn

while nnn is fetched from the existing nominatedPodToNode:

nnn, ok := npm.nominatedPodToNode[p.UID]
mlmhl

comment created time in 3 days
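The delete-then-add invariant under discussion can be sketched with a simplified map; the types below are illustrative stand-ins for the scheduler's real `nominatedPodMap`, not the actual code:

```go
package main

import "fmt"

// nominatedPodMap is a simplified stand-in for the scheduler's structure.
type nominatedPodMap struct {
	nominatedPodToNode map[string]string   // pod UID -> node name
	nominatedPods      map[string][]string // node name -> pod UIDs
}

// add always deletes any prior entry first, so a pod appears under at
// most one node and the duplicate scan inside add becomes unnecessary.
func (m *nominatedPodMap) add(uid, nodeName string) {
	m.delete(uid)
	m.nominatedPodToNode[uid] = nodeName
	m.nominatedPods[nodeName] = append(m.nominatedPods[nodeName], uid)
}

// delete removes the pod from both maps if present.
func (m *nominatedPodMap) delete(uid string) {
	node, ok := m.nominatedPodToNode[uid]
	if !ok {
		return
	}
	pods := m.nominatedPods[node]
	for i, p := range pods {
		if p == uid {
			m.nominatedPods[node] = append(pods[:i], pods[i+1:]...)
			break
		}
	}
	delete(m.nominatedPodToNode, uid)
}

func main() {
	m := &nominatedPodMap{
		nominatedPodToNode: map[string]string{},
		nominatedPods:      map[string][]string{},
	}
	m.add("uid-1", "node-a")
	m.add("uid-1", "node-b") // re-nomination moves the pod; no duplicates remain
	fmt.Println(len(m.nominatedPods["node-a"]), len(m.nominatedPods["node-b"])) // 0 1
}
```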

pull request comment kubernetes/kubernetes

Change the exception to avoid the cost of preemption

/retest

chendave

comment created time in 3 days

pull request comment kubernetes/kubernetes

Fix the nits found in the testcases of `PodTopologySpread`

/sig scheduling

cc @Huang-Wei

chendave

comment created time in 3 days

PR opened kubernetes/kubernetes

Fix the nits found in the testcases of `PodTopologySpread`

Signed-off-by: Dave Chen dave.chen@arm.com

<!-- Thanks for sending a pull request! Here are some tips for you:

  1. If this is your first time, please read our contributor guidelines: https://git.k8s.io/community/contributors/guide/first-contribution.md#your-first-contribution and developer guide https://git.k8s.io/community/contributors/devel/development.md#development-guide
  2. Please label this pull request according to what type of issue you are addressing, especially if this is a release targeted pull request. For reference on required PR/issue labels, read here: https://git.k8s.io/community/contributors/devel/sig-release/release.md#issuepr-kind-label
  3. Ensure you have added or ran the appropriate tests for your PR: https://git.k8s.io/community/contributors/devel/sig-testing/testing.md
  4. If you want faster PR reviews, read how: https://git.k8s.io/community/contributors/guide/pull-requests.md#best-practices-for-faster-reviews
  5. If the PR is unfinished, see how to mark it: https://git.k8s.io/community/contributors/guide/pull-requests.md#marking-unfinished-pull-requests -->

What type of PR is this?

Uncomment only one /kind <> line, hit enter to put that in a new line, and remove leading whitespace from that line:

/kind cleanup

What this PR does / why we need it:

Which issue(s) this PR fixes: <!-- Automatically closes linked issue when PR is merged. Usage: Fixes #<issue number>, or Fixes (paste link of issue). If PR is about failing-tests or flakes, please post the related issues/tests in a comment and do not use Fixes --> Fixes #

Special notes for your reviewer:

Does this PR introduce a user-facing change?: <!-- If no, just write "NONE" in the release-note block below. If yes, a release note is required: Enter your extended release note in the block below. If the PR requires additional action from users switching to the new release, include the string "action required".

For more information on release notes see: https://git.k8s.io/community/contributors/guide/release-notes.md -->

NONE

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:

<!-- This section can be blank if this pull request does not require a release note.

When adding links which point to resources within git repositories, like KEPs or supporting documentation, please reference a specific commit and avoid linking directly to the master branch. This ensures that links reference a specific point in time, rather than a document that may change over time.

See here for guidance on getting permanent links to files: https://help.github.com/en/articles/getting-permanent-links-to-files

Please use the following format for linking documentation:

-->


+3 -3

0 comment

1 changed file

pr created time in 3 days

create branch chendave/kubernetes

branch : fix_testcase

created branch time in 3 days

pull request comment kubernetes/kubernetes

Change the exception to avoid the cost of preemption

/retest

chendave

comment created time in 3 days

Pull request review comment kubeedge/kubeedge

Optimize binary build process

```diff
 jobs:
       arch: amd64
       before_script:
         - sudo apt-get install upx-ucl -y
-        - sudo apt-get install gcc-aarch64-linux-gnu -y
-        - sudo apt-get install libc6-dev-arm64-cross -y
-        - sudo apt-get install gcc-arm-linux-gnueabi -y
-        - sudo apt-get install libc6-dev-armel-cross -y
```

Why? IIRC, this is for cross-building on amd64.

daixiang0

comment created time in 3 days

pull request comment kubeedge/kubeedge

Docs: update local setup doc

/hold cancel

I will try to add some docs later.

daixiang0

comment created time in 3 days

pull request comment kubeedge/kubeedge

Docs: update local setup doc

/hold

daixiang0

comment created time in 3 days

Pull request review comment kubeedge/kubeedge

Docs: update local setup doc

````diff
 Deploying KubeEdge locally is used to test, never use this way in production env

 ## Setup Cloud Side (KubeEdge Master Node)

+### Create CDRs
+
+```shell
+kubectl apply -f https://raw.githubusercontent.com/kubeedge/kubeedge/master/build/crds/devices/devices_v1alpha1_device.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubeedge/kubeedge/master/build/crds/devices/devices_v1alpha1_devicemodel.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubeedge/kubeedge/master/build/crds/reliablesyncs/cluster_objectsync_v1alpha1.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubeedge/kubeedge/master/build/crds/reliablesyncs/objectsync_v1alpha1.yaml
+```
+
+
 ### Prepare config file

 ```shell
 # cloudcore --minconfig > cloudcore.yaml
````

BTW, I still think we need the doc on the manual build.

daixiang0

comment created time in 3 days

pull request comment kubeedge/kubeedge

Docs: update local setup doc

Thanks for addressing my comments!

/lgtm

/approve

daixiang0

comment created time in 3 days

Pull request review comment kubeedge/kubeedge

Docs: update local setup doc

 The goal of the community is to develop a cloud native edge computing platform b  - Fork the repository on GitHub - Download the repository--## Your First Contribution--We will help you to contribute in different areas like filing issues, developing features, fixing critical bugs and getting your work reviewed and merged.---### Find something to work on--We are always in need of help, be it fixing documentation, reporting bugs or writing some code.-Look at places where you feel best coding practices aren't followed, code refactoring is needed or tests are missing.-Here is how you get started.--#### Find a good first topic--There are [multiple repositories](https://github.com/kubeedge/) within the KubeEdge organization.-Each repository has beginner-friendly issues that provide a good first issue.-For example, [kubeedge/kubeedge](https://github.com/kubeedge/kubeedge) has [help wanted](https://github.com/kubeedge/kubeedge/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22) and [good first issue](https://github.com/kubeedge/kubeedge/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22) labels for issues that should not need deep knowledge of the system.-We can help new contributors who wish to work on such issues.--Another good way to contribute is to find a documentation improvement, such as a missing/broken link. Please see [Contributing](#Contributor Workflow) below for the workflow.--#### Work on an issue--When you are willing to take on an issue, you can assign it to yourself. 
Just reply with `/assign` or `/assign @yourself` on an issue,-then the robot will assign the issue to you and your name will present at `Assignees` list.--## File an Issue--While we encourage everyone to contribute code, it is also appreciated when someone reports an issue.--Issues should be filed under the appropriate KubeEdge sub-repository.--*Example:* a KubeEdge issue should be opened to [kubeedge/kubeedge](https://github.com/kubeedge/kubeedge/issues).--Please follow the prompted submission guidelines while opening an issue:--- Specific. Include as much details as possible: which version, what environment, what configuration, etc. If the bug is related to running the kubeedge server, please attach the kubeedge log (the starting log with kubeedge configuration is especially important).--- Reproducible. Include the steps to reproduce the problem. We understand some issues might be hard to reproduce, please includes the steps that might lead to the problem.--- Isolated. Please try to isolate and reproduce the bug with minimum dependencies. It would significantly slow down the speed to fix a bug if too many dependencies are involved in a bug report.--- Unique. Do not duplicate existing bug report.--- Scoped. One bug per report. Do not follow up with another bug inside one report.--We might ask for further information about the issue. Any duplicated report will be closed.--## Contributor Workflow--Please do not ever hesitate to ask a question or send a pull request.--This is a rough outline of what a contributor's workflow looks like:--- Create a topic branch from where to base the contribution. 
This is usually master.-- Make commits of logical units.-- Make sure commit messages are in the proper format (see below).-- Push changes in a topic branch to a personal fork of the repository.-- Submit a pull request to [kubeedge/kubeedge](https://github.com/kubeedge/kubeedge).-- The PR must receive an approval from two maintainers.--### Creating Pull Requests--Pull requests are often called simply "PR".-KubeEdge generally follows the standard [github pull request](https://help.github.com/articles/about-pull-requests/) process.--In addition to the above process, a bot will begin applying structured labels to your PR.--The bot may also make some helpful suggestions for commands to run in your PR to facilitate review.-These `/command` options can be entered in comments to trigger auto-labeling and notifications.-Refer to its [command reference documentation](https://go.k8s.io/bot-commands).--### Code Review--To make it easier for your PR to receive reviews, consider the reviewers will need you to:--* follow [good coding guidelines](https://github.com/golang/go/wiki/CodeReviewComments).-* write [good commit messages](https://chris.beams.io/posts/git-commit/).-* break large changes into a logical series of smaller patches which individually make easily understandable changes, and in aggregate solve a broader issue.-* label PRs with appropriate reviewers: to do this read the messages the bot sends you to guide you through the PR process.--### Format of the commit message--We follow a rough convention for commit messages that is designed to answer two questions: what changed and why.-The subject line should feature the what and the body of the commit should describe the why.--```-scripts: add test codes for metamanager--this add some unit test codes to improve code coverage for metamanager--Fixes #12-```--The format can be described more formally as follows:--```-<subsystem>: <what changed>-<BLANK LINE>-<why this change was made>-<BLANK LINE>-<footer>-```--The first 
line is the subject and should be no longer than 70 characters, the second line is always blank, and other lines should be wrapped at 80 characters. This allows the message to be easier to read on GitHub as well as in various git tools.--Note: if your pull request isn't getting enough attention, you can use the reach out on Slack to get help finding reviewers.--### Testing--There are multiple types of tests.-The location of the test code varies with type, as do the specifics of the environment needed to successfully run the test:--* Unit: These confirm that a particular function behaves as intended. Unit test source code can be found adjacent to the corresponding source code within a given package. These are easily run locally by any developer.-* Integration: These tests cover interactions of package components or interactions between KubeEdge components and Kubernetes control plane components like API server.  An example would be testing whether the device controller is able to create config maps when device CRDs are created in the API server.-* End-to-end ("e2e"): These are broad tests of overall system behavior and coherence. The e2e tests are in [kubeedge e2e](https://github.com/kubeedge/kubeedge/tree/master/tests/e2e).--Continuous integration will run these tests on PRs.+- Read [this](../../CONTRIBUTING.md) for more details

Got you!

daixiang0

comment created time in 3 days

Pull request review comment kubeedge/kubeedge

Docs: update local setup doc


Why do we need CONTRIBUTING.md under the root dir? Can't this move to docs as well?

daixiang0

comment created time in 3 days
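The commit-message convention quoted in the diff above (a `<subsystem>: <what changed>` subject of at most 70 characters, followed by a blank line and the why) can be checked mechanically. The following is only a hedged sketch of such a check, not an official KubeEdge hook; the prefix regex is an assumption:

```shell
# Lint a commit message against the quoted convention:
#   <subsystem>: <what changed>   (subject, <= 70 chars)
#   <BLANK LINE>
#   <why this change was made>
msg='scripts: add test codes for metamanager

this add some unit test codes to improve code coverage for metamanager

Fixes #12'

subject=$(printf '%s\n' "$msg" | sed -n '1p')
second=$(printf '%s\n' "$msg" | sed -n '2p')

# Subject line length and "<subsystem>: " prefix.
[ "${#subject}" -le 70 ] && echo "subject length ok"
printf '%s\n' "$subject" | grep -q '^[a-z0-9_/-]*: ' && echo "subject format ok"

# Second line must be blank.
[ -z "$second" ] && echo "blank second line ok"
```

Wired into a `commit-msg` git hook, the same checks would reject malformed subjects before they reach review.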

Pull request review comment kubeedge/kubeedge

Docs: update local setup doc

````diff
 Deploying KubeEdge locally is used to test, never use this way in production env
 
 ## Setup Cloud Side (KubeEdge Master Node)
 
+### Create CRDs
+
+```shell
+kubectl apply -f https://raw.githubusercontent.com/kubeedge/kubeedge/master/build/crds/devices/devices_v1alpha1_device.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubeedge/kubeedge/master/build/crds/devices/devices_v1alpha1_devicemodel.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubeedge/kubeedge/master/build/crds/reliablesyncs/cluster_objectsync_v1alpha1.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubeedge/kubeedge/master/build/crds/reliablesyncs/objectsync_v1alpha1.yaml
+```
+
+
 ### Prepare config file
 
 ```shell
 # cloudcore --minconfig > cloudcore.yaml
````

Actually, we used to have it, but I have no idea where it has gone.

daixiang0

comment created time in 3 days

Pull request review comment kubeedge/kubeedge

Docs: update local setup doc

````diff
 Deploying KubeEdge locally is used to test, never use this way in production env
 
 ## Setup Cloud Side (KubeEdge Master Node)
 
+### Create CRDs
+
+```shell
+kubectl apply -f https://raw.githubusercontent.com/kubeedge/kubeedge/master/build/crds/devices/devices_v1alpha1_device.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubeedge/kubeedge/master/build/crds/devices/devices_v1alpha1_devicemodel.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubeedge/kubeedge/master/build/crds/reliablesyncs/cluster_objectsync_v1alpha1.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubeedge/kubeedge/master/build/crds/reliablesyncs/objectsync_v1alpha1.yaml
+```
+
+
 ### Prepare config file
 
 ```shell
 # cloudcore --minconfig > cloudcore.yaml
````

This is not what I mean, it's purely for cross-compiling.

daixiang0

comment created time in 3 days

Pull request review comment kubeedge/kubeedge

Docs: update local setup doc

````diff
 The goal of the community is to develop a cloud native edge computing platform b
 
 - Fork the repository on GitHub
 - Download the repository
-
-## Your First Contribution
-
-We will help you to contribute in different areas like filing issues, developing features, fixing critical bugs and getting your work reviewed and merged.
-
-
-### Find something to work on
-
-We are always in need of help, be it fixing documentation, reporting bugs or writing some code.
-Look at places where you feel best coding practices aren't followed, code refactoring is needed or tests are missing.
-Here is how you get started.
-
-#### Find a good first topic
-
-There are [multiple repositories](https://github.com/kubeedge/) within the KubeEdge organization.
-Each repository has beginner-friendly issues that provide a good first issue.
-For example, [kubeedge/kubeedge](https://github.com/kubeedge/kubeedge) has [help wanted](https://github.com/kubeedge/kubeedge/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22) and [good first issue](https://github.com/kubeedge/kubeedge/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22) labels for issues that should not need deep knowledge of the system.
-We can help new contributors who wish to work on such issues.
-
-Another good way to contribute is to find a documentation improvement, such as a missing/broken link. Please see [Contributing](#Contributor Workflow) below for the workflow.
-
-#### Work on an issue
-
-When you are willing to take on an issue, you can assign it to yourself. Just reply with `/assign` or `/assign @yourself` on an issue,
-then the robot will assign the issue to you and your name will present at `Assignees` list.
-
-## File an Issue
-
-While we encourage everyone to contribute code, it is also appreciated when someone reports an issue.
-
-Issues should be filed under the appropriate KubeEdge sub-repository.
-
-*Example:* a KubeEdge issue should be opened to [kubeedge/kubeedge](https://github.com/kubeedge/kubeedge/issues).
-
-Please follow the prompted submission guidelines while opening an issue:
-
-- Specific. Include as much details as possible: which version, what environment, what configuration, etc. If the bug is related to running the kubeedge server, please attach the kubeedge log (the starting log with kubeedge configuration is especially important).
-
-- Reproducible. Include the steps to reproduce the problem. We understand some issues might be hard to reproduce, please includes the steps that might lead to the problem.
-
-- Isolated. Please try to isolate and reproduce the bug with minimum dependencies. It would significantly slow down the speed to fix a bug if too many dependencies are involved in a bug report.
-
-- Unique. Do not duplicate existing bug report.
-
-- Scoped. One bug per report. Do not follow up with another bug inside one report.
-
-We might ask for further information about the issue. Any duplicated report will be closed.
-
-## Contributor Workflow
-
-Please do not ever hesitate to ask a question or send a pull request.
-
-This is a rough outline of what a contributor's workflow looks like:
-
-- Create a topic branch from where to base the contribution. This is usually master.
-- Make commits of logical units.
-- Make sure commit messages are in the proper format (see below).
-- Push changes in a topic branch to a personal fork of the repository.
-- Submit a pull request to [kubeedge/kubeedge](https://github.com/kubeedge/kubeedge).
-- The PR must receive an approval from two maintainers.
-
-### Creating Pull Requests
-
-Pull requests are often called simply "PR".
-KubeEdge generally follows the standard [github pull request](https://help.github.com/articles/about-pull-requests/) process.
-
-In addition to the above process, a bot will begin applying structured labels to your PR.
-
-The bot may also make some helpful suggestions for commands to run in your PR to facilitate review.
-These `/command` options can be entered in comments to trigger auto-labeling and notifications.
-Refer to its [command reference documentation](https://go.k8s.io/bot-commands).
-
-### Code Review
-
-To make it easier for your PR to receive reviews, consider the reviewers will need you to:
-
-* follow [good coding guidelines](https://github.com/golang/go/wiki/CodeReviewComments).
-* write [good commit messages](https://chris.beams.io/posts/git-commit/).
-* break large changes into a logical series of smaller patches which individually make easily understandable changes, and in aggregate solve a broader issue.
-* label PRs with appropriate reviewers: to do this read the messages the bot sends you to guide you through the PR process.
-
-### Format of the commit message
-
-We follow a rough convention for commit messages that is designed to answer two questions: what changed and why.
-The subject line should feature the what and the body of the commit should describe the why.
-
-```
-scripts: add test codes for metamanager
-
-this add some unit test codes to improve code coverage for metamanager
-
-Fixes #12
-```
-
-The format can be described more formally as follows:
-
-```
-<subsystem>: <what changed>
-<BLANK LINE>
-<why this change was made>
-<BLANK LINE>
-<footer>
-```
-
-The first line is the subject and should be no longer than 70 characters, the second line is always blank, and other lines should be wrapped at 80 characters. This allows the message to be easier to read on GitHub as well as in various git tools.
-
-Note: if your pull request isn't getting enough attention, you can use the reach out on Slack to get help finding reviewers.
-
-### Testing
-
-There are multiple types of tests.
-The location of the test code varies with type, as do the specifics of the environment needed to successfully run the test:
-
-* Unit: These confirm that a particular function behaves as intended. Unit test source code can be found adjacent to the corresponding source code within a given package. These are easily run locally by any developer.
-* Integration: These tests cover interactions of package components or interactions between KubeEdge components and Kubernetes control plane components like API server.  An example would be testing whether the device controller is able to create config maps when device CRDs are created in the API server.
-* End-to-end ("e2e"): These are broad tests of overall system behavior and coherence. The e2e tests are in [kubeedge e2e](https://github.com/kubeedge/kubeedge/tree/master/tests/e2e).
-
-Continuous integration will run these tests on PRs.
+- Read [this](../../CONTRIBUTING.md) for more details
````

I would hope we can combine the two docs into one.

daixiang0

comment created time in 3 days

Pull request review comment kubeedge/kubeedge

Docs: update local setup doc

````diff
 Deploying KubeEdge locally is used to test, never use this way in production env
 
 ## Setup Cloud Side (KubeEdge Master Node)
 
+### Create CRDs
+
+```shell
+kubectl apply -f https://raw.githubusercontent.com/kubeedge/kubeedge/master/build/crds/devices/devices_v1alpha1_device.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubeedge/kubeedge/master/build/crds/devices/devices_v1alpha1_devicemodel.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubeedge/kubeedge/master/build/crds/reliablesyncs/cluster_objectsync_v1alpha1.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubeedge/kubeedge/master/build/crds/reliablesyncs/objectsync_v1alpha1.yaml
+```
+
+
 ### Prepare config file
 
 ```shell
 # cloudcore --minconfig > cloudcore.yaml
 ```
 
-Update any fields if needed.
+Update any fields if needed, details refer to [this](../configuration/kubeedge.md#configuration-edge-side-kubeedge-worker-node).
````

Change to something like this:

please refer to [configuration for edge](...) for details

and the link should point to the cloud configuration instead of the edge one.

daixiang0

comment created time in 3 days

Pull request review comment kubeedge/kubeedge

Docs: update local setup doc

````diff
 Deploying KubeEdge locally is used to test, never use this way in production env
 
 ## Setup Cloud Side (KubeEdge Master Node)
 
+### Create CRDs
+
+```shell
+kubectl apply -f https://raw.githubusercontent.com/kubeedge/kubeedge/master/build/crds/devices/devices_v1alpha1_device.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubeedge/kubeedge/master/build/crds/devices/devices_v1alpha1_devicemodel.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubeedge/kubeedge/master/build/crds/reliablesyncs/cluster_objectsync_v1alpha1.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubeedge/kubeedge/master/build/crds/reliablesyncs/objectsync_v1alpha1.yaml
+```
+
+
 ### Prepare config file
 
 ```shell
 # cloudcore --minconfig > cloudcore.yaml
 ```
 
-Update any fields if needed.
+Update any fields if needed, details refer to [this](../configuration/kubeedge.md#configuration-edge-side-kubeedge-worker-node).
 
 ### Run
 
 ```shell
 # cloudcore --config cloudcore.yaml
 ```
 
+Run `cloudcore -h` to get help info and add options if needed.
+
+
 ## Setup Edge Side (KubeEdge Worker Node)
 
 ### Prepare config file
 
+- generate config file
+
 ```shell
 # edgecore --minconfig > edgecore.yaml
 ```
 
-Update any fields if needed.
+- get token value at cloud side:
+
+```shell
+# kubectl get secret -nkubeedge tokensecret -o=jsonpath='{.data.tokendata}' | base64 -d
+```
+
+- update token value in edgecore config file:
+
+```shell
+# sed -i -e "s|token: .*|token: ${token}|g" edgecore.yaml
+```
+
+The `token` is what the above step gets.
+
+Update any fields if needed, details refer to [this](../configuration/kubeedge.md#configuration-edge-side-kubeedge-worker-node).
````

Change to something like this:

please refer to [configuration for edge](...) for details
daixiang0

comment created time in 3 days
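The token wiring quoted in the diff above can be dry-run without a cluster. In this sketch the file path, the YAML layout, and the base64 value are made-up stand-ins; on a real cloud side the value would come from the `tokensecret` secret via the quoted `kubectl ... | base64 -d` command:

```shell
# Stand-in for the file produced by `edgecore --minconfig > edgecore.yaml`.
cat > /tmp/edgecore-demo.yaml <<'EOF'
modules:
  edgeHub:
    token: REPLACE_ME
EOF

# Real setup: token=$(kubectl get secret -nkubeedge tokensecret \
#   -o=jsonpath='{.data.tokendata}' | base64 -d)
# Here we only simulate the base64 decoding step with a fabricated value.
token=$(printf '%s' 'ZXhhbXBsZS10b2tlbg==' | base64 -d)

# Same substitution as in the quoted doc diff.
sed -i -e "s|token: .*|token: ${token}|g" /tmp/edgecore-demo.yaml

grep "token: ${token}" /tmp/edgecore-demo.yaml
```

The `sed` expression uses `|` as the delimiter so the token may safely contain `/` characters.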

Pull request review comment kubeedge/kubeedge

Docs: update local setup doc

````diff
 Deploying KubeEdge locally is used to test, never use this way in production env
 
 ## Setup Cloud Side (KubeEdge Master Node)
 
+### Create CRDs
+
+```shell
+kubectl apply -f https://raw.githubusercontent.com/kubeedge/kubeedge/master/build/crds/devices/devices_v1alpha1_device.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubeedge/kubeedge/master/build/crds/devices/devices_v1alpha1_devicemodel.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubeedge/kubeedge/master/build/crds/reliablesyncs/cluster_objectsync_v1alpha1.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubeedge/kubeedge/master/build/crds/reliablesyncs/objectsync_v1alpha1.yaml
+```
+
+
 ### Prepare config file
 
 ```shell
 # cloudcore --minconfig > cloudcore.yaml
````

Do we have one already? We used to, but I have no idea where it is now.

daixiang0

comment created time in 3 days

Pull request review comment kubeedge/kubeedge

Docs: update local setup doc

```diff
 KubeEdge 由以下组件构成:
 
 ## 使用
 
-* [先决条件](./docs/setup/setup.md#prerequisites)
-* [运行KubeEdge](./docs/setup/setup.md)
-* [部署应用](./docs/setup/setup.md#deploy-application-on-cloud-side)
-* [运行测试](./docs/setup/setup.md#run-tests)
```

Do we have useful docs for the other two items?

daixiang0

comment created time in 3 days

pull request comment kubernetes/kubernetes

Change the exception to avoid the cost of preemption

/retest

chendave

comment created time in 3 days

pull request comment kubernetes/kubernetes

Change the exception to avoid the cost of preemption

Thanks for all those great suggestions! All comments are addressed.

chendave

comment created time in 3 days

Pull request review comment kubernetes/kubernetes

Change the exception to avoid the cost of preemption

```diff
 func TestMultipleConstraints(t *testing.T) {
 		nodes        []*v1.Node
 		existingPods []*v1.Pod
 		fits         map[string]bool
+		wantErrCode  map[string]framework.Code
```

Done

chendave

comment created time in 3 days

Pull request review comment kubernetes/kubernetes

Change the exception to avoid the cost of preemption

```diff
 import (
 const (
 	// ErrReasonConstraintsNotMatch is used for PodTopologySpread filter error.
 	ErrReasonConstraintsNotMatch = "node(s) didn't match pod topology spread constraints"
+	// ErrReasonNodeLabelNotMatch is used when the node label doesn't hold the required label.
```

Done

chendave

comment created time in 3 days

Pull request review comment kubernetes/kubernetes

Change the exception to avoid the cost of preemption

```diff
 import (
 const (
 	// ErrReasonConstraintsNotMatch is used for PodTopologySpread filter error.
 	ErrReasonConstraintsNotMatch = "node(s) didn't match pod topology spread constraints"
+	// ErrReasonNodeLabelNotMatch is used when the node label doesn't hold the required label.
+	ErrReasonNodeLabelNotMatch = "node doesn't have the required label"
```

Done

chendave

comment created time in 3 days

push event chendave/kubernetes

Dave Chen

commit sha b2b813904cc780dddb6de7d0a088bd366499c66e

Change the exception to avoid the cost of preemption A node whose labels don't contain the required topologyKeys in `Constraints` cannot be resolved by preempting the pods on that node. One use case that could easily reproduce the issue is, - set `alwaysCheckAllPredicates` to true. - one node contains all the required topologyKeys but fails in predicates such as 'taint'. - another node doesn't hold all the required topologyKeys, and thus returns the `Unschedulable` status code. - the scheduler will try to preempt the pods on the above node with lower priorities. Signed-off-by: Dave Chen <dave.chen@arm.com>

view details

push time in 3 days

Pull request review comment kubernetes/kubernetes

Change the exception to avoid the cost of preemption

```diff
 func TestSingleConstraint(t *testing.T) {
 		nodes        []*v1.Node
 		existingPods []*v1.Pod
 		fits         map[string]bool
+		wantErrCode  map[string]int
```

Done, please help check whether that is what you want.

chendave

comment created time in 3 days

push event chendave/kubernetes

Dave Chen

commit sha be202b445c0fbfcfc653301a7de2486dfb9b4019

Change the exception to avoid the cost of preemption A node whose labels don't contain the required topologyKeys in `Constraints` cannot be resolved by preempting the pods on that node. One use case that could easily reproduce the issue is - set `alwaysCheckAllPredicates` to true. - one node contains all the required topologyKeys but fails in predicates such as 'taint'. - another node doesn't hold all the required topologyKeys, and thus returns the `Unschedulable` status code. - the scheduler will try to preempt the pods on the above node with lower priorities. Signed-off-by: Dave Chen <dave.chen@arm.com>

view details

push time in 3 days

pull request comment kubernetes/kubernetes

Change the exception to avoid the cost of preemption

/approve

Also fix golint

Done

chendave

comment created time in 3 days

pull request comment kubernetes/kubernetes

Change the exception to avoid the cost of preemption

Because the pod is pending on scheduling, the scheduler will recheck after a period of time whether there is a node available for binding, and then the preemption will happen there again.

Well, that's normal behavior for any pending pod. Please remove that note in your PR description so we don't get confused about the bug in the future.

done

chendave

comment created time in 3 days

push event chendave/kubernetes

Dave Chen

commit sha 0531cdfa4716b70c6f4925a2cce1ea8f877b518f

Change the exception to avoid the cost of preemption A node whose labels don't contain the required topologyKeys in `Constraints` cannot be resolved by preempting the pods on that node. One use case that could easily reproduce the issue is - set `alwaysCheckAllPredicates` to true. - one node contains all the required topologyKeys but fails in predicates such as 'taint'. - another node doesn't hold all the required topologyKeys, and thus returns the `Unschedulable` status code. - the scheduler will try to preempt the pods on the above node with lower priorities. Signed-off-by: Dave Chen <dave.chen@arm.com>

view details

push time in 3 days

Pull request review comment kubernetes/kubernetes

Change the exception to avoid the cost of preemption

```diff
 func (pl *PodTopologySpread) Filter(ctx context.Context, cycleState *framework.C
 		tpVal, ok := node.Labels[c.TopologyKey]
 		if !ok {
 			klog.V(5).Infof("node '%s' doesn't have required label '%s'", node.Name, tpKey)
-			return framework.NewStatus(framework.Unschedulable, ErrReasonConstraintsNotMatch)
+			return framework.NewStatus(framework.UnschedulableAndUnresolvable, ErrReasonConstraintsNotMatch)
```

done

chendave

comment created time in 4 days

push event chendave/kubernetes

Dave Chen

commit sha f12f747be5d8c2bd697b9601711a42021c4ef6cd

Change the exception to avoid the cost of preemption A node whose labels don't contain the required topologyKeys in `Constraints` cannot be resolved by preempting the pods on that node. The current exception type will make scheduling an infinite loop of pod preemption. One use case that could easily reproduce the issue is - set `alwaysCheckAllPredicates` to true. - one node contains all the required topologyKeys but fails in predicates such as 'taint'. - another node doesn't hold all the required topologyKeys, and thus returns the `Unschedulable` status code. - the scheduler will try to preempt the pods on the above node with lower priorities and loop infinitely. Signed-off-by: Dave Chen <dave.chen@arm.com>

view details

push time in 4 days

push event chendave/kubernetes

Claudiu Belu

commit sha 862bd63f808106bf4194bad0320820a2c0b587f4

tests: Fixes Windows kubelet-stats test The test spawns 10 pods with the same pod name, which contains multiple containers with the same container name. Because of this, the test fails. This commit addresses this issue.

view details

Rostislav M. Georgiev

commit sha e7427c66f3b5cc0d3b4c39361ed22b40a9475ceb

kubeadm: Merge getK8sVersionFromUserInput into enforceRequirements `getK8sVersionFromUserInput` would attempt to load the config from a user specified YAML file (via the `--config` option of `kubeadm upgrade plan` or `kubeadm upgrade apply`). This is done in order to fetch the `KubernetesVersion` field of the `ClusterConfiguration`. The complete config is then immediately discarded. The actual config that is used during the upgrade process is fetched from within `enforceRequirements`. This, along with the fact that `getK8sVersionFromUserInput` is always called immediately after `enforceRequirements` makes it possible to merge the two. Merging them would help us simplify things and avoid future problems in component config related patches. Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>

view details

Rostislav M. Georgiev

commit sha 5d0127493c8d1771769a3b79bdfa661fdfb0a90b

kubeadm upgrade plan: don't load component configs Component configs are used by kubeadm upgrade plan at the moment. However, they can prevent kubeadm upgrade plan from functioning if loading of an unsupported version of a component config is attempted. For that matter it's best to just stop loading component configs as part of the kubeadm config load process. Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>

view details

Claudiu Belu

commit sha 06ca9c8aab3373a073f44f0800bf1d33f8ddd741

test images: Adds OWNERS files for images Adds reviewers to the OWNERS files in the kubernetes/test/images folder. The reviewers are added automatically, based on their contributions on an image (>= 20% code churn). Note that the code churn is taken into account for authors, and not committers. Adds OWNERS files for: ipc-utils, node-perf, nonroot, regression-issue-74839, resource-consumer, sample-device-plugin.

view details

Kishor Joshi

commit sha f76c21cce6237b89f81883cef27b182f825dbab5

Allow UDP for AWS NLB Co-authored-by: Patrick Ryan <pjryan@my.okcu.edu> Co-authored-by: Owen Ou <jingweno@gmail.com>

view details

Kenichi Omichi

commit sha f7fb21c39468554d27ccc53ad9aebabde744d18d

Add check for blocking tests in e2e framework The e2e test framework provides useful functions for implementing e2e tests, but the framework itself should not contain e2e tests themselves. This adds a hacking check for blocking the implementation of e2e tests in the framework.

view details

Matthias Bertschy

commit sha 681202abd0c22746c93688958da78390f3be0333

Add tests covering startup probe without readiness

view details

Federico Paolinelli

commit sha b8819b91a83481893dbd0fb9e5ac61458529f6c6

Add sctp support to agnhost connect / porter commands. Signed-off-by: Federico Paolinelli <fpaoline@redhat.com>

view details

Rodrigo Campos

commit sha 82856541fb42d50164d86d816c054d8b53740ecd

kubelet: Fix log typo when killing a container Signed-off-by: Rodrigo Campos <rodrigo@kinvolk.io>

view details

Ali Farah

commit sha 51028023c968de597f4ad41d5b23cb34f6a1c3d5

fix typo

view details

qini

commit sha 9a37d1d92c8b7372fe24b610159a04734bc155fa

Add mock clients for container service client and deployment client.

view details

Mark Janssen

commit sha e3a0ca27314ca60ebe22eacaeaba158b8c494e3a

Fix staticcheck failures for pkg/registry/... Errors from staticcheck: pkg/registry/autoscaling/horizontalpodautoscaler/storage/storage_test.go:207:7: this value of err is never used (SA4006) pkg/registry/core/namespace/storage/storage.go:256:5: options.OrphanDependents is deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. +optional (SA1019) pkg/registry/core/namespace/storage/storage.go:257:11: options.OrphanDependents is deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. +optional (SA1019) pkg/registry/core/namespace/storage/storage.go:266:5: options.OrphanDependents is deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. +optional (SA1019) pkg/registry/core/namespace/storage/storage.go:267:11: options.OrphanDependents is deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. 
+optional (SA1019) pkg/registry/core/persistentvolumeclaim/storage/storage_test.go:165:2: this value of err is never used (SA4006) pkg/registry/core/resourcequota/storage/storage_test.go:202:7: this value of err is never used (SA4006) pkg/registry/core/service/ipallocator/allocator_test.go:338:2: this value of other is never used (SA4006) pkg/registry/core/service/portallocator/allocator_test.go:199:2: this value of other is never used (SA4006) pkg/registry/core/service/storage/rest_test.go:1843:2: this value of location is never used (SA4006) pkg/registry/core/service/storage/rest_test.go:1849:2: this value of location is never used (SA4006) pkg/registry/core/service/storage/rest_test.go:3174:20: use net.IP.Equal to compare net.IPs, not bytes.Equal (SA1021) pkg/registry/core/service/storage/rest_test.go:3178:20: use net.IP.Equal to compare net.IPs, not bytes.Equal (SA1021) pkg/registry/core/service/storage/rest_test.go:3185:20: use net.IP.Equal to compare net.IPs, not bytes.Equal (SA1021) pkg/registry/core/service/storage/rest_test.go:3189:20: use net.IP.Equal to compare net.IPs, not bytes.Equal (SA1021)

view details

Brian Pursley

commit sha 02742e3450b9534ea6adeb059cd451b4cb35c820

Add bazel_skylib_workspace to fix make bazel-test 'no matching toolchains found' error

view details

Rostislav M. Georgiev

commit sha 1d2d15ee033a55767dfb84261940f719a963b446

kubeadm upgrade: Allow supplying hand migrated component configs Currently, kubeadm would refuse to perform an upgrade (or even planning for one) if it detects a user supplied unsupported component config version. Hence, users are required to manually upgrade their component configs and store them in the config maps prior to executing `kubeadm upgrade plan` or `kubeadm upgrade apply`. This change introduces the ability to use the `--config` option of the `kubeadm upgrade plan` and `kubeadm upgrade apply` commands to supply a YAML file containing component configs to be used in place of the existing ones in the cluster upon upgrade. The old behavior where `--config` is used to reconfigure a cluster is still supported. kubeadm automatically detects which behavior to use based on the presence (or absence) of kubeadm config types (API group `kubeadm.kubernetes.io`). Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>

view details

Jing Xu

commit sha 7012994a61c142b173e4a1fbe2a12e3d580b11d5

Fix issue in kubelet getMountedVolumePathListFromDisk This PR fixes issue #74650. It adds the extra check for /mount dir under pod volume dir. It also adds the unit test for this function

view details

wojtekt

commit sha 2a7978e2be62e7aa769eca1ca9a47bd940c09c17

Merge migrate-if-needed etcd bash script

view details

Lubomir I. Ivanov

commit sha bcc16b9c1e660bf008f7d8a484f18951a06fd5bb

kubeadm: remove negative test cases from TestUploadConfiguration UploadConfiguration() now always retries the underlying API calls, which can make TestUploadConfiguration run for a long time. Remove the negative test cases, where errors are expected. Negative test cases should be tested in app/util/apiclient, where a short timeout / retry count should be possible for unit tests.

view details

Samuel Davidson

commit sha 31ae200ebf31659975ea1cbdee75dc7e1b08b54d

fix for missing kube-env var in SNI config

view details

RainbowMango

commit sha 168c695e1ad1546f16c55cfc6a7fb339405c4785

Update two metrics name to make promlint happy.

view details

Dave Chen

commit sha e1d61b621a987b69f6f4ccf5d6ba66eef4409b15

Scheduler: remove the misleading comments in `NodeResourcesBalancedAllocation` Signed-off-by: Dave Chen dave.chen@arm.com

view details

push time in 4 days

push event chendave/initrepo

Dave Chen

commit sha b81658a548efae4d55733346d5c99d1048311251

Add a sample policy config file for kube-scheduler btw, kube-scheduler.yaml is updated as well to fit the latest update in the code base.

view details

push time in 4 days

pull request comment kubernetes/kubernetes

Unify the scoring process for `PodTopologySpread` plugin

/close

chendave

comment created time in 4 days

pull request comment kubernetes/kubernetes

Change the exception to avoid the cost of preemption

+1 to the fix, but what do you mean by

scheduler will try to preempt the pods on the above node with lower priorities and loop infinitely.

That sounds like a separate bug. But maybe you just meant that it unnecessarily did a preemption step.

Because the pod is pending on scheduling, the scheduler will recheck after a period of time whether there is a node available for binding, and then the preemption will happen there again.

chendave

comment created time in 4 days

Pull request review comment kubeedge/kubeedge

Docs: update local setup doc

````diff
 Deploying KubeEdge locally is used to test, never use this way in production env
 
 ## Setup Cloud Side (KubeEdge Master Node)
 
+### Create CRDs
+
+```shell
+kubectl apply -f https://raw.githubusercontent.com/kubeedge/kubeedge/master/build/crds/devices/devices_v1alpha1_device.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubeedge/kubeedge/master/build/crds/devices/devices_v1alpha1_devicemodel.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubeedge/kubeedge/master/build/crds/reliablesyncs/cluster_objectsync_v1alpha1.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubeedge/kubeedge/master/build/crds/reliablesyncs/objectsync_v1alpha1.yaml
+```
+
+
 ### Prepare config file
 
 ```shell
 # cloudcore --minconfig > cloudcore.yaml
 ```
 
-Update any fields if needed.
+Update any fields if needed, details refer to [this](../configuration/kubeedge.md).
 
 ### Run
 
 ```shell
 # cloudcore --config cloudcore.yaml
 ```
 
+Run `cloudcore -h` to get help info and add options if needed.
+
+
 ## Setup Edge Side (KubeEdge Worker Node)
 
 ### Prepare config file
 
+- generate config file
+
 ```shell
 # edgecore --minconfig > edgecore.yaml
 ```
 
-Update any fields if needed.
+- get token value at cloud side:
+
+```shell
+# kubectl get secret -nkubeedge tokensecret -o=jsonpath='{.data.tokendata}' | base64 -d
+```
+
+- update token value in edgecore config file:
+
+```shell
+# sed -i -e "s|token: .*|token: ${token}|g" edgecore.yaml
+```
+
+The `token` is what the above step gets.
+
+Update any fields if needed, details refer to [this](../configuration/kubeedge.md).
````

better to link to "configuration-edge-side-kubeedge-worker-node"

daixiang0

comment created time in 5 days

Pull request review commentkubeedge/kubeedge

Docs: update local setup doc

````diff
 Deploying KubeEdge locally is used to test, never use this way in production env
 
 ## Setup Cloud Side (KubeEdge Master Node)
 
+### Create CDRs
+
+```shell
+kubectl apply -f https://raw.githubusercontent.com/kubeedge/kubeedge/master/build/crds/devices/devices_v1alpha1_device.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubeedge/kubeedge/master/build/crds/devices/devices_v1alpha1_devicemodel.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubeedge/kubeedge/master/build/crds/reliablesyncs/cluster_objectsync_v1alpha1.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubeedge/kubeedge/master/build/crds/reliablesyncs/objectsync_v1alpha1.yaml
+```
+
 ### Prepare config file
 
 ```shell
 # cloudcore --minconfig > cloudcore.yaml
 ```
 
-Update any fields if needed.
+Update any fields if needed, details refer to [this](../configuration/kubeedge.md).
````

better to link to "configuration-cloud-side-kubeedge-master"

daixiang0

comment created time in 5 days

Pull request review commentkubeedge/kubeedge

Docs: update local setup doc

````diff
 Deploying KubeEdge locally is used to test, never use this way in production env
 
 ## Setup Cloud Side (KubeEdge Master Node)
 
+### Create CDRs
+
+```shell
+kubectl apply -f https://raw.githubusercontent.com/kubeedge/kubeedge/master/build/crds/devices/devices_v1alpha1_device.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubeedge/kubeedge/master/build/crds/devices/devices_v1alpha1_devicemodel.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubeedge/kubeedge/master/build/crds/reliablesyncs/cluster_objectsync_v1alpha1.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubeedge/kubeedge/master/build/crds/reliablesyncs/objectsync_v1alpha1.yaml
+```
+
 ### Prepare config file
 
 ```shell
 # cloudcore --minconfig > cloudcore.yaml
````

I remember there is a step to build those binaries, but I cannot find them anymore, where is it?

daixiang0

comment created time in 5 days

Pull request review commentkubeedge/kubeedge

Docs: update local setup doc

```diff
 KubeEdge consists of the following components:
 
 ## Usage
 
-* [先决条件](./docs/setup/setup.md#prerequisites)
-* [运行KubeEdge](./docs/setup/setup.md)
-* [部署应用](./docs/setup/setup.md#deploy-application-on-cloud-side)
-* [运行测试](./docs/setup/setup.md#run-tests)
```

can we keep some of them? for example

docs/setup/kubeedge_precheck.md could be used for "[先决条件]"

daixiang0

comment created time in 5 days

PR opened kubernetes/kubernetes

Unify the scoring process for `PodTopologySpread` plugin

The current implementation assumes every node has the hostname label defined, each with a different value, but singling hostname out makes the scoring flow unnecessarily complicated and hard to understand.

Signed-off-by: Dave Chen dave.chen@arm.com

<!-- Thanks for sending a pull request! Here are some tips for you:

  1. If this is your first time, please read our contributor guidelines: https://git.k8s.io/community/contributors/guide/first-contribution.md#your-first-contribution and developer guide https://git.k8s.io/community/contributors/devel/development.md#development-guide
  2. Please label this pull request according to what type of issue you are addressing, especially if this is a release targeted pull request. For reference on required PR/issue labels, read here: https://git.k8s.io/community/contributors/devel/sig-release/release.md#issuepr-kind-label
  3. Ensure you have added or ran the appropriate tests for your PR: https://git.k8s.io/community/contributors/devel/sig-testing/testing.md
  4. If you want faster PR reviews, read how: https://git.k8s.io/community/contributors/guide/pull-requests.md#best-practices-for-faster-reviews
  5. If the PR is unfinished, see how to mark it: https://git.k8s.io/community/contributors/guide/pull-requests.md#marking-unfinished-pull-requests -->

What type of PR is this?

Uncomment only one /kind <> line, hit enter to put that in a new line, and remove leading whitespace from that line:

/kind cleanup

/sig scheduling

What this PR does / why we need it:

Which issue(s) this PR fixes: <!-- Automatically closes linked issue when PR is merged. Usage: Fixes #<issue number>, or Fixes (paste link of issue). If PR is about failing-tests or flakes, please post the related issues/tests in a comment and do not use Fixes --> Fixes #

Special notes for your reviewer:

Does this PR introduce a user-facing change?: <!-- If no, just write "NONE" in the release-note block below. If yes, a release note is required: Enter your extended release note in the block below. If the PR requires additional action from users switching to the new release, include the string "action required".

For more information on release notes see: https://git.k8s.io/community/contributors/guide/release-notes.md -->

NONE

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:

<!-- This section can be blank if this pull request does not require a release note.

When adding links which point to resources within git repositories, like KEPs or supporting documentation, please reference a specific commit and avoid linking directly to the master branch. This ensures that links reference a specific point in time, rather than a document that may change over time.

See here for guidance on getting permanent links to files: https://help.github.com/en/articles/getting-permanent-links-to-files

Please use the following format for linking documentation:

-->


+16 -22

0 comment

2 changed files

pr created time in 5 days

create branch chendave/kubernetes

branch : unified_score

created branch time in 5 days

Pull request review commentkubernetes/kubernetes

Change the exception to avoid the cost of preemption

```diff
 func (pl *PodTopologySpread) Filter(ctx context.Context, cycleState *framework.C
 		tpVal, ok := node.Labels[c.TopologyKey]
 		if !ok {
 			klog.V(5).Infof("node '%s' doesn't have required label '%s'", node.Name, tpKey)
-			return framework.NewStatus(framework.Unschedulable, ErrReasonConstraintsNotMatch)
+			return framework.NewStatus(framework.UnschedulableAndUnresolvable, ErrReasonConstraintsNotMatch)
```

create a unittest?

chendave

comment created time in 5 days
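A unit test for that one-line change could take roughly this shape: table-driven, asserting on the status code for a node with and without the topology key. The types below are simplified stand-ins, not the real scheduler framework API.

```go
package main

import "fmt"

// Code is a simplified stand-in for framework status codes.
type Code int

const (
	Success Code = iota
	Unschedulable
	UnschedulableAndUnresolvable
)

// Node is a simplified stand-in for the node info the filter inspects.
type Node struct {
	Name   string
	Labels map[string]string
}

// filterByTopologyKey mimics the changed branch of PodTopologySpread.Filter:
// a node missing the constraint's topology key is now rejected as
// UnschedulableAndUnresolvable instead of Unschedulable.
func filterByTopologyKey(n Node, topologyKey string) Code {
	if _, ok := n.Labels[topologyKey]; !ok {
		return UnschedulableAndUnresolvable
	}
	return Success
}

func main() {
	// Table-driven cases, the shape a real unit test for the change could take.
	cases := []struct {
		node Node
		want Code
	}{
		{Node{"has-zone", map[string]string{"zone": "z1"}}, Success},
		{Node{"no-zone", map[string]string{}}, UnschedulableAndUnresolvable},
	}
	for _, c := range cases {
		got := filterByTopologyKey(c.node, "zone")
		fmt.Printf("%s: got=%v want=%v\n", c.node.Name, got, c.want)
	}
}
```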

pull request commentkubernetes/kubernetes

Change the exception to avoid the cost of preemption

/retest

chendave

comment created time in 6 days

pull request commentkubernetes/kubernetes

Polish unit tests of defaultpreemptio plugin

/retest

Huang-Wei

comment created time in 6 days

pull request commentkubernetes/kubernetes

Fix scheduler preemt function comment

/retest

zhouya0

comment created time in 6 days

Pull request review commentkubernetes/kubernetes

Fix scheduler preemt function comment

```diff
 func (pl *DefaultPreemption) PostFilter(ctx context.Context, state *framework.Cy
 
 // preempt finds nodes with pods that can be preempted to make room for "pod" to
 // schedule. It chooses one of the nodes and preempts the pods on the node and
-// returns 1) the node, 2) the list of preempted pods if such a node is found,
-// 3) A list of pods whose nominated node name should be cleared, and 4) any
-// possible error.
+// returns 1) the node, 2) any possible error.
```

nice catch! Just a soft suggestion: it could be more descriptive, maybe something like "1) the name of the node picked for preemption". Fine as it is, though.

zhouya0

comment created time in 6 days

pull request commentkubernetes/kubernetes

Change the exception to avoid the cost of preemption

/retest

chendave

comment created time in 6 days

pull request commentkubernetes/kubernetes

Change the exception to avoid the cost of preemption

/retest

chendave

comment created time in 6 days

pull request commentkubernetes/kubernetes

Change the exception to avoid the cost of preemption

cc @Huang-Wei @alculquicondor @ahg-g

chendave

comment created time in 6 days

pull request commentkubernetes/kubernetes

Change the exception to avoid the cost of preemption

/sig scheduling /priority important-soon

chendave

comment created time in 6 days

PR opened kubernetes/kubernetes

Change the exception to avoid the cost of preemption

A node whose labels don't contain the required topologyKeys in Constraints cannot be made schedulable by preempting the pods on that node.

The current exception type makes scheduling an infinite loop of pod preemption.

One use case that could easily reproduce the issue is

  • set alwaysCheckAllPredicates to true.
  • one node contains all the required topologyKeys but is failed in predicates such as 'taint'.
  • another node doesn't hold all the required topologyKeys, and thus returns the Unschedulable status code.
  • scheduler will try to preempt the pods on the above node with lower priorities and loop infinitely.

Signed-off-by: Dave Chen dave.chen@arm.com

<!-- Thanks for sending a pull request! Here are some tips for you:

  1. If this is your first time, please read our contributor guidelines: https://git.k8s.io/community/contributors/guide/first-contribution.md#your-first-contribution and developer guide https://git.k8s.io/community/contributors/devel/development.md#development-guide
  2. Please label this pull request according to what type of issue you are addressing, especially if this is a release targeted pull request. For reference on required PR/issue labels, read here: https://git.k8s.io/community/contributors/devel/sig-release/release.md#issuepr-kind-label
  3. Ensure you have added or ran the appropriate tests for your PR: https://git.k8s.io/community/contributors/devel/sig-testing/testing.md
  4. If you want faster PR reviews, read how: https://git.k8s.io/community/contributors/guide/pull-requests.md#best-practices-for-faster-reviews
  5. If the PR is unfinished, see how to mark it: https://git.k8s.io/community/contributors/guide/pull-requests.md#marking-unfinished-pull-requests -->

What type of PR is this?

Uncomment only one /kind <> line, hit enter to put that in a new line, and remove leading whitespace from that line:

/kind bug

What this PR does / why we need it:

Which issue(s) this PR fixes: <!-- Automatically closes linked issue when PR is merged. Usage: Fixes #<issue number>, or Fixes (paste link of issue). If PR is about failing-tests or flakes, please post the related issues/tests in a comment and do not use Fixes --> Fixes #

Special notes for your reviewer:

Does this PR introduce a user-facing change?: <!-- If no, just write "NONE" in the release-note block below. If yes, a release note is required: Enter your extended release note in the block below. If the PR requires additional action from users switching to the new release, include the string "action required".

For more information on release notes see: https://git.k8s.io/community/contributors/guide/release-notes.md -->


Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:

<!-- This section can be blank if this pull request does not require a release note.

When adding links which point to resources within git repositories, like KEPs or supporting documentation, please reference a specific commit and avoid linking directly to the master branch. This ensures that links reference a specific point in time, rather than a document that may change over time.

See here for guidance on getting permanent links to files: https://help.github.com/en/articles/getting-permanent-links-to-files

Please use the following format for linking documentation:

-->


+1 -1

0 comment

1 changed file

pr created time in 6 days

create branch chendave/kubernetes

branch : preemption_topology

created branch time in 6 days

pull request commentkubernetes/kubernetes

Scheduler: remove the misleading comments in `NodeResourcesBalancedAllocation`

Weird, I cannot see this comment on GitHub directly; just a test with my gmail. I was thinking the discussion result had already been included in the original comments, so there is no need to update them any more.

On Wed, Jun 24, 2020 at 2:25 AM Wei Huang notifications@github.com wrote:

@chendave https://github.com/chendave could you update the comments based on our discussion result?


chendave

comment created time in 10 days

pull request commentkubernetes/kubernetes

scheduler: merge Reserve and Unreserve plugins

/cc

need rebase, and the cover message should be updated as well.

adtac

comment created time in 11 days

Pull request review commentkubernetes/kubernetes

Clean unsed code in Merge status

```diff
 func (p PluginToStatus) Merge() *Status {
 		} else if s.Code() == Unschedulable {
 			hasUnschedulable = true
 		}
-		finalStatus.code = s.Code()
```

the comment has explicitly stated the precedence,

// Merge merges the statuses in the map into one. The resulting status code have the following
// precedence: Error, UnschedulableAndUnresolvable, Unschedulable.

While I agree this is a generic function, it might be good to consider updating the comment above and also line 177, so the code would be clearer to read.

lixiang233

comment created time in 11 days
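The documented precedence means the final code is fully determined by the three flags, which is why the per-iteration assignment was dead code. A minimal sketch (toy `Code` type and `merge` helper, not the real framework types):

```go
package main

import "fmt"

// Code is a simplified stand-in for framework status codes.
type Code int

const (
	Success Code = iota
	Error
	Unschedulable
	UnschedulableAndUnresolvable
)

// merge sketches the documented precedence of PluginToStatus.Merge:
// Error > UnschedulableAndUnresolvable > Unschedulable. The result depends
// only on which flags were seen, never on iteration order.
func merge(codes []Code) Code {
	hasError, hasUnresolvable, hasUnschedulable := false, false, false
	for _, c := range codes {
		switch c {
		case Error:
			hasError = true
		case UnschedulableAndUnresolvable:
			hasUnresolvable = true
		case Unschedulable:
			hasUnschedulable = true
		}
	}
	switch {
	case hasError:
		return Error
	case hasUnresolvable:
		return UnschedulableAndUnresolvable
	case hasUnschedulable:
		return Unschedulable
	}
	return Success
}

func main() {
	// Error always wins, regardless of what else is present.
	fmt.Println(merge([]Code{Unschedulable, Error}) == Error)
	// UnschedulableAndUnresolvable beats plain Unschedulable.
	fmt.Println(merge([]Code{Unschedulable, UnschedulableAndUnresolvable}) == UnschedulableAndUnresolvable)
}
```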

pull request commentkubernetes/kubernetes

Scheduler: remove the misleading comments in `NodeResourcesBalancedAllocation`

@chendave Sry for the late response. This plugin wasn't authored by me actually...

Based on my understanding, how NodeResourcesBalancedAllocation scores just depends on the existing Pods' total cpu/mem requests vs. the incoming Pod's cpu/mem request - either the incoming Pod makes the node's cpu/mem ratio more balanced or more skewed. It's not related to whether NodeResourcesLeastAllocated or NodeResourcesMostAllocated is enabled - those 2 plugins score based on the absolute usage of cpu/mem. I can easily give examples of NodeResourcesBalancedAllocation behaving orthogonally or tangentially with either of those 2 plugins.

So I agree that the comment is problematic and we need to fix the comment.

Fell back to removing those problematic doc comments only.

chendave

comment created time in 11 days

push eventchendave/kubernetes

David Ashpole

commit sha fca84c02bb5a089d071e608a200b7dc5dc79d074

add dashpole as kubelet approver

view details

Aresforchina

commit sha 2293b473464a78877a57bdfff5a2b0f11c3283bf

add some comments for const variable

view details

Kewei Ma

commit sha 34fce9faee46e38adf55255a77188120f2567d03

Fix a comment in job_controller

view details

Philipp Stehle

commit sha ff69810f1a5131855a0b41dfa1542b3f2a70772c

Fix: UpdateStrategy.RollingUpdate.Partition is lost when UpdateStrategy.Type is not set

view details

Lukasz Szaszkiewicz

commit sha fbb1f23735aa26c61305ff1337a02a8add830142

kube-aggregator: changes the name of aggregator_unavailable_apiservice_count metrics

view details

Odin Ugedal

commit sha 65e9b6099d0a15fab9a1549c24bb57b868b2df59

Add odinuge to sig-node-reviewers

view details

mattjmcnaughton

commit sha f5080850fc7fd1997cb8c0be32a3f4e0e26b9c23

Delete TODO in `image_gc_manager`

I think the TODO here may have actually been unnecessary. There isn't a ton of interest around merging https://github.com/kubernetes/kubernetes/pull/87425, which contains a fix. Delete the TODO so we don't devote time to working on this area in the future.

view details

YuikoTakada

commit sha a80564dbdd87f088be09e516e4a2f69e78bfd42f

Fix non-ascii characters in pkg/kubelet/qos/doc.go

view details

Sergey Yedrikov

commit sha e5370afba2842b844699b239f3f14306f9923eb6

Fix for kubectl issue 834: "kubectl help api-resources --sort-by" text mentions nodes, not API resources

view details

Laszlo Janosi

commit sha 1c393c73a63fedbcd041f21cc1a9a59de83167cb

Change SCTPSupport default value to true

view details

Łukasz Osipiuk

commit sha 02915ef1797681d2f51019a46989e9ec597d294a

Remove endpoints RBAC for Cluster Autoscaler

view details

Eric Mountain

commit sha 22e0ee768bfaa56dac511759207ac0c42a33b545

Removes container RefManager

view details

Odin Ugedal

commit sha 2830827442358235a729b39e5eed8ec6cda29e12

Add support for removing unsupported huge page sizes

When kubelet is restarted, it will now remove the resources for huge page sizes no longer supported. This is required when:
- node disables huge pages
- changing the default huge page size in older versions of linux (because it will then only support the newly set default).
- Software updates that change what sizes are supported (eg. by changing boot parameters).

view details

Odin Ugedal

commit sha 8b6160a3679bd5c040d05b3e9d05a4a446f8d291

Add support for stopping kubelet in node-e2e

This makes it possible to stop the kubelet, do some work, and then start it again.

view details

Odin Ugedal

commit sha a233b9aab0d2ea595a08000d1c94564af8cd8e94

Add verbose message when more than one kubelet is running

view details

Benjamin Danon

commit sha 77ca434ec378d607fee83dc790eb8981d5beb4e7

Fix the newName field name in the example

view details

Benjamin Danon

commit sha c39c64ffdaf677d09c412e7447b5bf400394068b

Fix the newName field name in the page

view details

Keerthan Reddy,Mala

commit sha aae8a2847aa4831b4e8514ca061d391b3b163bcd

Check for sandboxes before deleting the pod from apiserver

view details

Keerthan Reddy,Mala

commit sha 1e42737e5819f586ccc628fec567d75bc669a5da

add unit tests

view details

Keerthan Reddy,Mala

commit sha c24349e9f288d8789046a2db125d6a60807e7b41

update the build file for bazel

view details

push time in 11 days

Pull request review commentkubeedge/kubeedge

EdgeMesh: support container level proxy

```diff
 func (esd *EdgeServiceDiscovery) FindMicroServiceInstances(consumerID, microServ
 
 	// gen
 	var microServiceInstances []*registry.MicroServiceInstance
-	var hostPort int32
-	// all pods share the same hostport, get from pods[0]
-	if pods[0].Spec.HostNetwork {
-		// host network
-		hostPort = int32(targetPort)
-	} else {
-		// container network
-		for _, container := range pods[0].Spec.Containers {
-			for _, port := range container.Ports {
-				if port.ContainerPort == int32(targetPort) {
-					hostPort = port.HostPort
-				}
-			}
-		}
```

I'd like to see the reason behind these changes, either in the commit message or in an issue filed somewhere.

daixiang0

comment created time in 12 days

delete branch chendave/kubernetes

delete branch : skiptopology

delete time in 12 days

Pull request review commentkubeedge/kubeedge

Lint: enable errcheck linter

```diff
 func (p *Proxier) readAndCleanRule() {
 	for scan.Scan() {
 		serverString := scan.Text()
 		if strings.Contains(serverString, "-o") {
-			p.iptables.DeleteRule(utiliptables.TableNAT, utiliptables.ChainOutput, strings.Split(serverString, " ")...)
+			_ = p.iptables.DeleteRule(utiliptables.TableNAT, utiliptables.ChainOutput, strings.Split(serverString, " ")...)
```

I don't understand these changes, nor the other occurrences.

daixiang0

comment created time in 15 days
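For context on the `_ = ...` pattern the errcheck linter enforces: the blank assignment marks an error as deliberately ignored, so the linter passes without changing behavior. A toy sketch (hypothetical `deleteRule` helper, not the real iptables wrapper):

```go
package main

import (
	"fmt"
	"strings"
)

// deleteRule stands in for an error-returning cleanup call such as
// iptables.DeleteRule (hypothetical helper for illustration only).
func deleteRule(rule string) error {
	if !strings.Contains(rule, "-o") {
		return fmt.Errorf("no such rule: %s", rule)
	}
	return nil
}

func main() {
	// Before: deleteRule(...) alone - errcheck reports an unchecked error.
	// After: the blank assignment documents a deliberate ignore.
	_ = deleteRule("-A OUTPUT -o eth0")

	// Logging is usually preferable when the error is actionable:
	if err := deleteRule("bogus"); err != nil {
		fmt.Println("cleanup failed:", err)
	}
}
```

This is why many of the diffs in this PR either add `_ =` (best-effort calls) or wrap the call in `if err := ...; err != nil` with a log line (actionable errors).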

pull request commentkubeedge/kubeedge

Lint: enable errcheck linter

btw, lint check is failed as well.

daixiang0

comment created time in 15 days

Pull request review commentkubeedge/kubeedge

Lint: enable errcheck linter

```diff
 func newTunnelServer() *TunnelServer {
 			ReadBufferSize:   1024,
 			Error: func(w http.ResponseWriter, r *http.Request, status int, reason error) {
 				w.WriteHeader(status)
-				w.Write([]byte(reason.Error()))
+				_, _ = w.Write([]byte(reason.Error()))
```

revert this change?

daixiang0

comment created time in 15 days

push eventchendave/initrepo

Dave Chen

commit sha ea532e7901e642cc24e003b94bc29abd6890a69d

test account

view details

push time in 15 days

push eventchendave/kubeedge

Dave Chen

commit sha f551a94d7bcdbff7f20d4d1b7380f4cfd815c6ec

WIP: test for e2e test.

view details

push time in 15 days

pull request commentkubernetes/kubernetes

Skip `PreScore` when the `TopologySpreadConstraints` is specified

I am not sure I understand what perf improvement you see, are you referring to the 2364095375 vs 2350705128? that is within the margin of error! nothing to investigate.

gotcha, thank you!

chendave

comment created time in 15 days

Pull request review commentkubeedge/kubeedge

Lint: enable errcheck linter

 func StartServer(address string) { 	})  	klog.Info("start unix domain socket server")-	uds.StartServer()+	if err := uds.StartServer(); err != nil {+		klog.Errorf("failed to start uds server: %v", err)

I am not sure about this, maybe fatal?

cc @fisherxu @kevin-wangzefeng what do you think?

daixiang0

comment created time in 15 days

Pull request review commentkubeedge/kubeedge

Lint: enable errcheck linter

```diff
 func GenerateToken() error {
 	caHashToken := strings.Join([]string{caHash, tokenString}, ".")
 	// save caHashAndToken to secret
 	err = CreateTokenSecret([]byte(caHashToken))
-	if err != nil {
-		return fmt.Errorf("Failed to create tokenSecret, err: %v", err)
+	if err := CreateTokenSecret([]byte(caHashToken)); err != nil {
+		klog.Fatalf("Failed to create the ca token  for edgecore register, err: %v", err)
 	}
 
 	t := time.NewTicker(time.Hour * 12)
 	go func() {
 		for {
 			<-t.C
 			refreshedCaHashToken := refreshToken()
-			CreateTokenSecret([]byte(refreshedCaHashToken))
+			if err := CreateTokenSecret([]byte(refreshedCaHashToken)); err != nil {
+				klog.Fatalf("failed to craete the ca token for edgecore register, err: %v", err)
```

s/craete/create

daixiang0

comment created time in 15 days

Pull request review commentkubeedge/kubeedge

Lint: enable errcheck linter

```diff
 func StartHTTPServer() {
 // getCA returns the caCertDER
 func getCA(w http.ResponseWriter, r *http.Request) {
 	caCertDER := hubconfig.Config.Ca
-	w.Write(caCertDER)
+	_, _ = w.Write(caCertDER)
 }
 
 //electionHandler returns the status whether the cloudcore is ready
 func electionHandler(w http.ResponseWriter, r *http.Request) {
 	checker := hubconfig.Config.Checker
 	if checker.Check(r) != nil {
 		w.WriteHeader(http.StatusNotFound)
-		w.Write([]byte(fmt.Sprintf("Cloudcore is not ready")))
+		if _, err := w.Write([]byte(fmt.Sprintf("Cloudcore is not ready"))); err != nil {
+			klog.Errorf("failed to write, err: %v", err)
+		}
 	} else {
 		w.WriteHeader(http.StatusOK)
-		w.Write([]byte(fmt.Sprintf("Cloudcore is ready")))
+		if _, err := w.Write([]byte(fmt.Sprintf("Cloudcore is ready"))); err != nil {
+			klog.Errorf("failed to write, err: %v", err)
```

better to be more verbose; this is helpful for debugging, but I am good as is.

daixiang0

comment created time in 15 days

Pull request review commentkubeedge/kubeedge

Lint: enable errcheck linter

```diff
 func (mh *MessageHandle) saveSuccessPoint(msg *beehiveModel.Message, info *model
 		objectSyncName := synccontroller.BuildObjectSyncName(info.NodeID, resourceUID)
 
 		if msg.GetOperation() == beehiveModel.DeleteOperation {
-			nodeStore.Delete(msg)
+			if err := nodeStore.Delete(msg); err != nil {
+				return
```

ditto.

daixiang0

comment created time in 15 days

Pull request review commentkubeedge/kubeedge

Lint: enable errcheck linter

```diff
 func edgeCoreClientCert(w http.ResponseWriter, r *http.Request) {
 	authorizationHeader := r.Header.Get("authorization")
 	if authorizationHeader == "" {
 		w.WriteHeader(http.StatusUnauthorized)
-		w.Write([]byte(fmt.Sprintf("Invalid authorization token")))
+		_, _ = w.Write([]byte(fmt.Sprintf("Invalid authorization token")))
```

I don't understand changes like this and the other occurrences.

daixiang0

comment created time in 15 days

Pull request review commentkubeedge/kubeedge

Lint: enable errcheck linter

```diff
 func StartHTTPServer() {
 // getCA returns the caCertDER
 func getCA(w http.ResponseWriter, r *http.Request) {
 	caCertDER := hubconfig.Config.Ca
-	w.Write(caCertDER)
+	_, _ = w.Write(caCertDER)
 }
 
 //electionHandler returns the status whether the cloudcore is ready
 func electionHandler(w http.ResponseWriter, r *http.Request) {
 	checker := hubconfig.Config.Checker
 	if checker.Check(r) != nil {
 		w.WriteHeader(http.StatusNotFound)
-		w.Write([]byte(fmt.Sprintf("Cloudcore is not ready")))
+		if _, err := w.Write([]byte(fmt.Sprintf("Cloudcore is not ready"))); err != nil {
+			klog.Errorf("failed to write, err: %v", err)
```

write what? Maybe "failed to write http response with xxx" or just "write http response error".

daixiang0

comment created time in 15 days

Pull request review commentkubeedge/kubeedge

Lint: enable errcheck linter

```diff
 func StartHTTPServer() {
 // getCA returns the caCertDER
 func getCA(w http.ResponseWriter, r *http.Request) {
 	caCertDER := hubconfig.Config.Ca
-	w.Write(caCertDER)
+	_, _ = w.Write(caCertDER)
```

can you explain why this change is needed?

daixiang0

comment created time in 15 days

Pull request review commentkubeedge/kubeedge

Lint: enable errcheck linter

```diff
 func (mh *MessageHandle) handleMessage(nodeQueue workqueue.RateLimitingInterface
 	if msgType == "listMessage" {
 		mh.send(hi, info, msg)
 		// delete successfully sent events from the queue/store
-		nodeStore.Delete(msg)
+		if err := nodeStore.Delete(msg); err != nil {
+			return
```

same here.

daixiang0

comment created time in 15 days

Pull request review commentkubeedge/kubeedge

Lint: enable errcheck linter

```diff
 func (q *ChannelMessageQueue) addMessageToQueue(nodeID string, msg *beehiveModel
 		}
 	}
 
-	nodeStore.Add(msg)
+	if err := nodeStore.Add(msg); err != nil {
+		return
```

Prefer logging the error message, or else we won't know what happened.

daixiang0

comment created time in 15 days

pull request commentkubernetes/kubernetes

Skip `PreScore` when the `TopologySpreadConstraints` is specified

Thanks @kubernetes/enhancements

I think the trivial perf improvement is due to the workload characteristics - the SchedulingBasic suite doesn't have pods with TopologySpreadConstraints specified. So this sort of proves we don't need to backport it as it doesn't hit the critical path. But this PR is still valuable for workloads with TopologySpreadConstraints specified.

Obviously, SchedulingBasic doesn't suit this case; I used it just because of my misunderstanding when I first read the code. But I still don't understand why the perf improvements show the difference; will take a look later.

chendave

comment created time in 15 days

pull request commentkubernetes/kubernetes

Skip `PreScore` when the `TopologySpreadConstraints` is specified

DefaultPodTopologySpread doesn't take into account default constraints and needn't score when the pod.Spec.TopologySpreadConstraints is not specified.

Should be: "when the pod.Spec.TopologySpreadConstraints is specified"

Note that this score function is not configured at all if default constraints are specified and the feature flag is enabled:

https://github.com/kubernetes/kubernetes/blob/d8febccacfc9d51a017be9531247689e0e36df04/pkg/scheduler/algorithmprovider/registry.go#L174

yes, I am still based on the code from a couple of days before, so this is not shown in my IDE, thank you!

chendave

comment created time in 16 days
