Aldo Culquicondor alculquicondor Canada

alculquicondor/aco_vrp 4

Solving the Capacitated Vehicle Routing Problem with Ant Colony Optimization

alculquicondor/GICSDR-PyLearning 3

Repo for the GICS Dennis Ritchie Python classes

alculquicondor/GICSDR-AndroidLearning 2

Demo Project for the GICS Dennis Ritchie Android classes

aculquicondor/HttpUdp 1

Proxies for doing HTTP over UDP

alculquicondor/Canicas 1

A simple marble balls game

adolfo1994/unipal-app 0

Repository for the Unipal (unipal.ndev.tech) prototype app

adolfo1994/unipal-landing 0

Landing page for Unipal

Pull request review commentkubernetes/website

Docs for new default PodTopologySpread functionality and gate

 profiles:
       args:
         defaultConstraints:
           - maxSkew: 1
-            topologyKey: failure-domain.beta.kubernetes.io/zone
+            topologyKey: topology.kubernetes.io/zone
             whenUnsatisfiable: ScheduleAnyway
 ```
 
 {{< note >}}
 The score produced by default scheduling constraints might conflict with the score produced by the
-[`DefaultPodTopologySpread` plugin](/docs/reference/scheduling/config/#scheduling-plugins).
+[`SelectorSpread` plugin](/docs/reference/scheduling/config/#scheduling-plugins).
 It is recommended that you disable this plugin in the scheduling profile when using default constraints for `PodTopologySpread`.
 {{< /note >}}
 
+#### Internal default constraints
+
+{{< feature-state for_k8s_version="v1.19" state="alpha" >}}
+
+When the feature gate `DefaultPodTopologySpreadPlugin` is enabled,

Oops. Thanks

alculquicondor

comment created time in 4 hours

push eventalculquicondor/website

Aldo Culquicondor

commit sha e02160c63c5d1f7e2881b39d15b825313a30ceaf

Docs for new default PodTopologySpread functionality and gate Signed-off-by: Aldo Culquicondor <acondor@google.com>

view details

push time in 4 hours

Pull request review commentkubernetes/website

scheduling-framework.md: update Reserve and Unreserve descriptions

 Plugins wishing to perform "pre-reserve" work should use the NormalizeScore extension point.
 {{< /note >}}
 
-### Reserve
-
-This is an informational extension point. Plugins which maintain runtime state
-(aka "stateful plugins") should use this extension point to be notified by the
-scheduler when resources on a node are being reserved for a given Pod. This
-happens before the scheduler actually binds the Pod to the Node, and it exists
-to prevent race conditions while the scheduler waits for the bind to succeed.
-
-This is the last step in a scheduling cycle. Once a Pod is in the reserved
-state, it will either trigger [Unreserve](#unreserve) plugins (on failure) or
-[PostBind](#post-bind) plugins (on success) at the end of the binding cycle.
+### Reserve {#reserve}
+
+A Reserve plugin implements two methods, namely Reserve and Unreserve, to serve
+as informational extension points. Plugins which maintain runtime state (aka

let's say something like (@Huang-Wei for thoughts):

The Reserve extension point backs two informational scheduling phases: Reserve and Unreserve. Plugins implement two methods, Reserve and Unreserve, that get called during these phases, respectively.

Sorry for the back and forth, but I don't find an easy way to express the fact that Unreserve is not an extension point.
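
To make the contract being discussed concrete, here is a minimal, self-contained sketch: one plugin type exposing both a Reserve and an Unreserve call, with Unreserve rolling back whatever Reserve recorded on failure. The interface and names below are simplified stand-ins, not the actual scheduler framework types.

```go
package main

import (
	"fmt"
	"sync"
)

// ReservePlugin is a hypothetical, simplified mirror of the contract above:
// one plugin serves both the Reserve and Unreserve scheduling phases.
type ReservePlugin interface {
	Name() string
	Reserve(podUID, nodeName string) error // called when node resources are reserved for a Pod
	Unreserve(podUID, nodeName string)     // called on failure, to roll back plugin state
}

// statefulPlugin keeps runtime state about reservations, which is exactly the
// kind of plugin these informational calls exist for.
type statefulPlugin struct {
	mu       sync.Mutex
	reserved map[string]string // podUID -> nodeName
}

func (p *statefulPlugin) Name() string { return "example-stateful-plugin" }

func (p *statefulPlugin) Reserve(podUID, nodeName string) error {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.reserved[podUID] = nodeName
	return nil
}

func (p *statefulPlugin) Unreserve(podUID, nodeName string) {
	p.mu.Lock()
	defer p.mu.Unlock()
	delete(p.reserved, podUID) // roll back whatever Reserve recorded
}

func main() {
	var plugin ReservePlugin = &statefulPlugin{reserved: map[string]string{}}
	_ = plugin.Reserve("pod-123", "node-a")
	plugin.Unreserve("pod-123", "node-a") // e.g. the binding cycle failed
	fmt.Println("reserve/unreserve round trip done")
}
```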

adtac

comment created time in 6 hours

push eventalculquicondor/website

Enrique Medina Montenegro

commit sha edad9d9e8d669d79622d8765a64d8000e775891d

Spanish Translation

view details

Cheikhrouhou ines

commit sha a3f977aaec05b09b5fe20c749703c7ece1e8546c

translate service account fr

view details

Enrique Medina Montenegro

commit sha eab4f2199e6c8170fbb3bc9a7f175e8f9fd91be8

Spanish Translation

view details

Cheikhrouhou ines

commit sha 496fad77bbf9a1362c332fdfca26f7c876532c02

renaming and fix for service account fr

view details

Zhang Yong

commit sha 65fdb62395925e7cf70c5118e3dfc522ae05b7e5

Localize “email address” placeholder on home page

view details

icheikhrouhou

commit sha 40496c2b7d8e2dc9f47f2da21c1ede2db6b93420

fix translate configure service account

view details

Fernando Karnagi

commit sha 5a9aeb7d269acd4a20a70eb6056071be317a81b2

Added steps for normal user authentication

view details

Fernando Karnagi

commit sha 7bf6547bb0665178ab80b6c330638c2d750c1e8f

Updated instruction on Create Certificate Request Kubernetes Object

view details

Fernando Karnagi

commit sha eefe9b19bf7ada26762fbce603d8591aee5d68f2

Various updates

view details

Fernando Karnagi

commit sha b2d21616d8c527c7ad3c55d904f96b427a2af654

updated mistypo

view details

lou-lan

commit sha bdc18d99f053aa0c1a7db2d127cc8a59cc73c032

Fix minikube image 'project:hello-minikube-zero-install' has been suspended.

view details

dupengcheng

commit sha 4afaf21c6c045855293f6868c23b888455b7b6cf

Fix zh language misspell

view details

bryan

commit sha c3ea07d58e94dbdacbba6bced5becb862991e5aa

fix some format errors

view details

Alex Lin

commit sha 05b5e8d9f2c7dad9edfdccec75bbe3f4fad36f9f

Add info to `aws-load-balancer-security-groups` annotation Add warning to avoid sharing security groups between multiple services in the `service.beta.kubernetes.io/aws-load-balancer-security-groups` annotation

view details

Weiping Cai

commit sha cca324e4bd1949b4218ff446aa598e787172b84d

deprecated kubectl run command flag replicas for zh Signed-off-by: Weiping Cai <weiping.cai@daocloud.io>

view details

Enrique Medina Montenegro

commit sha 041daa48a3fa2fbbfc8db1414af48bd5019a955a

Merge branch 'master' of https://github.com/kubernetes/website into es/docs/concepts_workloads_controllers/jobs-run_and_replicaset

view details

ZhiFeng1993

commit sha 7512752ad0a68e1da6af6dc04a07994fbddba44b

Update installation info for kubelet and kubernetes-cni

view details

Hector Sam

commit sha 841db869b22296feeb711a9e82a0198947822899

Lang: FR, Removing announcement and deprecationwarning

view details

Jerry Park

commit sha 373e4cdfde527447f560ed07634dd723ab4d5756

Change the words for limit/request on the glossary

view details

Qiming Teng

commit sha f21baa1eb43053963d6ca146fe7cd9bde43e30d0

[zh] Fix typo in sample yaml

view details

push time in 6 hours

Pull request review commentkubernetes/website

scheduling-framework.md: update Reserve and Unreserve descriptions

 NormalizeScore extension point.
 
 ### Reserve {#reserve}
 
-This is an informational extension point. Plugins which maintain runtime state
-(aka "stateful plugins") should use this extension point to be notified by the
-scheduler when resources on a node are being reserved for a given Pod. This
-happens before the scheduler actually binds the Pod to the designated node, and
-it exists to prevent race conditions while the scheduler waits for the bind to
-succeed. This is the penultimate step of the scheduling cycle.
-
-Each enabled reserve plugin may individually succeed or fail. If a reserve
-plugin fails, subsequent reserve plugins are not executed. All plugins must
-succeed for the extension point to succeed. If the extension point succeeds, the
-rest of the scheduling cycle and the binding cycle are executed. If the
-extension point fails due to one or more plugins failing, the
-[Unreserve](#unreserve) extension point is triggered.
+A reserve plugin implements two methods, namely Reserve and Unreserve, to serve
+as informational extension points. Plugins which maintain runtime state (aka
+"stateful plugins") should use these extension points to be notified by the
+scheduler when resources on a node are being reserved and unreserved for a given
+Pod.
+
+The Reserve extension point, which is the penultimate step of the scheduling
+cycle, is triggered before the scheduler actually binds the Pod to the
The Reserve phase happens before the scheduler actually binds the Pod to the

I think there is no need to say penultimate, given that it can be deduced from the description of Permit.

adtac

comment created time in 6 hours

Pull request review commentkubernetes/website

scheduling-framework.md: update Reserve and Unreserve descriptions

 NormalizeScore extension point.
 
 ### Reserve {#reserve}
 
-This is an informational extension point. Plugins which maintain runtime state
-(aka "stateful plugins") should use this extension point to be notified by the
-scheduler when resources on a node are being reserved for a given Pod. This
-happens before the scheduler actually binds the Pod to the designated node, and
-it exists to prevent race conditions while the scheduler waits for the bind to
-succeed. This is the penultimate step of the scheduling cycle.
-
-Each enabled reserve plugin may individually succeed or fail. If a reserve
-plugin fails, subsequent reserve plugins are not executed. All plugins must
-succeed for the extension point to succeed. If the extension point succeeds, the
-rest of the scheduling cycle and the binding cycle are executed. If the
-extension point fails due to one or more plugins failing, the
-[Unreserve](#unreserve) extension point is triggered.
+A reserve plugin implements two methods, namely Reserve and Unreserve, to serve
A Reserve plugin implements two methods, namely Reserve and Unreserve, to serve
adtac

comment created time in 6 hours

Pull request review commentkubernetes/website

scheduling-framework.md: update Reserve and Unreserve descriptions

 to clean up associated resources.
 
 ### Unreserve

so, UnReserve is a scheduling phase, but not an extension point.

adtac

comment created time in 7 hours

Pull request review commentkubernetes/website

scheduling-framework.md: update Reserve and Unreserve descriptions

 to clean up associated resources.
 
 ### Unreserve

"extension" implies that you can change it. I agree with Wei. There is just no ~easy~ obvious way to express the difference.

adtac

comment created time in 7 hours

Pull request review commentkubernetes/website

scheduling-framework.md: update Reserve and Unreserve descriptions

 to clean up associated resources.
 
 ### Unreserve

Maybe we can call them "calls" of the "Reserve extension point"

adtac

comment created time in 7 hours

pull request commentkubernetes/website

Add Scheduling Configuration reference doc

@savitharaghunathan or @sftim anything else for approval?

alculquicondor

comment created time in 9 hours

issue commentkubernetes/enhancements

Run multiple Scheduling Profiles

Correct, it is finalized for 1.19

alculquicondor

comment created time in 12 hours

pull request commentkubernetes/kubernetes

Add SIG storage owner aliases

/retest

alculquicondor

comment created time in 13 hours

pull request commentkubernetes/kubernetes

Fix ListZonesInRegion() after client BasePath change

/kind bug

jingxu97

comment created time in 13 hours

pull request commentkubernetes/kubernetes

Fix ListZonesInRegion() after client BasePath change

/priority critical-urgent

jingxu97

comment created time in 13 hours

pull request commentkubernetes/kubernetes

Remove DisablePreemption field from KubeSchedulerConfiguration

other than the nits, looks good.

/assign @liggitt for api review

Huang-Wei

comment created time in 13 hours

Pull request review commentkubernetes/kubernetes

Remove DisablePreemption field from KubeSchedulerConfiguration

 func initTest(t *testing.T, nsPrefix string, opts ...scheduler.Option) *testutil
 
 // initTestDisablePreemption initializes a test environment and creates master and scheduler with default
 // configuration but with pod preemption disabled.
 func initTestDisablePreemption(t *testing.T, nsPrefix string) *testutils.TestContext {
+	prof := schedulerconfig.KubeSchedulerProfile{
+		SchedulerName: v1.DefaultSchedulerName,
+		Plugins: &schedulerconfig.Plugins{
+			PostFilter: &schedulerconfig.PluginSet{
+				Disabled: []schedulerconfig.Plugin{
+					{Name: "*"},

Let's use the name of the plugin, in case we have another default PostFilter plugin in the future.
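
For illustration, the suggested change might look like the sketch below. It reuses the types from the diff above and assumes the default preemption plugin is registered under the name `DefaultPreemption` (an assumption; check the plugin's actual registered name).

```go
prof := schedulerconfig.KubeSchedulerProfile{
	SchedulerName: v1.DefaultSchedulerName,
	Plugins: &schedulerconfig.Plugins{
		PostFilter: &schedulerconfig.PluginSet{
			// Disable the preemption plugin by name rather than with "*",
			// so any future default PostFilter plugin stays enabled.
			Disabled: []schedulerconfig.Plugin{
				{Name: "DefaultPreemption"},
			},
		},
	},
}
```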

Huang-Wei

comment created time in 13 hours

Pull request review commentkubernetes/kubernetes

Remove DisablePreemption field from KubeSchedulerConfiguration

 func (sched *Scheduler) scheduleOne(ctx context.Context) {
 		// into the resources that were preempted, but this is harmless.
 		nominatedNode := ""
 		if fitError, ok := err.(*core.FitError); ok {
-			if sched.DisablePreemption || !prof.HasPostFilterPlugins() {
+			if !prof.HasPostFilterPlugins() {
 				klog.V(3).Infof("Pod priority feature is not enabled or preemption is disabled by scheduler configuration." +

Update the log line here. It doesn't seem like Pod priority affect this anymore.

Huang-Wei

comment created time in 13 hours

push eventalculquicondor/kubernetes

Caleb Woodbine

commit sha 1bdb854e7ebaecb7311381cb068349824b3c94ae

Add resource deleting if there wasn't a delete watch event found

view details

Caleb Woodbine

commit sha b9c934102b3a355fcd8af1603fcf40ed422cd269

Update default retrywatcher resource version

view details

Stephen Heywood

commit sha cd2ad2b98652a653dbb0be941a6f16afd9745f03

Removing extra boilerplate from test

view details

Stephen Heywood

commit sha 7622a794da8ba0fde1c1fe749011df5b32aa2760

Use polling while deleting the collection of events

view details

Caleb Woodbine

commit sha 9a77a00c7c85aeb506327ba0969123d2f0fc6ae1

Fix formatting

view details

Morgan Bauer

commit sha a9b999c00d620ab826644d6679ad3d4a82cc8656

remove out of date test config

view details

Stephen Heywood

commit sha b3baef5e052c3ef61badacee984d52d8b859c2a8

Fix gofmt issues

view details

Stephen Heywood

commit sha ecb68742e097a9ba02e906cdf8d6a40c09f174df

Fix golint issue

view details

Caleb Woodbine

commit sha 250bb35041ef885927e4d930a575946a522cdf22

Update documentation

view details

Caleb Woodbine

commit sha 1db0ca74a931edc10f82a856fe205359a888031a

Update correct resource version used, watch retry function to not close

view details

ZhiFeng1993

commit sha 4ad6ae83ae78c9127e862e6cccb568ec6902b025

Add usage in some hack/update scripts

view details

Wojciech Tyczynski

commit sha 7787ebc85b87eb7bc00f234f4e52eed3d987d5dd

Revert "Revert "Rely on default watch cache capacity and ignore its requested size""

view details

Caleb Woodbine

commit sha cd314c193c114efd9380e25989c51fb3c89e30d0

Fix linting

view details

Gaurav Singh

commit sha 6cc75ee72042190beeac30401e2484274ab42655

apiserver: remove duplicate imports Signed-off-by: Gaurav Singh <gaurav1086@gmail.com>

view details

tahsinrahman

commit sha 201f869c66391a14c39a73c8745789f24458b99f

Add --logging-format flag for kube-apiserver

view details

SataQiu

commit sha 17f3cd48a54483b4c6b7dc1d742194a1f41daf0a

add '--logging-format' flag to kube-controller-manager Signed-off-by: SataQiu <1527062125@qq.com>

view details

Marcin Maciaszczyk

commit sha e5af792ad29fdbd8f401b5d0d78e8fb0d64c1fe6

Bump Dashboard to v2.0.1

view details

lo24

commit sha cda593e822d2e03f621167f007c183faf5b1d910

fix TestValidateNodeIPParam: break was being used instead of continue

view details

Robin Cernin

commit sha f41cc12d35223ed0a93775c60721da19bc5e02b7

Bump up MacOS RAM requirement to 8GB As discussed in https://github.com/kubernetes/kubernetes/issues/11852#issuecomment-387868165 We need at least 8 GB RAM assigned to Docker for Mac. Otherwise the build will likely fail. Bumping this requirement saves time people who assigns 4.5 GB RAM and fails. Signed-off-by: Robin Cernin <cerninr@gmail.com>

view details

Gaurav Singh

commit sha adcd8909fbbfd399af3983ced92fc743347e96fe

cleanup: Remove_unnecessary_Sprintfs Removed unused fmt Signed-off-by: Gaurav Singh <gaurav1086@gmail.com>

view details

push time in a day

issue commentkubernetes/kubernetes

Multizone gce tests are failing

/sig storage

msau42

comment created time in a day

pull request commentkubernetes/kubernetes

Fix ListZonesInRegion() after client BasePath change

/lgtm

jingxu97

comment created time in a day

pull request commentkubernetes/kubernetes

selectorspread: access listers in plugin instantiation

/retest

adtac

comment created time in a day

issue commentkubernetes/kubernetes

Multizone gce tests are failing

/sig scheduling

msau42

comment created time in a day

issue commentkubernetes/kubernetes

Multizone gce tests are failing

scheduler fix is almost merged #92840

msau42

comment created time in a day

pull request commentkubernetes/kubernetes

selectorspread: access listers in plugin instantiation

/retest

adtac

comment created time in a day

pull request commentkubernetes/kubernetes

Add SIG storage owner aliases

/retest

alculquicondor

comment created time in a day

pull request commentkubernetes/kubernetes

selectorspread: access listers in plugin instantiation

/lgtm

adtac

comment created time in a day

push eventalculquicondor/website

Aldo Culquicondor

commit sha 9ed3f427a2e28dffe6635c26007ec29b4bd68769

Review comments 2 Signed-off-by: Aldo Culquicondor <acondor@google.com>

view details

push time in 2 days

Pull request review commentkubernetes/website

Add Scheduling Configuration reference doc

 that are not enabled by default:
   Service across nodes.
   Extension points: `PreFilter`, `Filter`, `Score`.
 
-## Multiple profiles
+### Multiple profiles
+
+You can configure `kube-scheduler` to run more than one profile.
+Each profile has an associated scheduler name and can have a different set of
+plugins configured in its [extension points](#extension-points). For example:
+
+```yaml
+apiVersion: kubescheduler.config.k8s.io/v1beta1
+kind: KubeSchedulerConfiguration
+profiles:
+  - schedulerName: default-scheduler
+  - schedulerName: no-scoring-scheduler
+    plugins:
+      score:
+        disabled:
+        - name: '*'
+```

Good suggestion

alculquicondor

comment created time in 2 days

pull request commentkubernetes/kubernetes

Add SIG storage owner aliases

/hold cancel squashed

alculquicondor

comment created time in 2 days

push eventalculquicondor/kubernetes

Aldo Culquicondor

commit sha 27ec356d76390c40029333a6b53fcefda0b9a8b6

Add SIG storage owner aliases And give ownership to pkg/scheduler/framework/plugins/volumebinding Signed-off-by: Aldo Culquicondor <acondor@google.com> Change-Id: I4bd89b1745a2be0e458601056ab905bdd6692195

view details

push time in 2 days

pull request commentkubernetes/kubernetes

Add SIG storage owner aliases

/hold let me squash

alculquicondor

comment created time in 2 days

pull request commentkubernetes/kubernetes

Add label selector value validation

+1 to fixing this in kube-apiserver side. However, deferring to API-reviewers to judge if this breaking change is reasonable.

If this is not, in the scheduler side, the only thing we can do is mark the pods as unschedulable, which is also a breaking change (pods that used to get scheduled, don't get scheduled).

Is this the only instance of label selectors not being validated? What about the .spec.selector of the Deployment?

damemi

comment created time in 2 days

issue commentkubernetes/kubernetes

Init container resource usage not properly prioritized in scheduler

Correct. Setting appropriate requests/limits that maintain good node usage is the user's responsibility.

Taking the max(initContainers, sum(containers)) is the documented behavior.
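
A small, self-contained sketch of that documented rule for a single resource (values in milliCPU; the function name and types are illustrative, not the scheduler's actual helpers):

```go
package main

import "fmt"

// effectiveCPURequest illustrates the rule for one resource: the effective
// request of a Pod is max(max over init containers, sum over regular
// containers). The same rule applies per resource.
func effectiveCPURequest(initRequests, containerRequests []int64) int64 {
	var maxInit, sumContainers int64
	for _, r := range initRequests {
		if r > maxInit {
			maxInit = r
		}
	}
	for _, r := range containerRequests {
		sumContainers += r
	}
	if maxInit > sumContainers {
		return maxInit
	}
	return sumContainers
}

func main() {
	// An init container asks for 2000m while the regular containers ask for
	// 500m + 300m: the node must fit 2000m even though only 800m is used at
	// steady state.
	fmt.Println(effectiveCPURequest([]int64{2000}, []int64{500, 300})) // 2000
}
```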

smarterclayton

comment created time in 2 days

pull request commentkubernetes/kubernetes

add args for NodeResourcesFit plugin

/hold cancel /lgtm

angao

comment created time in 2 days

pull request commentkubernetes/kubernetes

Add SIG storage owner aliases

We didn't hear back from tagged people. Anything else from approvers?

alculquicondor

comment created time in 2 days

pull request commentkubernetes/website

Add Scheduling Configuration reference doc

@ahg-g mind giving lgtm?

alculquicondor

comment created time in 2 days

Pull request review commentkubernetes/website

Add Scheduling Configuration reference doc

 that are not enabled by default:
   Service across nodes.
   Extension points: `PreFilter`, `Filter`, `Score`.
 
-## Multiple profiles
+### Multiple profiles
+
+`kube-scheduler` can be configured to

Done

alculquicondor

comment created time in 2 days

Pull request review commentkubernetes/website

Add Scheduling Configuration reference doc

 ---
-title: Scheduling Profiles
+title: Scheduler Configuration
 content_type: concept
 weight: 20
 ---
 
+{{< feature-state for_k8s_version="v1.19" state="beta" >}}
+
+The `KubeSchedulerConfiguration` is a configuration API for `kube-scheduler`
+that can be provided as a file via `--config` command line flag.
+
 <!-- overview -->
 
-{{< feature-state for_k8s_version="v1.18" state="alpha" >}}
+<!-- body -->
 
-A scheduling Profile allows you to configure the different stages of scheduling
-in the {{< glossary_tooltip text="kube-scheduler" term_id="kube-scheduler" >}}.
-Each stage is exposed in a extension point. Plugins provide scheduling behaviors
-by implementing one or more of these extension points.
+## Minimal Configuration
 
-You can specify scheduling profiles by running `kube-scheduler --config <filename>`,
-using the component config APIs
-([`v1alpha1`](https://pkg.go.dev/k8s.io/kube-scheduler@{{< param "fullversion" >}}/config/v1alpha1?tab=doc#KubeSchedulerConfiguration)
-or [`v1alpha2`](https://pkg.go.dev/k8s.io/kube-scheduler@{{< param "fullversion" >}}/config/v1alpha2?tab=doc#KubeSchedulerConfiguration)).
-The `v1alpha2` API allows you to configure kube-scheduler to run
-[multiple profiles](#multiple-profiles).
+A minimal configuration looks as follows:
 
+```yaml
+apiVersion: kubescheduler.config.k8s.io/v1beta1
+kind: KubeSchedulerConfiguration
+clientConnection:
+  kubeconfig: /etc/srv/kubernetes/kube-scheduler/kubeconfig
+```
 
+## Upgrading from `v1alpha2` to `v1beta1` {#beta-changes}

Done

alculquicondor

comment created time in 2 days

Pull request review commentkubernetes/website

Add Scheduling Configuration reference doc

 extension points:
    filtering phase. The scheduler will then select the node with the highest
    weighted scores sum.
 1. `Reserve`: This is an informational extension point that notifies plugins
-   when resources have being reserved for a given Pod.
+   when resources have being reserved for a given Pod. Plugins also implement an
+   `Unreserve` call, that gets called in the case of failure during of after
+   `Reserve`.
 1. `Permit`: These plugins can prevent or delay the binding of a Pod.
 1. `PreBind`: These plugins perform any work required before a Pod is bound.
 1. `Bind`: The plugins bind a Pod to a Node. Bind plugins are called in order
    and once one has done the binding, the remaining plugins are skipped. At
    least one bind plugin is required.
 1. `PostBind`: This is an informational extension point that is called after
    a Pod has been bound.
-1. `UnReserve`: This is an informational extension point that is called if
-   a Pod is rejected after being reserved and put on hold by a `Permit` plugin.
+

See new commit

alculquicondor

comment created time in 2 days

Pull request review commentkubernetes/website

Add Scheduling Configuration reference doc

 ----title: Scheduling Profiles+title: Scheduler Configuration content_type: concept weight: 20 --- +{{< feature-state for_k8s_version="v1.19" state="beta" >}}++The `KubeSchedulerConfiguration` is a configuration API for `kube-scheduler`

Done

alculquicondor

comment created time in 2 days

push eventalculquicondor/website

Aldo Culquicondor

commit sha 34c3537a728a710855664d74bc9dcee90537dcfa

Review comments Signed-off-by: Aldo Culquicondor <acondor@google.com>

view details

push time in 2 days

push eventalculquicondor/website

Aldo Culquicondor

commit sha 72090c98af46398f2b3498a79ab5ff7b2b50254d

Add Scheduling Configuration reference doc Built from the existing Scheduling Profiles doc. Signed-off-by: Aldo Culquicondor <acondor@google.com>

view details

push time in 2 days

push eventalculquicondor/website

Aldo Culquicondor

commit sha c89022785445c60b1cc461bcc21139185c3d5716

Add Scheduling Configuration reference doc Built from the existing Scheduling Profiles doc. Signed-off-by: Aldo Culquicondor <acondor@google.com>

view details

push time in 2 days

Pull request review commentkubernetes/website

Add Scheduling Configuration reference doc

 extension points:
    filtering phase. The scheduler will then select the node with the highest
    weighted scores sum.
 1. `Reserve`: This is an informational extension point that notifies plugins
-   when resources have being reserved for a given Pod.
+   when resources have being reserved for a given Pod. Plugins also implement an
+   `Unreserve` call, that gets called in the case of failure during of after
+   `Reserve`.
 1. `Permit`: These plugins can prevent or delay the binding of a Pod.
 1. `PreBind`: These plugins perform any work required before a Pod is bound.
 1. `Bind`: The plugins bind a Pod to a Node. Bind plugins are called in order
    and once one has done the binding, the remaining plugins are skipped. At
    least one bind plugin is required.
 1. `PostBind`: This is an informational extension point that is called after
    a Pod has been bound.
-1. `UnReserve`: This is an informational extension point that is called if
-   a Pod is rejected after being reserved and put on hold by a `Permit` plugin.
+

I think the description already is enough for users of component config. Such detail could go in https://kubernetes.io/docs/concepts/scheduling-eviction/scheduling-framework/

alculquicondor

comment created time in 2 days

pull request commentkubernetes/website

Add Scheduling Configuration reference doc

cc @cofyc for VolumeBinding changes

alculquicondor

comment created time in 2 days

Pull request review commentkubernetes/website

Add Scheduling Configuration reference doc

 ---
-title: Scheduling Profiles
+title: Scheduling Configuration

Done

alculquicondor

comment created time in 2 days

push eventalculquicondor/website

Aldo Culquicondor

commit sha 86cbfe291e267aa3e4823021760fdffb86918d34

Add Scheduling Configuration reference doc Built from the existing Scheduling Profiles doc. Signed-off-by: Aldo Culquicondor <acondor@google.com>

view details

push time in 2 days

Pull request review commentkubernetes/website

Docs for new default PodTopologySpread functionality and gate

 There are some implicit conventions worth noting here:
 
 ### Cluster-level default constraints
 
-{{< feature-state for_k8s_version="v1.18" state="alpha" >}}
-

I put the annotation back (but beta)

alculquicondor

comment created time in 2 days

push eventalculquicondor/website

Aldo Culquicondor

commit sha f2d775fb32e0981f6f3fd45250a538b3af7adee3

Docs for new default PodTopologySpread functionality and gate Signed-off-by: Aldo Culquicondor <acondor@google.com>

view details

push time in 2 days

pull request commentkubernetes/website

WIP: Add Scheduling Configuration reference doc

/assign @Huang-Wei

cc @pancernik

alculquicondor

comment created time in 2 days

push eventalculquicondor/website

Javi Sabalete

commit sha c5f188da73c00e602b2648ee1eb28c0b95b87c3f

Add content/es/docs/concepts/workloads/pods/pod.md

view details

inductor

commit sha 638eaf6f27e3fd32526632f16fff4075e00352ee

fix typo (#19878)

view details

Naoki Oketani

commit sha 109e8c5fa248abaf202967c0fa4b75442fa68fb3

replace http helm link with https (#19889)

view details

iaoiui

commit sha a315c38000773f8dccf25affffd64daa62492661

Fix typo kubectl exposed -> kubectl expose

view details

Naoki Oketani

commit sha 13274fbdd415580de18973846bc9557eb68c3585

update link to /ja/docs/concepts/services-networking/dns-pod-service/

view details

Naoki Oketani

commit sha 65b887f07023c5fda3d03c5848af6a6bd049bf10

update link to /ja/docs/concepts/workloads/pods/pod-lifecycle/

view details

Naoki Oketani

commit sha 0d8f82180d7ae72abc10eb7a7e8141cb71c151b7

update link to /ja/docs/concepts/overview/working-with-objects/labels/

view details

Vageesha17

commit sha 9e40a8fe983d9b45a945f10bed146e2621bd63e3

updates for PR #20131

view details

Naoki Oketani

commit sha 3e1f709cd3bb4fefab4e0c1a2fd0a1d3afd5a2cc

update link to /ja/docs/concepts/overview/kubernetes-api/

view details

Kubernetes Prow Robot

commit sha 9aa911b4063a94d0637d780eae8f6c9a127662bb

Merge pull request #20237 from oke-py/ja-link-4 update link to /ja/docs/concepts/overview/kubernetes-api/

view details

Naoki Oketani

commit sha 338763c74a064186200546f78ae4e2475f5c3781

update link to /ja/docs/tutorials/stateless-application/expose-external-ip-address/

view details

Kubernetes Prow Robot

commit sha db008fad2892a12839770f3a226317e9a765e3c0

Merge pull request #20124 from oke-py/ja-link-3 update link to /ja/docs/concepts/overview/working-with-objects/labels/

view details

Yury Tsarev

commit sha 18cca972a603ee8b0a637aeb0c03463f818f4617

Document pod DNS resolution schema * Currently documentation mentions resolvable FQDNs for services only * Documentation for pods is confusing in regards of local pod `hostname` wich actually does not match in-cluster DNS resolution * This PR clarifies FQDN schema that is used for pod DNS resolution

view details

Pick1a1username

commit sha f4eab243f60eebf99cafffe6ffabbc7eb46c793a

ja translation updated based on 1.17 of en.

view details

Pick1a1username

commit sha 867f35573a7272b53a08c24a3c95ff825d0e63a7

ja translation updated based on 1.17 of en.

view details

Guangze GAO

commit sha cb84e91c708051c1b91ab05908ecb5521b0eda04

Translation blog 2018-08-02-dynamically-expand-volume-csi-translation

view details

Guangze GAO

commit sha 11d714763e697a8de2c5eedc6ab005183352bf63

Blog translation Kubernetes-Community-Meeting-Notes

view details

Guangze GAO

commit sha 8ce1065b75f8082b0d8bb0a0788964119b4556b6

Translation-Appc-Support-For-Kubernetes-Through-Rkt

view details

Kubernetes Prow Robot

commit sha acaf534cab23fad323cec2bf73ba559aa1672ff3

Merge pull request #20100 from oke-py/ja-link-2 update link to /ja/docs/concepts/workloads/pods/pod-lifecycle/

view details

Kubernetes Prow Robot

commit sha 994b9e3204f1cb83f75d7798764c40e4caf3206a

Merge pull request #20559 from Pick1a1username/dev-1.17-ja.2 Update Outdated Japanese Translation

view details

push time in 2 days

pull request commentkubernetes/kubernetes

selectorspread: access listers in plugin instantiation

/ok-to-test

adtac

comment created time in 2 days

Pull request review commentkubernetes/kubernetes

Fix ListZonesInRegion() after client BasePath change

 func (g *Cloud) ListZonesInRegion(region string) ([]*compute.Zone, error) {
 	defer cancel()
 
 	mc := newZonesMetricContext("list", region)
-	list, err := g.c.Zones().List(ctx, filter.Regexp("region", g.getRegionLink(region)))
+	list, err := g.c.Zones().List(ctx, filter.Regexp("region", fmt.Sprintf(".*/regions/%s", region)))

Why not fix the getRegionLink function instead? Also, a comment here would be worthwhile.
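
For context, a standalone sketch of the pattern the fix builds (illustrative only; the real call goes through the cloud provider's filter helper): it matches any region URL ending in `/regions/<name>`, regardless of the client base path.

```go
package main

import (
	"fmt"
	"regexp"
)

// regionPattern builds the same kind of expression as the fix in the diff:
// match any region URL ending in /regions/<name>, independent of base path.
func regionPattern(region string) *regexp.Regexp {
	return regexp.MustCompile(fmt.Sprintf(".*/regions/%s", region))
}

func main() {
	re := regionPattern("us-central1")
	fmt.Println(re.MatchString("https://www.googleapis.com/compute/v1/projects/p/regions/us-central1"))     // true
	fmt.Println(re.MatchString("https://compute.googleapis.com/compute/v1/projects/p/regions/us-central1")) // still true with a different base path
}
```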

jkaniuk

comment created time in 2 days

Pull request review commentkubernetes/website

WIP: Add Scheduling Configuration reference doc

 ---
-title: Scheduling Profiles
+title: Scheduling Configuration

Can you refer me to an example of how to do a redirect?

alculquicondor

comment created time in 2 days

Pull request review commentkubernetes/kubernetes

Bypass PreFilter in ServiceAfffinity if AffinityLabels arg is not present

 func TestPreFilterStateAddRemovePod(t *testing.T) {
 				p := &ServiceAffinity{
 					sharedLister:  snapshot,
 					serviceLister: fakeframework.ServiceLister(test.services),
+					args: config.ServiceAffinityArgs{
+						AffinityLabels: []string{"region", "zone"},

I think Dave's request is to add a new test for the other scenario.

Huang-Wei

comment created time in 2 days

Pull request review commentkubernetes/website

Docs for new default PodTopologySpread functionality and gate

 There are some implicit conventions worth noting here:
 
 ### Cluster-level default constraints
 
-{{< feature-state for_k8s_version="v1.18" state="alpha" >}}
-

In the sense that the configuration API is beta, then yes, this could be considered beta as well. @Huang-Wei do you agree?

alculquicondor

comment created time in 2 days

pull request commentkubernetes/website

Docs for new default PodTopologySpread functionality and gate

/assign @Huang-Wei

alculquicondor

comment created time in 2 days

pull request commentkubernetes/website

Docs for new default PodTopologySpread functionality and gate

This is ready for review

alculquicondor

comment created time in 2 days

push eventalculquicondor/website

Aldo Culquicondor

commit sha 8b45f00a703a59b1518d967c48d514cf9a81954a

Docs for new default PodTopologySpread functionality and gate Signed-off-by: Aldo Culquicondor <acondor@google.com>

view details

push time in 2 days

push eventalculquicondor/website

Javi Sabalete

commit sha c5f188da73c00e602b2648ee1eb28c0b95b87c3f

Add content/es/docs/concepts/workloads/pods/pod.md

view details

inductor

commit sha 638eaf6f27e3fd32526632f16fff4075e00352ee

fix typo (#19878)

view details

Naoki Oketani

commit sha 109e8c5fa248abaf202967c0fa4b75442fa68fb3

replace http helm link with https (#19889)

view details

iaoiui

commit sha a315c38000773f8dccf25affffd64daa62492661

Fix typo kubectl exposed -> kubectl expose

view details

Naoki Oketani

commit sha 13274fbdd415580de18973846bc9557eb68c3585

update link to /ja/docs/concepts/services-networking/dns-pod-service/

view details

Naoki Oketani

commit sha 65b887f07023c5fda3d03c5848af6a6bd049bf10

update link to /ja/docs/concepts/workloads/pods/pod-lifecycle/

view details

Naoki Oketani

commit sha 0d8f82180d7ae72abc10eb7a7e8141cb71c151b7

update link to /ja/docs/concepts/overview/working-with-objects/labels/

view details

Vageesha17

commit sha 9e40a8fe983d9b45a945f10bed146e2621bd63e3

updates for PR #20131

view details

Naoki Oketani

commit sha 3e1f709cd3bb4fefab4e0c1a2fd0a1d3afd5a2cc

update link to /ja/docs/concepts/overview/kubernetes-api/

view details

Kubernetes Prow Robot

commit sha 9aa911b4063a94d0637d780eae8f6c9a127662bb

Merge pull request #20237 from oke-py/ja-link-4 update link to /ja/docs/concepts/overview/kubernetes-api/

view details

Naoki Oketani

commit sha 338763c74a064186200546f78ae4e2475f5c3781

update link to /ja/docs/tutorials/stateless-application/expose-external-ip-address/

view details

Kubernetes Prow Robot

commit sha db008fad2892a12839770f3a226317e9a765e3c0

Merge pull request #20124 from oke-py/ja-link-3 update link to /ja/docs/concepts/overview/working-with-objects/labels/

view details

Yury Tsarev

commit sha 18cca972a603ee8b0a637aeb0c03463f818f4617

Document pod DNS resolution schema * Currently documentation mentions resolvable FQDNs for services only * Documentation for pods is confusing in regards of local pod `hostname` wich actually does not match in-cluster DNS resolution * This PR clarifies FQDN schema that is used for pod DNS resolution

view details

Pick1a1username

commit sha f4eab243f60eebf99cafffe6ffabbc7eb46c793a

ja translation updated based on 1.17 of en.

view details

Pick1a1username

commit sha 867f35573a7272b53a08c24a3c95ff825d0e63a7

ja translation updated based on 1.17 of en.

view details

Kubernetes Prow Robot

commit sha acaf534cab23fad323cec2bf73ba559aa1672ff3

Merge pull request #20100 from oke-py/ja-link-2 update link to /ja/docs/concepts/workloads/pods/pod-lifecycle/

view details

Kubernetes Prow Robot

commit sha 994b9e3204f1cb83f75d7798764c40e4caf3206a

Merge pull request #20559 from Pick1a1username/dev-1.17-ja.2 Update Outdated Japanese Translation

view details

jqmichael

commit sha c94259e781a1eb23735f04f6ec08a9504898a68f

Update disruptions.md

view details

Naoki Oketani

commit sha 56dd658391d565d4ebe6b5eaba38466854fe73ec

update link to /ja/docs/concepts/overview/working-with-objects/kubernetes-objects/

view details

Naoki Oketani

commit sha bc0d46bc91a78cf0444b634bdde38629c6451711

update link to /ja/docs/concepts/extend-kubernetes/api-extension/custom-resources/

view details

push time in 2 days

issue commentkubernetes/kubernetes

Add swagger docs for kube-scheduler APIs

/unassign @angao

alculquicondor

comment created time in 3 days

issue commentkubernetes/kubernetes

Add swagger docs for kube-scheduler APIs

/help

We should handle this in 1.20

alculquicondor

comment created time in 3 days

issue commentkubernetes/enhancements

Graduate the kube-scheduler ComponentConfig to v1beta1

They were labelled as nice-to-have. We won't be completing those for 1.19

  • The feature gate changes had pending discussions
  • The reference docs generation is not a trivial effort. Sadly, we didn't get a contributor to handle it.

But the core functionality is completed.

luxas

comment created time in 3 days

pull request commentkubernetes/kubernetes

Return a FitError when PreFilter fails with unschedulable status

lgtm, but leaving it to @Huang-Wei

ahg-g

comment created time in 3 days

pull request commentkubernetes/kubernetes

Add back anti-affinity to CoreDNS pods

Topology Spreading is going GA in 1.19. Please update.

rajansandeep

comment created time in 3 days

issue commentkubernetes/org

REQUEST: New membership for pancernik

+1 Thanks @pancernik for your contributions to scheduler component config!

pancernik

comment created time in 3 days

issue openedomegaup/omegaup

[BUG] "Editar" (edit) problem link is generated incorrectly

Expected Behavior

Click on edit problem from the problem view

Current Behavior

Wrong link

https://omegaup.com/problem/%7B$problem_alias%7D/edit/

Your environment

Include the relevant details about the environment in which you reproduced this bug

  • Browser name (e.g. Chrome 39, node.js 5.4): Chrome 81
  • Operating system and version (desktop or mobile): Linux

created time in 3 days

Pull request review commentkubernetes/kubernetes

Change the exception to avoid the cost of preemption

 func TestMultipleConstraints(t *testing.T) {
 		nodes        []*v1.Node
 		existingPods []*v1.Pod
 		fits         map[string]bool
+		wantErrCode  map[string]framework.Code

Now that I think of it, we only need wantStatusCode map[string]framework.Code to replace both fits and wantErrCode

chendave

comment created time in 8 days

Pull request review commentkubernetes/kubernetes

breakdown PodSchedulingDuration by number of attempts

 func (sched *Scheduler) scheduleOne(ctx context.Context) {
 			if klog.V(2).Enabled() {
 				klog.InfoS("Successfully bound pod to node", "pod", klog.KObj(pod), "node", scheduleResult.SuggestedHost, "evaluatedNodes", scheduleResult.EvaluatedNodes, "feasibleNodes", scheduleResult.FeasibleNodes)
 			}
-
 			metrics.PodScheduled(prof.Name, metrics.SinceInSeconds(start))
 			metrics.PodSchedulingAttempts.Observe(float64(podInfo.Attempts))
-			metrics.PodSchedulingDuration.Observe(metrics.SinceInSeconds(podInfo.InitialAttemptTimestamp))
+
+			// We breakdown the pod scheduling duration by attempts (capped to a limit).
+			attempts := podInfo.Attempts
+			if attempts > 50 {
+				attempts = 50
+			}
+			metrics.PodSchedulingDuration.WithLabelValues(string(attempts)).Observe(metrics.SinceInSeconds(podInfo.InitialAttemptTimestamp))

Could you send a parent PR that changes the variable name InitialAttemptTimestamp to QueueuingTimestamp or something like that?
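
One detail worth flagging in the diff above, as an aside: in Go, converting an int with `string(attempts)` yields a rune rather than a decimal string, so the metric label conversion likely needs `strconv.Itoa` (or `fmt.Sprint`). A minimal sketch of the capping plus conversion, with a hypothetical helper name:

```go
package main

import (
	"fmt"
	"strconv"
)

// attemptsLabel (hypothetical helper) caps the attempt count and converts it
// to a decimal metric label; string(attempts) would produce a rune instead.
func attemptsLabel(attempts int) string {
	if attempts > 50 {
		attempts = 50
	}
	return strconv.Itoa(attempts)
}

func main() {
	fmt.Println(attemptsLabel(3))   // "3"
	fmt.Println(attemptsLabel(120)) // "50"
}
```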

ahg-g

comment created time in 8 days

issue commentkubernetes/kubernetes

Daemonset does not provision to all nodes, 0 nodes available

If possible, also run 1.18.5, in case we inadvertently fixed the issue.

zetaab

comment created time in 8 days

issue commentkubernetes/kubernetes

Daemonset does not provision to all nodes, 0 nodes available

Are you able to look at your kube-scheduler logs? Anything that seems relevant?

zetaab

comment created time in 8 days

Pull request review commentkubernetes/kubernetes

Use NodeWrapper to directly initialize nodes with labels

 func initTestSchedulerForFrameworkTest(t *testing.T, testCtx *testutils.TestCont
 	go testCtx.Scheduler.Run(testCtx.Ctx)
 
 	if nodeCount > 0 {
-		_, err := createNodes(testCtx.ClientSet, "test-node", nil, nodeCount)
+		_, err := createNodes(testCtx.ClientSet, "test-node", st.MakeNode(), nodeCount)

Shouldn't MakeNode call Capacity(nil) or something like that?

nodo

comment created time in 8 days

Pull request review commentkubernetes/kubernetes

Use NodeWrapper to directly initialize nodes with labels

 func (n *NodeWrapper) Label(k, v string) *NodeWrapper {
 	n.Labels[k] = v
 	return n
 }
+
+// Capacity sets the capacity and the allocatable resources of the inner node.
+// Each entry in `resources` corresponds to a resource name and its quantity.

Indicate that sets 32 pods limit by default
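
An illustrative sketch of what that default could look like, using simplified stand-in types (the real wrapper works with v1.ResourceList and resource quantities):

```go
package main

import "fmt"

// nodeWrapper is a stand-in for the test wrapper being discussed.
type nodeWrapper struct {
	capacity map[string]int64
}

// Capacity sets the node's capacity/allocatable from `resources`, defaulting
// the "pods" resource to 32 when the caller does not provide it.
func (n *nodeWrapper) Capacity(resources map[string]int64) *nodeWrapper {
	res := map[string]int64{"pods": 32} // default pods limit
	for name, quantity := range resources {
		res[name] = quantity // caller-provided values override the default
	}
	n.capacity = res
	return n
}

func main() {
	n := (&nodeWrapper{}).Capacity(map[string]int64{"cpu": 4000})
	fmt.Println(n.capacity) // map[cpu:4000 pods:32]
}
```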

nodo

comment created time in 8 days

Pull request review commentkubernetes/kubernetes

Change the exception to avoid the cost of preemption

 func TestSingleConstraint(t *testing.T) {
 		nodes        []*v1.Node
 		existingPods []*v1.Pod
 		fits         map[string]bool
+		wantErrCode  map[string]int

please do map[string]status.Code and use constants instead of numbers.
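
For example, the test field could look roughly like this sketch, assuming the framework's status-code constants (the names `framework.Success` and `framework.Unschedulable` are assumptions here); a single map can then replace both `fits` and `wantErrCode`:

```go
// Hypothetical sketch: one map covers both "fits" and the expected code.
wantStatusCode := map[string]framework.Code{
	"node-a": framework.Success,       // pod fits
	"node-b": framework.Unschedulable, // rejected by the constraint
}
_ = wantStatusCode
```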

chendave

comment created time in 8 days

issue commentkubernetes/kubernetes

Daemonset does not provision to all nodes, 0 nodes available

Understood. I was trying to confirm your setup to make sure I'm not missing something.

I don't think the node is "unseen" by the scheduler. Given that we see 0/9 nodes are available, we can conclude that the node is indeed in the cache. It's more like the unschedulable reason is lost somewhere, so we don't include it in the event.

zetaab

comment created time in 9 days

pull request commentkubernetes/kubernetes

Change the exception to avoid the cost of preemption

/approve

Also fix golint

chendave

comment created time in 9 days

pull request commentkubernetes/kubernetes

Change the exception to avoid the cost of preemption

Because the pod is pending scheduling, the scheduler will recheck whether there is a node available for binding after a period of time, and then the preemption will happen there again.

Well, that's normal behavior for any pending pod. Please remove that note in your PR description so we don't get confused about the bug in the future.

chendave

comment created time in 9 days

issue commentkubernetes/kubernetes

Daemonset does not provision to all nodes, 0 nodes available

I'm confused now... Does your daemonset tolerate the taint for master nodes? In other words... is the bug for you just the scheduling event or also the fact that the pods should have been scheduled?

zetaab

comment created time in 9 days

pull request commentkubernetes/kubernetes

Rename DefaultPodTopologySpread plugin #91994

Thank you for the contribution 😃

rakeshreddybandi

comment created time in 9 days

pull request commentkubernetes/kubernetes

kube-scheduler: allow deprecated options to be set with configfile

oh, don't forget to update the release note with the algorithm provider detail

damemi

comment created time in 9 days

pull request commentkubernetes/kubernetes

kube-scheduler: allow deprecated options to be set with configfile

/approve

@ahg-g anything to add?

damemi

comment created time in 9 days

issue commentkubernetes/kubernetes

Daemonset does not provision to all nodes, 0 nodes available

I'm posting questions as I think of possible scenarios:

  • Do you have other master nodes in your cluster?
zetaab

comment created time in 9 days

issue commentkubernetes/kubernetes

Daemonset does not provision to all nodes, 0 nodes available

Long shot, but if you run into it again... could you check if there are any nominated pods to the node that doesn't show up?

zetaab

comment created time in 9 days

Pull request review commentkubernetes/kubernetes

kube-scheduler: allow deprecated options to be set with configfile

 func (o *DeprecatedOptions) AddFlags(fs *pflag.FlagSet, cfg *kubeschedulerconfig
 		return
 	}
 
-	fs.StringVar(&o.AlgorithmProvider, "algorithm-provider", o.AlgorithmProvider, "DEPRECATED: the scheduling algorithm provider to use, one of: "+algorithmprovider.ListAlgorithmProviders())
-	fs.StringVar(&o.PolicyConfigFile, "policy-config-file", o.PolicyConfigFile, "DEPRECATED: file with scheduler policy configuration. This file is used if policy ConfigMap is not provided or --use-legacy-policy-config=true")
-	usage := fmt.Sprintf("DEPRECATED: name of the ConfigMap object that contains scheduler's policy configuration. It must exist in the system namespace before scheduler initialization if --use-legacy-policy-config=false. The config must be provided as the value of an element in 'Data' map with the key='%v'", kubeschedulerconfig.SchedulerPolicyConfigMapKey)
+	fs.StringVar(&o.AlgorithmProvider, "algorithm-provider", o.AlgorithmProvider, "DEPRECATED: the scheduling algorithm provider to use, this overrides component config profiles. Choose one of: "+algorithmprovider.ListAlgorithmProviders())

It doesn't override them. It sets the default plugins. So you can still enable/disable plugins from them.

damemi

comment created time in 9 days

Pull request review commentkubernetes/kubernetes

Use NodeWrapper to directly initialize nodes with labels

 func createNodes(cs clientset.Interface, prefix string, res *v1.ResourceList, nu
 	return nodes[:], nil
 }
 
+func defaultNodeWrapper() *st.NodeWrapper {

Sg, we can just do res[pods] = 32 at the beginning of the function, and the loop would override it.

nodo

comment created time in 9 days

pull request commentkubernetes/kubernetes

kube-scheduler: allow deprecated options to be set with configfile

Are there any adverse effects you can think of by allowing algorithmProvider and CC? It seems like algorithmProvider takes precedence over CC so we could document that if it isn't already

Yes, that should work fine. Please document the semantics in --algorithm-provider help string.

damemi

comment created time in 9 days

Pull request review commentkubernetes/kubernetes

kube-scheduler: allow deprecated options to be set with configfile

 func (o *DeprecatedOptions) Validate() []error {
 	return errs

Document in the applicable help strings above that scheduler would fail if the flags are used in combination with scheduling plugins.

damemi

comment created time in 9 days

Pull request review commentkubernetes/kubernetes

kube-scheduler: allow deprecated options to be set with configfile

 func (o *Options) ApplyTo(c *schedulerappconfig.Config) error {
 			return err
 		}
 
-		// use the loaded config file only, with the exception of --address and --port. This means that
-		// none of the deprecated flags in o.Deprecated are taken into consideration. This is the old
-		// behaviour of the flags we have to keep.
+		// use the loaded config file only, with the exception of --address and --port.

This comment refers to o.CombinedInsecureServing.ApplyToFrmLoadedConfig below

damemi

comment created time in 9 days

Pull request review commentkubernetes/kubernetes

kube-scheduler: allow deprecated options to be set with configfile

 func newDefaultComponentConfig() (*kubeschedulerconfig.KubeSchedulerConfiguratio
 
 // Flags returns flags for a specific scheduler by section name
 func (o *Options) Flags() (nfs cliflag.NamedFlagSets) {
 	fs := nfs.FlagSet("misc")
-	fs.StringVar(&o.ConfigFile, "config", o.ConfigFile, "The path to the configuration file. Flags override values in this file.")
+	fs.StringVar(&o.ConfigFile, "config", o.ConfigFile, "The path to the configuration file. The following flags can overwrite fields in this file:\n --address\n --port\n --use-legacy-policy-config\n --policy-configmap\n --algorithm-provider")

you missed --policy-config-file

damemi

comment created time in 9 days

issue commentkubernetes/kubernetes

Daemonset does not provision to all nodes, 0 nodes available

"NodeName" try was to highligh, that node is usable and pod gets there if wanted. So thing is not node's unability to start pods.

Note that nothing guards against over-committing a node, but the scheduler. So this doesn't really show much.

In the end the test pod wasn't started at the matching node because of taints, but that's another story (and should have been the case already at the 1st event).

My question is: was the 9th node tainted from the beginning? I'm trying to look for (1) reproducible steps to reach the state or (2) where the bug could be.

zetaab

comment created time in 9 days

Pull request review commentkubernetes/kubernetes

kube-scheduler: allow deprecated options to be set with configfile

 func (o *DeprecatedOptions) Validate() []error {
 	return errs
 }
 
-// ApplyTo sets cfg.AlgorithmSource from flags passed on the command line in the following precedence order:
+// PolicyConfig returns whether a legacy policy config has been set
+func (o *DeprecatedOptions) PolicyConfig() bool {
+	if o == nil {
+		return false
+	}
+	return o.UseLegacyPolicyConfig || len(o.PolicyConfigFile) > 0 || len(o.PolicyConfigMapName) > 0 || len(o.AlgorithmProvider) > 0
+}
+
+// ApplyAlgorithmSourceTo sets cfg.AlgorithmSource from flags passed on the command line in the following precedence order:
 //
 // 1. --use-legacy-policy-config to use a policy file.
 // 2. --policy-configmap to use a policy config map value.
 // 3. --algorithm-provider to use a named algorithm provider.
-//
-// This function is only called when no config file is provided.
-func (o *DeprecatedOptions) ApplyTo(cfg *kubeschedulerconfig.KubeSchedulerConfiguration) error {
+func (o *DeprecatedOptions) ApplyAlgorithmSourceTo(cfg *kubeschedulerconfig.KubeSchedulerConfiguration) {

I think I prefer we extract ApplyAlgorithmSource, but leave an ApplyTo function which calls it.
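
A rough sketch of that shape, reusing the names from the diff (the body is illustrative, not the actual change):

```go
// ApplyTo stays as the single entry point and delegates to the extracted
// method; any remaining deprecated flags would still be applied here.
func (o *DeprecatedOptions) ApplyTo(cfg *kubeschedulerconfig.KubeSchedulerConfiguration) {
	if o == nil {
		return
	}
	o.ApplyAlgorithmSourceTo(cfg)
	// ...apply other deprecated flags to cfg...
}
```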

damemi

comment created time in 9 days

Pull request review commentkubernetes/kubernetes

kube-scheduler: allow deprecated options to be set with configfile

 func (o *Options) ApplyTo(c *schedulerappconfig.Config) error {
 			return err
 		}
 
-		// use the loaded config file only, with the exception of --address and --port. This means that
-		// none of the deprecated flags in o.Deprecated are taken into consideration. This is the old
-		// behaviour of the flags we have to keep.
+		// use the loaded config file only, with the exception of --address and --port.
 		c.ComponentConfig = *cfg
 
+		// if the user has set CC profiles and is trying to use a Policy config, error out

You could call ApplyAlgorithmSourceTo before this and check that .AlgorithmSource.Policy is empty

damemi

comment created time in 9 days

Pull request review commentkubernetes/kubernetes

kube-scheduler: allow deprecated options to be set with configfile

 func (o *DeprecatedOptions) Validate() []error {
 	return errs
 }
 
-// ApplyTo sets cfg.AlgorithmSource from flags passed on the command line in the following precedence order:
+// PolicyConfig returns whether a legacy policy config has been set

Document the usage of the policy-related command line flags.

damemi

comment created time in 9 days

Pull request review commentkubernetes/kubernetes

kube-scheduler: allow deprecated options to be set with configfile

 func newDefaultComponentConfig() (*kubeschedulerconfig.KubeSchedulerConfiguratio
 
 // Flags returns flags for a specific scheduler by section name
 func (o *Options) Flags() (nfs cliflag.NamedFlagSets) {
 	fs := nfs.FlagSet("misc")
-	fs.StringVar(&o.ConfigFile, "config", o.ConfigFile, "The path to the configuration file. Flags override values in this file.")
+	fs.StringVar(&o.ConfigFile, "config", o.ConfigFile, "The path to the configuration file. The following flags can overwrite fields in this file:\n --use-legacy-policy-config\n --policy-configmap\n --algorithm-provider")

Don't forget --adress and --port

damemi

comment created time in 9 days

Pull request review commentkubernetes/kubernetes

kube-scheduler: allow deprecated options to be set with configfile

 func (o *Options) ApplyTo(c *schedulerappconfig.Config) error {
 	return nil
 }
 
+// emptySchedulerProfileConfig returns true if the list of profiles passed to it contains only
+// the "default-scheduler" profile with no plugins or pluginconfigs registered
+// (this is the default empty profile initialized by defaults.go)
+func emptySchedulerProfileConfig(profiles []kubeschedulerconfig.KubeSchedulerProfile) bool {
+	return len(profiles) == 1 &&
+		len(profiles[0].PluginConfig) == 0 &&
+		profiles[0].Plugins == nil &&
+		profiles[0].SchedulerName == "default-scheduler"

You should still be able to set a schedulerName
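
Concretely, the suggestion might read like this sketch (illustrative, based on the function in the diff): drop the scheduler-name check so a custom schedulerName alone still counts as an otherwise-empty profile.

```go
// emptySchedulerProfileConfig returns true if the single profile has no
// plugins or plugin configs set, regardless of its schedulerName.
func emptySchedulerProfileConfig(profiles []kubeschedulerconfig.KubeSchedulerProfile) bool {
	return len(profiles) == 1 &&
		len(profiles[0].PluginConfig) == 0 &&
		profiles[0].Plugins == nil
}
```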

damemi

comment created time in 9 days

Pull request review commentkubernetes/kubernetes

kube-scheduler: allow deprecated options to be set with configfile

 func (o *Options) Flags() (nfs cliflag.NamedFlagSets) {
 
 func (o *Options) ApplyTo(c *schedulerappconfig.Config) error {
 	if len(o.ConfigFile) == 0 {
 		c.ComponentConfig = o.ComponentConfig
+		o.Deprecated.ApplyProfileTo(&c.ComponentConfig)
 
 		// only apply deprecated flags if no config file is loaded (this is the old behaviour).

Still call the function after the comment, but remove the word "only" from it.

damemi

comment created time in 9 days

issue commentkubernetes/kubernetes

Daemonset does not provision to all nodes, 0 nodes available

"nodeName" is not a selector. Using nodeName would bypass scheduling.

Fourth came when the node came back up. The node that had an issue was master, so node was not going there (but it shows that node was not found at 3 earlier events). Interesting thing with fourth event is that there's still information from one node missing. Event says there's 0/9 nodes available, but description is given only from 8.

You are saying that the reason why the pod shouldn't have been scheduled in the missing node is because it was a master?

We are seeing 8 node(s) didn't match node selector going to 7. I assume no nodes were removed at this point, correct?

zetaab

comment created time in 9 days
