Ismail Alidzhikov ialidzhikov Sofia, Bulgaria

fmi/java-course 79

Materials for the "Modern Java Technologies" (Съвременни Java технологии) course :running:

gardener/logging 10

Components needed for Gardener logging

GeekyCamp/geeky-camp-4 2

SAP GeekyCamp 4.0

gardener/machine-controller-manager-provider-openstack 1

Out of tree implementation for machine-controller-manager's provider-openstack

devilchild656/Bushido 0

Hack Fmi Team Bushido 2014

ialidzhikov/afero 0

A FileSystem Abstraction System for Go

ialidzhikov/apiserver-proxy 0

SNI Passthrough proxy for kube-apiservers

Pull request review comment gardener/gardener

[GEP-15] Rotate SSH Keys

 Currently, the `ssh` key pair for the shoot nodes are created once during shoot
         - The "execution" script includes also the original user data script, which it writes to `PATH_CLOUDCONFIG`, compares it against the previous cloud config and runs the script in case it has changed
         - Running the [original user data](https://github.com/gardener/gardener/tree/master/pkg/operation/botanist/component/extensions/operatingsystemconfig/original) script will also run the `gardeneruser` component, where the `authorized_keys` file will be updated
         - After the most recent cloud-config user data was applied, the "execution" script annotates the node with `checksum/cloud-config-data: <cloud-config-checksum>` to indicate the success
+
+### Limitations
+
+For some cloud providers, like GCP, SSH keypairs are managed at the provider side and not on the seed cluster as Kubernetes

This is not correct. The SSH keys are always stored in the seed clusters as secrets and also synced to the garden cluster.

Simply put, on GCP the VMs do not have any static users; an agent on the nodes syncs the users and their SSH keys from the GCP IAM service.
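For illustration, a minimal client-go sketch of reading such a secret from the seed; the secret name, namespace, data key, and kubeconfig path below are assumptions for illustration, not taken from the PR:

```go
// Sketch only: reads a shoot's SSH key pair secret from its control-plane
// namespace in the seed. The secret name "ssh-keypair", the namespace
// "shoot--my-project--my-shoot", the data key, and the kubeconfig path are assumed.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/seed-kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	secret, err := client.CoreV1().Secrets("shoot--my-project--my-shoot").
		Get(context.TODO(), "ssh-keypair", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// The public key is what the gardeneruser component writes into the nodes' authorized_keys.
	fmt.Println(string(secret.Data["id_rsa.pub"])) // data key assumed
}
```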

xrstf

comment created time in an hour

push event gardener/machine-controller-manager

prashanth26

commit sha 16e66dee97a32118bdc41694bdfd7326ed703202

Avoids blocking of drain call

- There was a deadlock condition where the buffer is full and no more requests could be processed. This change fixes it.
- It also makes sure to deleteWorker in failure/negative cases

view details

prashanth26

commit sha 423962aba5504c18d2604bf534ab4bb5dd4053cb

Machine rollout is now improved

- Machine rollouts are now more as desired, with the number of replicas always maintained at `desired + maxSurge`.
- Earlier, machines in termination were left out of this calculation, but they are now considered with this change.

view details

prashanth26

commit sha c1249cff6b541dd2848b2f7137fa4976080e944e

Moved logs to higher levels and a minor NIT

view details

Prashanth

commit sha 73054235125ff300ec7fee2a3c76a4e756cc9ebf

Merge pull request #627 from prashanth26/regressions-2

- Machine rollouts are now more as desired, with the number of replicas always maintained at `desired + maxSurge`. Earlier, machines in termination were left out of this calculation, but they are now considered.
- Avoids blocking of the drain call when the buffer is full for the `volumeAttachmentHandlers`.
- `volumeAttachmentHandlers` are always closed now, even in failure cases.

view details

push time in an hour

PR merged gardener/machine-controller-manager

Allow controlled machine drain and rollout (labels: needs/ok-to-test, needs/review, size/s)

What this PR does / why we need it:

  • Machine rollouts are now more as desired, with the number of replicas always maintained at desired + maxSurge. Earlier, machines in termination were left out of this calculation, but they are now considered.
  • Avoids blocking of the drain call when the buffer is full for the volumeAttachmentHandlers (see the sketch after this list).
  • volumeAttachmentHandlers are always closed now, even in failure cases.
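For illustration, the general Go pattern that avoids this kind of deadlock is a non-blocking send into the bounded buffer. This is a hedged sketch of the pattern only, not the actual machine-controller-manager code; the type and field names are made up:

```go
// Sketch of a non-blocking publish into a bounded buffer: the general pattern
// for not letting a slow consumer block the producer (here: the drain call).
// Illustrative only; types and names are invented.
package main

import "fmt"

type volumeAttachmentHandler struct {
	workers []chan string // one bounded buffer per registered worker
}

func (v *volumeAttachmentHandler) dispatch(event string) {
	for _, w := range v.workers {
		select {
		case w <- event:
			// delivered
		default:
			// Buffer full: log and skip instead of blocking the caller forever.
			fmt.Println("worker buffer full, skipping event:", event)
		}
	}
}

func main() {
	h := &volumeAttachmentHandler{workers: []chan string{make(chan string, 2)}}
	for i := 0; i < 5; i++ {
		h.dispatch(fmt.Sprintf("attachment-%d", i)) // never blocks, even when the buffer is full
	}
}
```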

Which issue(s) this PR fixes: Fixes #

Special notes for your reviewer:

Release note:
Machine rollouts are now more as desired, with the number of replicas always maintained at `desired + maxSurge`. Earlier, machines in termination were left out of this calculation, but they are now considered with this change.
Avoids blocking of the drain call when the buffer is full for the volumeAttachmentHandlers.
+38 -16

2 comments

6 changed files

prashanth26

pr closed time in an hour

Pull request review comment gardener/gardener

Refactor dependency-watchdog Helm chart into component

+// Copyright (c) 2021 SAP SE or an SAP affiliate company. All rights reserved. This file is licensed under the Apache Software License, v. 2 except as noted otherwise in the LICENSE file
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package kubeapiserver
+
+import (
+	v1beta1constants "github.com/gardener/gardener/pkg/apis/core/v1beta1/constants"
+
+	restarterapi "github.com/gardener/dependency-watchdog/pkg/restarter/api"
+	scalerapi "github.com/gardener/dependency-watchdog/pkg/scaler/api"
+	appsv1 "k8s.io/api/apps/v1"
+	autoscalingv1 "k8s.io/api/autoscaling/v1"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+	"k8s.io/utils/pointer"
+)
+
+const (
+	// DependencyWatchdogExternalProbeSecretName is the name of the kubecfg secret with internal DNS for external access.
+	DependencyWatchdogExternalProbeSecretName = "dependency-watchdog-external-probe"
+	// DependencyWatchdogInternalProbeSecretName is the name of the kubecfg secret with cluster IP access.
+	DependencyWatchdogInternalProbeSecretName = "dependency-watchdog-internal-probe"
+)
+
+// DependencyWatchdogEndpointConfiguration returns the configuration for the dependency watchdog (endpoint role)
+// ensuring that its dependant pods are restarted as soon as it recovers from a crash loop.
+func DependencyWatchdogEndpointConfiguration() (map[string]restarterapi.Service, error) {
+	return map[string]restarterapi.Service{
+		v1beta1constants.DeploymentNameKubeAPIServer: {
+			Dependants: []restarterapi.DependantPods{
+				{
+					Name: v1beta1constants.GardenRoleControlPlane,
+					Selector: &metav1.LabelSelector{
+						MatchExpressions: []metav1.LabelSelectorRequirement{
+							{
+								Key:      v1beta1constants.GardenRole,
+								Operator: metav1.LabelSelectorOpIn,
+								Values:   []string{v1beta1constants.GardenRoleControlPlane},
+							},
+							{
+								Key:      v1beta1constants.LabelRole,
+								Operator: metav1.LabelSelectorOpNotIn,
+								Values:   []string{v1beta1constants.ETCDRoleMain},
+							},
+							{
+								Key:      v1beta1constants.LabelRole,
+								Operator: metav1.LabelSelectorOpNotIn,
+								Values:   []string{v1beta1constants.LabelAPIServer},
+							},

What do you think about:

								Values:   []string{v1beta1constants.ETCDRoleMain, v1beta1constants.LabelAPIServer},
							},

It was not configured like this previously, but should have the same effect.
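For illustration, a small sketch showing that the combined NotIn requirement selects the same pods as the two separate ones. The literal label key and values ("role", "etcd", "apiserver") are assumptions standing in for the v1beta1constants referenced above:

```go
// Demonstrates that one NotIn requirement with two values matches the same pods
// as two separate NotIn requirements with one value each.
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
)

func main() {
	separate := &metav1.LabelSelector{MatchExpressions: []metav1.LabelSelectorRequirement{
		{Key: "role", Operator: metav1.LabelSelectorOpNotIn, Values: []string{"etcd"}},
		{Key: "role", Operator: metav1.LabelSelectorOpNotIn, Values: []string{"apiserver"}},
	}}
	combined := &metav1.LabelSelector{MatchExpressions: []metav1.LabelSelectorRequirement{
		{Key: "role", Operator: metav1.LabelSelectorOpNotIn, Values: []string{"etcd", "apiserver"}},
	}}

	s1, _ := metav1.LabelSelectorAsSelector(separate)
	s2, _ := metav1.LabelSelectorAsSelector(combined)

	for _, podLabels := range []labels.Set{
		{"role": "etcd"}, {"role": "apiserver"}, {"role": "controller-manager"},
	} {
		// Both selectors agree on every label set.
		fmt.Println(podLabels, s1.Matches(podLabels), s2.Matches(podLabels))
	}
}
```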

rfranzke

comment created time in 3 hours

push event gardener/gardener

Vladimir Nachev

commit sha 9e4a943b046ae523bda93b6ea9a3615d98f41457

Allow gardenlet to list and watch priorityclasses.scheduling.k8s.io in the seed cluster (#4261)

view details

push time in 2 hours

PR merged gardener/gardener

Allow gardenlet to list and watch priorityclasses.scheduling.k8s.io in the seed cluster (labels: kind/bug, needs/ok-to-test, reviewed/lgtm, size/xs)

How to categorize this PR? /kind bug

What this PR does / why we need it: Fixes errors in the gardenlet like the following:

E0624 14:51:52.028638       1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:241: Failed to watch *v1.PriorityClass: failed to list *v1.PriorityClass: priorityclasses.scheduling.k8s.io is forbidden: User "system:serviceaccount:garden:gardenlet" cannot list resource "priorityclasses" in API group "scheduling.k8s.io" at the cluster scope
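For illustration, the kind of RBAC rule that resolves this error, allowing gardenlet to list and watch PriorityClasses. This is a hedged sketch; the exact ClusterRole and verb set added by the PR are not shown in this excerpt and are assumed here:

```go
// Illustrative only: an RBAC rule granting list/watch on PriorityClasses.
// The verb set and where the rule is attached are assumptions.
package main

import (
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
)

func main() {
	rule := rbacv1.PolicyRule{
		APIGroups: []string{"scheduling.k8s.io"},
		Resources: []string{"priorityclasses"},
		Verbs:     []string{"get", "list", "watch"},
	}
	fmt.Printf("%+v\n", rule)
}
```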

Which issue(s) this PR fixes: Follow up after #4129

Special notes for your reviewer:

Release note:
NONE
+3 -1

0 comments

2 changed files

vpnachev

pr closed time in 2 hours

PR opened gardener/gardener

Allow gardenlet to list and watch priorityclasses.scheduling.k8s.io in the seed cluster

How to categorize this PR? /kind bug

What this PR does / why we need it: Fixes errors in the gardenlet like the following:

E0624 14:51:52.028638       1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:241: Failed to watch *v1.PriorityClass: failed to list *v1.PriorityClass: priorityclasses.scheduling.k8s.io is forbidden: User "system:serviceaccount:garden:gardenlet" cannot list resource "priorityclasses" in API group "scheduling.k8s.io" at the cluster scope

Which issue(s) this PR fixes: Follow up after #4129

Special notes for your reviewer:

Release note:
NONE
+3 -1

0 comments

2 changed files

pr created time in 2 hours

pull request comment gardener/gardener

Added option for switching kube-proxy on/off per shoot.

Yes, this is the other option. The networking extensions need to implement validating admission webhooks that can prevent such wrong shoot configurations, just like the provider extensions do.
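For illustration, a minimal sketch of the check such a webhook could perform. This is not code from any extension; the struct and field names are made up stand-ins for the real Shoot API types:

```go
// Sketch of the validation a networking extension's admission webhook could
// perform: reject shoots that disable kube-proxy while using a CNI that cannot
// handle service routing itself. All names below are assumptions.
package main

import (
	"errors"
	"fmt"
)

type shoot struct {
	NetworkingType   string
	KubeProxyEnabled *bool
}

func validateKubeProxy(s shoot) error {
	if s.KubeProxyEnabled != nil && !*s.KubeProxyEnabled && s.NetworkingType == "calico" {
		return errors.New("kube-proxy must not be disabled when the calico networking extension is used")
	}
	return nil
}

func main() {
	disabled := false
	fmt.Println(validateKubeProxy(shoot{NetworkingType: "calico", KubeProxyEnabled: &disabled})) // rejected
	fmt.Println(validateKubeProxy(shoot{NetworkingType: "cilium", KubeProxyEnabled: &disabled})) // allowed
}
```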

ScheererJ

comment created time in 3 hours

pull request comment gardener/gardener

Refactor dependency-watchdog Helm chart into component

/cc @amshuman-kr

rfranzke

comment created time in 3 hours

pull request comment gardener/gardener

Added option for switching kube-proxy on/off per shoot.

@vpnachev What are your concerns about making it configurable in the Shoot? We can still have some validation in the calico extension preventing disabling kube-proxy, i.e., it would only be allowed when using cilium.

ScheererJ

comment created time in 3 hours

pull request comment gardener/gardener

Added option for switching kube-proxy on/off per shoot.

What will happen if the shoot is set to use calico and kube-proxy is disabled? I think the feature of disabling kube-proxy should stay in the networking extension, e.g. expose the flag in https://github.com/gardener/gardener-extension-networking-cilium/blob/master/pkg/apis/cilium/v1alpha1/types_network.go#L113-L136 and let the cilium extension take care of it (I have no beautiful idea in mind, but one ugly solution would be to add a node selector to the kube-proxy daemonset that selects no node).

You are absolutely correct that you can shoot yourself in the foot if you just disable kube-proxy. This is true for both calico and cilium. The issue is that something needs to be done in gardener, i.e. deploy/undeploy kube-proxy, and in the networking extension, i.e. handle service resolution or not. After discussion with @rfranzke this was the proposed solution, i.e. have the shoot specification as the leading entity and let the networking extension react to it. The setting in the networking extension (https://github.com/gardener/gardener-extension-networking-cilium/blob/be964c6cad05d7f80373219ef05b517bdbc2ed86/pkg/apis/cilium/v1alpha1/types_network.go#L123) should be removed in a follow-up step.

@vpnachev If you have a better idea, I am all for preventing users from shooting themselves in the foot.

@rfranzke Feel free to comment.

ScheererJ

comment created time in 3 hours

Pull request review comment gardener/gardener

Refactored support for node local dns.

 spec:
     - podSelector:
         matchExpressions:
         - {key: k8s-app, operator: In, values: [kube-dns]}
-    {{- if .Values.nodeLocalDNS.enabled }}
     - ipBlock:
         cidr: {{ .Values.nodeLocalDNS.kubeDNSClusterIP }}/32
-    {{- end }}

Ok, I reverted the changes related to the network policy in the kube-system namespace. As a positive side effect, it reduces the amount of files being changed.

ScheererJ

comment created time in 3 hours

pull request comment gardener/gardener

Added option for switching kube-proxy on/off per shoot.

What will happen if the shoot is set to use calico and kube-proxy is disabled? I think the feature of disabling kube-proxy should stay in the networking extension, e.g. expose the flag in https://github.com/gardener/gardener-extension-networking-cilium/blob/master/pkg/apis/cilium/v1alpha1/types_network.go#L113-L136 and let the cilium extension take care of it (I have no beautiful idea in mind, but one ugly solution would be to add a node selector to the kube-proxy daemonset that selects no node).
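For illustration, a rough sketch of that "ugly solution": give the kube-proxy DaemonSet a nodeSelector for a label that no node carries, so no pods are scheduled. The label key below is invented:

```go
// Illustrative only: disabling a DaemonSet by making its nodeSelector match no node.
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
)

func disableDaemonSet(ds *appsv1.DaemonSet) {
	// No node carries this (made-up) label, so the DaemonSet schedules zero pods.
	ds.Spec.Template.Spec.NodeSelector = map[string]string{
		"non-existing-label.example.com/never": "true",
	}
}

func main() {
	ds := &appsv1.DaemonSet{
		Spec: appsv1.DaemonSetSpec{
			Template: corev1.PodTemplateSpec{Spec: corev1.PodSpec{}},
		},
	}
	disableDaemonSet(ds)
	fmt.Println(ds.Spec.Template.Spec.NodeSelector)
}
```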

ScheererJ

comment created time in 4 hours

pull request comment gardener/gardener-extension-provider-aws

Switch landscaper deployment to ControllerDeployment

We already identified one project with no dependency on gardener, so the link approach via the vendor folder does not work here.

It does not have to be https://github.com/gardener/gardener, it can be https://github.com/gardener/landscaper or some other repo, but whatever can be implemented as a library should certainly be done that way.

robertgraeff

comment created time in 4 hours

Pull request review comment gardener/gardener

Refactored support for node local dns.

 spec:
     - podSelector:
         matchExpressions:
         - {key: k8s-app, operator: In, values: [kube-dns]}
-    {{- if .Values.nodeLocalDNS.enabled }}
     - ipBlock:
         cidr: {{ .Values.nodeLocalDNS.kubeDNSClusterIP }}/32
-    {{- end }}

To my understanding, we only discussed always allowing both CIDRs for the network policies created by gardenlet's seed controller, because we don't know which DNS component (coredns/node-local-dns) is running in the seed. For the shoot, however, we definitely know it because it's configured in the Shoot resource. Hence, I would indeed only allow the required network traffic if we have the possibility to shrink it down.

ScheererJ

comment created time in 4 hours

Pull request review comment gardener/gardener

Refactored support for node local dns.

 spec:
     - podSelector:
         matchExpressions:
         - {key: k8s-app, operator: In, values: [kube-dns]}
-    {{- if .Values.nodeLocalDNS.enabled }}
     - ipBlock:
         cidr: {{ .Values.nodeLocalDNS.kubeDNSClusterIP }}/32
-    {{- end }}

As discussed with you and @DockToFuture, I adapted all three network policies (garden namespace in the seed, control plane(s) in the seed, kube-system in the shoot) to always allow the node-local DNS entries, i.e. I removed the if checks. You can argue, though, that while it is required to remove them from the network policies in the garden namespace and the control plane(s) (unless you want to live with cascading effects), it is possible to let this if block stay in the kube-system network policy, as it should not affect anything apart from the cluster itself. I think removing the node-local checks in all three network policies is more consistent, but this is not a hard opinion. So if you insist, I can re-add the if check here.
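For illustration, the DNS peers such an "always allow" policy ends up with might look roughly like this. The concrete IPs are example values and the link-local node-local-dns address is an assumption, not taken from the PR:

```go
// Hedged sketch of DNS-related NetworkPolicy peers covering both coreDNS
// (pod selector and cluster IP) and node-local-dns. IPs are examples only.
package main

import (
	"fmt"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	peers := []networkingv1.NetworkPolicyPeer{
		{PodSelector: &metav1.LabelSelector{MatchLabels: map[string]string{"k8s-app": "kube-dns"}}},
		{IPBlock: &networkingv1.IPBlock{CIDR: "100.64.0.10/32"}},   // kube-dns cluster IP (example value)
		{IPBlock: &networkingv1.IPBlock{CIDR: "169.254.20.10/32"}}, // node-local-dns link-local IP (assumed default)
	}
	for _, p := range peers {
		fmt.Printf("%+v\n", p)
	}
}
```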

ScheererJ

comment created time in 4 hours

Pull request review comment gardener/gardener

feat: add helm chart releasing

 v1.33   | Week 39-40  | September 27, 2021     | October 10, 2021   | [@voelzmo]
 Apart from the release of the next version, the release responsible is also taking care of potential hotfix releases of the last three minor versions. The release responsible is the main contact person for coordinating new feature PRs for the next minor versions or cherry-pick PRs for the last three minor versions.
+
+### Release of helm charts
+
+The helm charts are released in lockstep with gardener releases. Check [the discussion in #4123](https://github.com/gardener/gardener/issues/4123) for more information about this decision.
+
+Therefore, the helm chart versions for the [controlplane](../../charts/gardener/controlplane/Chart.yaml) and [gardenlet](../../charts/gardener/gardenlet/Chart.yaml) charts have to be bumped during the release process.
+
+Set the version to the same as the gardener version that is to be released, but remove the `v` in front of the version number for a valid semver version.
+
+Once the changes are commited/merged to master, the [Chart releaser action](../../.github/workflows/release-charts.yaml) will release the new version and publish it to the repository at https://gardener.github.io/gardener/.
+

Yes, I think it's alpine. Not sure whether this has become configurable in the meantime.

morremeyer

comment created time in 4 hours

pull request comment gardener/gardener

Added option for switching kube-proxy on/off per shoot.

/assign

ScheererJ

comment created time in 4 hours

pull request comment gardener/gardener

Added option for switching kube-proxy on/off per shoot.

/invite @DockToFuture @rfranzke

ScheererJ

comment created time in 4 hours

pull request comment gardener/gardener

Added option for switching kube-proxy on/off per shoot.

@ScheererJ Thank you for your contribution.

ScheererJ

comment created time in 4 hours

PR opened gardener/gardener

Added option for switching kube-proxy on/off per shoot.

How to categorize this PR? /area networking /kind enhancement

What this PR does / why we need it: This pull request adds a flag to the shoot specification to enable/disable kube-proxy. By default, kube-proxy is enabled, but with this change it can be disabled as well. Depending on the CNI being used, kube-proxy is not necessary. For example, cilium can take over the service routing out of the box. With calico it is possible to run without kube-proxy if the eBPF data plane is used. This change is only the prerequisite for adapting the networking extensions. The shoot specification will be the leading entity to decide whether kube-proxy should be deployed or not.
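For illustration, a hedged sketch of what such a flag could look like and how a nil value could default to "enabled". The exact API field names and placement are assumptions, since the PR diff itself is not shown here:

```go
// Sketch only: a shoot-level kube-proxy switch that defaults to enabled when unset.
// The type and field names are stand-ins for the real shoot API types.
package main

import "fmt"

// kubeProxyConfig is a stand-in for the shoot's kube-proxy configuration section.
type kubeProxyConfig struct {
	// Enabled indicates whether kube-proxy should be deployed (defaults to true when nil).
	Enabled *bool
}

func kubeProxyEnabled(cfg *kubeProxyConfig) bool {
	if cfg == nil || cfg.Enabled == nil {
		return true // default: deploy kube-proxy
	}
	return *cfg.Enabled
}

func main() {
	disabled := false
	fmt.Println(kubeProxyEnabled(nil))                                  // true
	fmt.Println(kubeProxyEnabled(&kubeProxyConfig{Enabled: &disabled})) // false
}
```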

Which issue(s) this PR fixes: None

Special notes for your reviewer:

Release note:
Makes it possible to disable deploying kube-proxy
+1306 -1157

0 comments

18 changed files

pr created time in 4 hours

release gardener/test-infra

0.205.0

released time in 5 hours

push event gardener/gardener

gardener-robot-ci-1

commit sha 9c2a2942a889917f6033cdf5ba264c47a496a5cb

Prepare next Dev Cycle v1.25.2-dev

view details

push time in 5 hours

created tag gardener/gardener

tag v1.25.1

Kubernetes-native system managing the full lifecycle of conformant Kubernetes clusters as a service on Alicloud, AWS, Azure, GCP, OpenStack, Packet, vSphere, MetalStack, and Kubevirt with minimal TCO.

created time in 5 hours

push event gardener/gardener

gardener-robot-ci-1

commit sha b5bbaf8e5ad294dba80241c8beb4fffc73488df2

Release v1.25.1

view details

push time in 5 hours

Pull request review comment gardener/gardener

feat: add helm chart releasing

 v1.33   | Week 39-40  | September 27, 2021     | October 10, 2021   | [@voelzmo]
 Apart from the release of the next version, the release responsible is also taking care of potential hotfix releases of the last three minor versions. The release responsible is the main contact person for coordinating new feature PRs for the next minor versions or cherry-pick PRs for the last three minor versions.
+
+### Release of helm charts
+
+The helm charts are released in lockstep with gardener releases. Check [the discussion in #4123](https://github.com/gardener/gardener/issues/4123) for more information about this decision.
+
+Therefore, the helm chart versions for the [controlplane](../../charts/gardener/controlplane/Chart.yaml) and [gardenlet](../../charts/gardener/gardenlet/Chart.yaml) charts have to be bumped during the release process.
+
+Set the version to the same as the gardener version that is to be released, but remove the `v` in front of the version number for a valid semver version.
+
+Once the changes are commited/merged to master, the [Chart releaser action](../../.github/workflows/release-charts.yaml) will release the new version and publish it to the repository at https://gardener.github.io/gardener/.
+

Do I understand it correctly that those scripts are run in an alpine docker container? If not, which environment do I have to assume?

morremeyer

comment created time in 5 hours

Pull request review comment gardener/gardener

Enh/stable worker hash

 func WorkerPoolHash(pool extensionsv1alpha1.WorkerPool, cluster *extensionscontr
 	for _, w := range cluster.Shoot.Spec.Provider.Workers {
 		if pool.Name == w.Name {
 			if w.CRI != nil {
+				if w.CRI.Name == gardencorev1beta1.CRINameDocker {
+					continue
+				}
			if w.CRI != nil && w.CRI.Name != gardencorev1beta1.CRINameDocker {
voelzmo

comment created time in 5 hours

PR opened gardener/virtual-garden

Landscaper component for virtual-garden

How to categorize this PR? /area delivery /kind enhancement

What this PR does / why we need it: This PR contains the deploy logic for the virtual-garden that can be executed by the Landscaper container deployer.

It also contains an E2E test for the installation and deletion of the virtual garden. The following features are not included in the test:

  • backup bucket
  • hvpa
  • audit webhook
  • seed authorizer

Currently, there is only an implementation of the backup provider for GCP.

Which issue(s) this PR fixes: Fixes #

Special notes for your reviewer:

Release note:
- First implementation of a landscaper based deployment of the virtual-garden.
+48443 -1179

0 comments

394 changed files

pr created time in 5 hours

Pull request review comment gardener/gardener

feat: add helm chart releasing

 v1.33   | Week 39-40  | September 27, 2021     | October 10, 2021   | [@voelzmo]
 Apart from the release of the next version, the release responsible is also taking care of potential hotfix releases of the last three minor versions. The release responsible is the main contact person for coordinating new feature PRs for the next minor versions or cherry-pick PRs for the last three minor versions.
+
+### Release of helm charts
+
+The helm charts are released in lockstep with gardener releases. Check [the discussion in #4123](https://github.com/gardener/gardener/issues/4123) for more information about this decision.
+
+Therefore, the helm chart versions for the [controlplane](../../charts/gardener/controlplane/Chart.yaml) and [gardenlet](../../charts/gardener/gardenlet/Chart.yaml) charts have to be bumped during the release process.
+
+Set the version to the same as the gardener version that is to be released, but remove the `v` in front of the version number for a valid semver version.
+
+Once the changes are commited/merged to master, the [Chart releaser action](../../.github/workflows/release-charts.yaml) will release the new version and publish it to the repository at https://gardener.github.io/gardener/.
+

I tried to find out what triggers the „release commit“ in g/g,

This is implemented deep down in the https://github.com/gardener/cc-utils repository; @ccwienk or @AndreasBurger can explain it better.

Can you point me to where I need to incorporate that change?

The only thing you need to provide is such a prepare_release script which performs your changes. Once the PR is merged, a Gardener maintainer has to enhance the pipeline definitions to make the CI/CD system execute your prepare_release script when a new Gardener release is triggered.
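For illustration, the version bump such a prepare_release step has to perform boils down to writing the gardener release version, without the leading "v", into the two Chart.yaml files. A rough Go sketch under those assumptions; the file paths, Chart.yaml layout, and example version are assumptions, and re-marshalling is naive (it drops comments and key order):

```go
// Rough sketch of the chart version bump described above. Illustrative only.
package main

import (
	"fmt"
	"os"
	"strings"

	"gopkg.in/yaml.v2"
)

func bumpChartVersion(chartPath, gardenerVersion string) error {
	data, err := os.ReadFile(chartPath)
	if err != nil {
		return err
	}
	var chart map[string]interface{}
	if err := yaml.Unmarshal(data, &chart); err != nil {
		return err
	}
	// Chart versions must be valid semver, hence the leading "v" is stripped.
	chart["version"] = strings.TrimPrefix(gardenerVersion, "v")
	out, err := yaml.Marshal(chart)
	if err != nil {
		return err
	}
	return os.WriteFile(chartPath, out, 0o644)
}

func main() {
	for _, p := range []string{"charts/gardener/controlplane/Chart.yaml", "charts/gardener/gardenlet/Chart.yaml"} {
		if err := bumpChartVersion(p, "v1.26.0"); err != nil { // example version
			fmt.Fprintln(os.Stderr, err)
		}
	}
}
```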

morremeyer

comment created time in 5 hours

push event gardener/gardener

Vladimir Nachev

commit sha 1030e1326cc23936c23e34463a26afc26cbd19cc

RequestLimitExceeded is now treated as ERR_INFRA_RATE_LIMITS_EXCEEDED (#4256)
Co-authored-by: Rafael Franzke <rafael.franzke@sap.com>

view details

push time in 5 hours