
composer-php52/composer-php52 15

PHP 5.2 backwards compatibility for composer

xrstf/hosty 11

Minimal text/image/file selfhosting app written in Go

loodse/prow-dashboards 5

A collection of Prometheus rules and Grafana dashboards for monitoring Prow clusters

xrstf/babelcache 2

A slim caching framework that can talk to XCache/APC/Memcache/eAccelerator/Filesystem/ZendServer. It supports namespaces and partial flushes for all systems. Requires PHP 5.1 or greater.

xrstf/bloaty 1

Continuously allocates memory over time.

xrstf/boxer 1

NaCl in Go for humans

xrstf/cpe-net 1

Experimental neural network for string matching

xrstf/github_exporter 1

Prometheus GitHub exporter with a focus on Pull Request/Issue/Milestone metrics

xrstf/grafana-influxdb-monitor 1

Basic setup for a super simple Docker-based logging service

PullRequestReviewEvent

Pull request review comment kubermatic/kubermatic

Improve initial node deployment creation

 func (r *Reconciler) Reconcile(request reconcile.Request) (reconcile.Result, err
 }
 
 func (r *Reconciler) reconcile(ctx context.Context, cluster *kubermaticv1.Cluster) (*reconcile.Result, error) {
-
+	// If cluster is not healthy yet there is nothing to do.
+	// If it gets healthy we'll get notified by the event. No need to requeue.
 	if !cluster.Status.ExtendedHealth.AllHealthy() {
-		// Cluster not healthy yet. Nothing to do.
-		// If it gets healthy we'll get notified by the event. No need to requeue
 		return nil, nil
 	}
 
-	clusterType := v1.KubernetesClusterType
-	if cluster.IsOpenshift() {
-		clusterType = v1.OpenShiftClusterType
+	clusterType := getClusterType(cluster)
+
+	// Check if cluster has non-empty initial machine deployment request serialized
+	// into its annotations and create it if its there.
+	if request, ok := cluster.Annotations[v1.InitialMachineDeploymentRequestAnnotation]; ok && request != "" {
+		var nodeDeployment *v1.NodeDeployment
+		if err := json.Unmarshal([]byte(request), &nodeDeployment); err != nil {
+			return nil, fmt.Errorf("cannot unmarshal initial machine deployment request for %s cluster: %v", cluster.Name, err)
+		}
+
+		if !r.isInitialMachineDeploymentCreated() {
+			if err := r.createInitialMachineDeployment(nodeDeployment); err != nil {
+				return nil, fmt.Errorf("cannot create initial machine deployment for %s cluster: %v", cluster.Name, err)
+			} else {
+				r.recorder.Eventf(cluster, corev1.EventTypeNormal, "InitialMachineDeployment", "Created initial machine deployment")
+				if err := r.removeInitialMachineDeploymentRequestAnnotation(cluster); err != nil {
+					return nil, fmt.Errorf("cannot remove initial machine deployment request from %s cluster: %v", cluster.Name, err)
+				}
+			}
+		} else {
+			if err := r.removeInitialMachineDeploymentRequestAnnotation(cluster); err != nil {
+				return nil, fmt.Errorf("cannot remove initial machine deployment request from %s cluster: %v", cluster.Name, err)
+			}
+		}

What made you choose this controller to add your logic to? Because it deals with nodes during the cluster upgrades? What would you say to writing a dedicated controller just for this purpose (turning the initial-node annotation into a MachineDeployment)?
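
For illustration, such a dedicated controller would need very little logic: read the annotation, unmarshal it, create the deployment, drop the annotation. Below is a minimal, self-contained Go sketch of that shape; the annotation key and the types are made up for the example, and this is not the actual KKP implementation.

package main

import (
	"encoding/json"
	"fmt"
)

// NodeDeployment is a stand-in for the real KKP API type.
type NodeDeployment struct {
	Name     string `json:"name"`
	Replicas int32  `json:"replicas"`
}

// reconcileInitialNodes shows the core idea: if the cluster carries a serialized
// initial node deployment in an annotation, decode it and hand it to a create
// function; afterwards the annotation would be removed by the caller.
func reconcileInitialNodes(annotations map[string]string, annotationKey string, create func(NodeDeployment) error) error {
	raw, ok := annotations[annotationKey]
	if !ok || raw == "" {
		return nil // nothing requested, nothing to do
	}

	var nd NodeDeployment
	if err := json.Unmarshal([]byte(raw), &nd); err != nil {
		return fmt.Errorf("cannot unmarshal initial node deployment request: %v", err)
	}

	return create(nd)
}

func main() {
	// hypothetical annotation key, for illustration only
	const key = "kubermatic.io/initial-machinedeployment-request"

	annotations := map[string]string{
		key: `{"name":"workers","replicas":3}`,
	}

	err := reconcileInitialNodes(annotations, key, func(nd NodeDeployment) error {
		fmt.Printf("would create MachineDeployment %q with %d replicas\n", nd.Name, nd.Replicas)
		return nil
	})
	if err != nil {
		fmt.Println("error:", err)
	}
}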

maciaszczykm

comment created time in a day

PullRequestReviewEvent

Pull request review comment kubermatic/kubermatic

Improve initial node deployment creation

 func (r *Reconciler) Reconcile(request reconcile.Request) (reconcile.Result, err
 }
 
 func (r *Reconciler) reconcile(ctx context.Context, cluster *kubermaticv1.Cluster) (*reconcile.Result, error) {
-
+	// If cluster is not healthy yet there is nothing to do.
+	// If it gets healthy we'll get notified by the event. No need to requeue.
 	if !cluster.Status.ExtendedHealth.AllHealthy() {
-		// Cluster not healthy yet. Nothing to do.
-		// If it gets healthy we'll get notified by the event. No need to requeue
 		return nil, nil
 	}
 
-	clusterType := v1.KubernetesClusterType
-	if cluster.IsOpenshift() {
-		clusterType = v1.OpenShiftClusterType
+	clusterType := getClusterType(cluster)
+
+	// Check if cluster has non-empty initial machine deployment request serialized
+	// into its annotations and create it if its there.

it's

maciaszczykm

comment created time in a day

PullRequestReviewEvent
PullRequestReviewEvent

delete branch xrstf/kubermatic

delete branch : addons-image-tag

delete time in 2 days

pull request commentkubermatic/kubermatic

Allow to adjust the Docker image tag for addons

/cherrypick release/v2.15

xrstf

comment created time in 2 days

created tag xrstf/aws_exporter

tag v0.1.0

Very basic AWS exporter

created time in 2 days

push event xrstf/aws_exporter

xrstf

commit sha b3076ba484a88605c928375b4e1e60c0ee3fbe6d

version 0.1.0

view details

xrstf

commit sha fd7e58ef589dc3717ca9cfb9126899e0d9c5e46e

back to dev

view details

push time in 2 days

push event xrstf/aws_exporter

xrstf

commit sha 6fe34e9d0392d9861b213c3af8f5d02fe87f0139

go mod tidy

view details

push time in 2 days

push event xrstf/aws_exporter

xrstf

commit sha 667887c43eaaebdcb099987e417aa66d07c0342a

add Github Actions

view details

push time in 2 days

push event xrstf/aws_exporter

xrstf

commit sha 6c7574a24dea1ffefbc2cdc5e730b4b977d108ea

use goroutines

view details

push time in 2 days

create branch xrstf/aws_exporter

branch : master

created branch time in 2 days

created repository xrstf/aws_exporter

Very basic AWS exporter

created time in 2 days

GollumEvent
GollumEvent

pull request comment kubermatic/kubermatic

Constraint template controller

This comes from the proposal: https://github.com/kubermatic/kubermatic/blob/master/docs/proposals/opa-integration.md#implementation

I'm sorry, I must have missed that during review, but

High level overview on how it would work is that a seed(master) cluster has a list of some default ConstraintTemplates deployed

(emphasis mine) is not really clear as to where things are living.

Can you explain how this code violates the entire KKP architecture?

The master cluster manages the seeds, the seeds then manage the user clusters. We have Seeds to keep the latency between control planes and the worker nodes small, and to spread the control plane load across multiple seed clusters. As detailed in https://docs.kubermatic.com/kubermatic/v2.15/concepts/architecture/

projects, users, RBACs, extern cluster, datacenters, global settings, user settings.

Not 100% sure about the RBAC stuff, but all the rest has nothing to do with reconciling user clusters. External clusters do not have a control plane attached, so they don't impose any load (yet; still unsure about the future); projects and users do not trigger reconciling, and neither do global settings or user settings.

But all other things that we manage inside the user cluster are managed on the Seed (in either the seed-ctrlmgr or the usercluster-ctrlmgr).

lsviben

comment created time in 2 days

pull request comment kubermatic/kubermatic

Constraint template controller

First, the Kubermatic admin installs ConstraintTemplates on the master.

Why? Apparently they influence User Clusters, so why not create them on Seeds?

They are global and managed by the admin in the admin panel or manually.

So the admin panel (i.e. the Kubermatic API) can then just create them in each Seed. Or have a controller that just syncs them over (we already have https://github.com/kubermatic/kubermatic/tree/master/pkg/controller/master-controller-manager/seed-sync).
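
As a rough sketch of that sync idea (with stand-in types instead of the real CRDs and cluster clients; the existing seed-sync controller differs in detail): every ConstraintTemplate created on the master is simply upserted into each seed, and the seed-controller-managers handle the user clusters from there.

package main

import "fmt"

// ConstraintTemplate stands in for the real CRD.
type ConstraintTemplate struct {
	Name string
	Spec string
}

// Client is a minimal stand-in for a per-seed cluster client.
type Client interface {
	Upsert(ct ConstraintTemplate) error
}

// syncToSeeds copies a master-level ConstraintTemplate into every seed,
// so each seed-controller-manager can reconcile its own user clusters.
func syncToSeeds(ct ConstraintTemplate, seeds map[string]Client) error {
	for name, seed := range seeds {
		if err := seed.Upsert(ct); err != nil {
			return fmt.Errorf("failed to sync ConstraintTemplate %q to seed %q: %v", ct.Name, name, err)
		}
	}
	return nil
}

type printClient struct{ seed string }

func (p printClient) Upsert(ct ConstraintTemplate) error {
	fmt.Printf("seed %s: upserting ConstraintTemplate %s\n", p.seed, ct.Name)
	return nil
}

func main() {
	seeds := map[string]Client{
		"europe-west": printClient{"europe-west"},
		"us-east":     printClient{"us-east"},
	}
	_ = syncToSeeds(ConstraintTemplate{Name: "k8srequiredlabels"}, seeds)
}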

The goal for this controller is to manage a resource which is installed on the master, so it's very natural to have this logic on the master.

By that logic we can just get rid of Seeds entirely and manage everything from the master, no? We have Seeds to spread out the reconciling logic and keep latencies short. Keeping a map of connections to each and every user cluster in every Seed is just not the best approach.

I appreciate that this "is easier", but that's not a good reason to violate the entire KKP architecture.

lsviben

comment created time in 2 days

PullRequestReviewEvent

Pull request review comment kubermatic/kubermatic

Constraint template controller

+/*
+Copyright 2020 The Kubermatic Kubernetes Platform contributors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package constrainttemplatecontroller
+
+import (
+	"context"
+	"fmt"
+	"net"
+	"os"
+	"time"
+
+	"github.com/open-policy-agent/frameworks/constraint/pkg/apis/templates/v1beta1"
+	"go.uber.org/zap"
+
+	kubermaticapiv1 "k8c.io/kubermatic/v2/pkg/api/v1"
+	clusterclient "k8c.io/kubermatic/v2/pkg/cluster/client"
+	controllerutil "k8c.io/kubermatic/v2/pkg/controller/util"
+	kubermaticv1 "k8c.io/kubermatic/v2/pkg/crd/kubermatic/v1"
+	kuberneteshelper "k8c.io/kubermatic/v2/pkg/kubernetes"
+	"k8c.io/kubermatic/v2/pkg/resources/reconciling"
+	"k8c.io/kubermatic/v2/pkg/util/workerlabel"
+
+	kerrors "k8s.io/apimachinery/pkg/api/errors"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+	"k8s.io/apimachinery/pkg/labels"
+	"k8s.io/apimachinery/pkg/types"
+	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
+	"k8s.io/client-go/rest"
+	"sigs.k8s.io/controller-runtime/pkg/client"
+	ctrlruntimeclient "sigs.k8s.io/controller-runtime/pkg/client"
+	"sigs.k8s.io/controller-runtime/pkg/controller"
+	"sigs.k8s.io/controller-runtime/pkg/handler"
+	"sigs.k8s.io/controller-runtime/pkg/manager"
+	"sigs.k8s.io/controller-runtime/pkg/reconcile"
+	"sigs.k8s.io/controller-runtime/pkg/source"
+)
+
+const (
+	// This controller syncs the kubermatic constraint templates to gatekeeper constraint templates on the user cluster.
+	ControllerName = "gatekeeper_constraint_template_controller"
+)
+
+// UserClusterClientProvider provides functionality to get a user cluster client
+type UserClusterClientProvider interface {
+	GetClient(c *kubermaticv1.Cluster, options ...clusterclient.ConfigOption) (ctrlruntimeclient.Client, error)
+}
+
+type reconciler struct {
+	ctx                     context.Context
+	log                     *zap.SugaredLogger
+	workerNameLabelSelector labels.Selector
+	masterClient            ctrlruntimeclient.Client
+	seedClientProviders     map[string]*SeedClientProvider
+	userClusterClients      map[string]ctrlruntimeclient.Client
+}
+
+type SeedClientProvider struct {
+	seedClient                ctrlruntimeclient.Client
+	userClusterClientProvider UserClusterClientProvider
+}
+
+func Add(ctx context.Context,
+	mgr manager.Manager,
+	seedManagers map[string]manager.Manager,
+	log *zap.SugaredLogger,
+	workerName string,
+	numWorkers int) error {
+
+	workerSelector, err := workerlabel.LabelSelector(workerName)
+	if err != nil {
+		return fmt.Errorf("failed to build worker-name selector: %v", err)
+	}
+
+	reconciler := &reconciler{
+		ctx:                     ctx,
+		log:                     log.Named(ControllerName),
+		workerNameLabelSelector: workerSelector,
+		masterClient:            mgr.GetClient(),
+		seedClientProviders:     map[string]*SeedClientProvider{},
+		userClusterClients:      map[string]ctrlruntimeclient.Client{},
+	}
+
+	c, err := controller.New(ControllerName, mgr, controller.Options{Reconciler: reconciler, MaxConcurrentReconciles: numWorkers})
+	if err != nil {
+		return fmt.Errorf("failed to construct controller: %v", err)
+	}
+
+	for seedName, seedManager := range seedManagers {
+		seedClient := seedManager.GetClient()
+
+		userClusterClientProvider, err := getUserClusterClientProvider(seedManager)
+		if err != nil {
+			return fmt.Errorf("failed to get cluster client provider for seed %s: %w", seedName, err)
+		}
+		reconciler.seedClientProviders[seedName] = &SeedClientProvider{
+			seedClient:                seedClient,
+			userClusterClientProvider: userClusterClientProvider,
+		}
+
+		clusterSource := &source.Kind{Type: &kubermaticv1.Cluster{}}
+		if err := clusterSource.InjectCache(mgr.GetCache()); err != nil {
+			return fmt.Errorf("failed to inject cache into clusterSource for seed %s: %v", seedName, err)
+		}
+		if err := c.Watch(
+			clusterSource,
+			enqueueAllConstraintTemplates(reconciler.masterClient, reconciler.log),
+			workerlabel.Predicates(workerName),
+		); err != nil {
+			return fmt.Errorf("failed to establish watch for clusters in seed %s: %v", seedName, err)
+		}
+	}
+
+	if err := c.Watch(
+		&source.Kind{Type: &kubermaticv1.ConstraintTemplate{}},
+		&handler.EnqueueRequestForObject{},
+	); err != nil {
+		return fmt.Errorf("failed to create watch for constraintTemplates: %v", err)
+	}
+
+	return nil
+}
+
+// Reconcile reconciles the kubermatic constraint template on the master cluster to all seed cluster user clusters
+// which have opa integration enabled
+func (r *reconciler) Reconcile(request reconcile.Request) (reconcile.Result, error) {
+	log := r.log.With("ConstraintTemplate", request)

Please call this field request to be consistent with other controllers.

lsviben

comment created time in 2 days

Pull request review comment kubermatic/kubermatic

Constraint template controller

+/*
+Copyright 2020 The Kubermatic Kubernetes Platform contributors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package constrainttemplatecontroller
+
+import (
+	"context"
+	"fmt"
+	"net"
+	"os"
+	"time"
+
+	"github.com/open-policy-agent/frameworks/constraint/pkg/apis/templates/v1beta1"
+	"go.uber.org/zap"
+
+	kubermaticapiv1 "k8c.io/kubermatic/v2/pkg/api/v1"
+	clusterclient "k8c.io/kubermatic/v2/pkg/cluster/client"
+	controllerutil "k8c.io/kubermatic/v2/pkg/controller/util"
+	kubermaticv1 "k8c.io/kubermatic/v2/pkg/crd/kubermatic/v1"
+	kuberneteshelper "k8c.io/kubermatic/v2/pkg/kubernetes"
+	"k8c.io/kubermatic/v2/pkg/resources/reconciling"
+	"k8c.io/kubermatic/v2/pkg/util/workerlabel"
+
+	kerrors "k8s.io/apimachinery/pkg/api/errors"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+	"k8s.io/apimachinery/pkg/labels"
+	"k8s.io/apimachinery/pkg/types"
+	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
+	"k8s.io/client-go/rest"
+	"sigs.k8s.io/controller-runtime/pkg/client"
+	ctrlruntimeclient "sigs.k8s.io/controller-runtime/pkg/client"
+	"sigs.k8s.io/controller-runtime/pkg/controller"
+	"sigs.k8s.io/controller-runtime/pkg/handler"
+	"sigs.k8s.io/controller-runtime/pkg/manager"
+	"sigs.k8s.io/controller-runtime/pkg/reconcile"
+	"sigs.k8s.io/controller-runtime/pkg/source"
+)
+
+const (
+	// This controller syncs the kubermatic constraint templates to gatekeeper constraint templates on the user cluster.
+	ControllerName = "gatekeeper_constraint_template_controller"
+)
+
+// UserClusterClientProvider provides functionality to get a user cluster client
+type UserClusterClientProvider interface {
+	GetClient(c *kubermaticv1.Cluster, options ...clusterclient.ConfigOption) (ctrlruntimeclient.Client, error)
+}
+
+type reconciler struct {
+	ctx                     context.Context
+	log                     *zap.SugaredLogger
+	workerNameLabelSelector labels.Selector
+	masterClient            ctrlruntimeclient.Client
+	seedClientProviders     map[string]*SeedClientProvider
+	userClusterClients      map[string]ctrlruntimeclient.Client
+}
+
+type SeedClientProvider struct {
+	seedClient                ctrlruntimeclient.Client
+	userClusterClientProvider UserClusterClientProvider
+}
+
+func Add(ctx context.Context,
+	mgr manager.Manager,
+	seedManagers map[string]manager.Manager,
+	log *zap.SugaredLogger,
+	workerName string,
+	numWorkers int) error {
+
+	workerSelector, err := workerlabel.LabelSelector(workerName)
+	if err != nil {
+		return fmt.Errorf("failed to build worker-name selector: %v", err)
+	}
+
+	reconciler := &reconciler{
+		ctx:                     ctx,
+		log:                     log.Named(ControllerName),
+		workerNameLabelSelector: workerSelector,
+		masterClient:            mgr.GetClient(),
+		seedClientProviders:     map[string]*SeedClientProvider{},
+		userClusterClients:      map[string]ctrlruntimeclient.Client{},
+	}
+
+	c, err := controller.New(ControllerName, mgr, controller.Options{Reconciler: reconciler, MaxConcurrentReconciles: numWorkers})
+	if err != nil {
+		return fmt.Errorf("failed to construct controller: %v", err)
+	}
+
+	for seedName, seedManager := range seedManagers {
+		seedClient := seedManager.GetClient()
+
+		userClusterClientProvider, err := getUserClusterClientProvider(seedManager)
+		if err != nil {
+			return fmt.Errorf("failed to get cluster client provider for seed %s: %w", seedName, err)
+		}
+		reconciler.seedClientProviders[seedName] = &SeedClientProvider{
+			seedClient:                seedClient,
+			userClusterClientProvider: userClusterClientProvider,
+		}
+
+		clusterSource := &source.Kind{Type: &kubermaticv1.Cluster{}}
+		if err := clusterSource.InjectCache(mgr.GetCache()); err != nil {
+			return fmt.Errorf("failed to inject cache into clusterSource for seed %s: %v", seedName, err)
+		}
+		if err := c.Watch(
+			clusterSource,
+			enqueueAllConstraintTemplates(reconciler.masterClient, reconciler.log),
+			workerlabel.Predicates(workerName),
+		); err != nil {
+			return fmt.Errorf("failed to establish watch for clusters in seed %s: %v", seedName, err)
+		}
+	}
+
+	if err := c.Watch(
+		&source.Kind{Type: &kubermaticv1.ConstraintTemplate{}},
+		&handler.EnqueueRequestForObject{},
+	); err != nil {
+		return fmt.Errorf("failed to create watch for constraintTemplates: %v", err)
+	}
+
+	return nil
+}
+
+// Reconcile reconciles the kubermatic constraint template on the master cluster to all seed cluster user clusters
+// which have opa integration enabled
+func (r *reconciler) Reconcile(request reconcile.Request) (reconcile.Result, error) {
+	log := r.log.With("ConstraintTemplate", request)
+	log.Debug("Reconciling")
+
+	constraintTemplate := &kubermaticv1.ConstraintTemplate{}
+	if err := r.masterClient.Get(r.ctx, request.NamespacedName, constraintTemplate); err != nil {
+		if kerrors.IsNotFound(err) {
+			log.Debugw("constraint template not found, returning", "constraint-name", constraintTemplate.Name)

If it's not found, then constraintTemplate will be empty. Also, the name has already been assigned to the log in the r.log.With() line, no need to add it here again.
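
To make that concrete, here is a tiny runnable example using zap's example logger (not the KKP logging setup): a field attached once via With() is emitted with every entry logged through the derived logger, so repeating it in Debugw is redundant.

package main

import "go.uber.org/zap"

func main() {
	log := zap.NewExample().Sugar()

	// attach the request/name once...
	ctLog := log.With("ConstraintTemplate", "k8srequiredlabels")

	// ...and every entry below already carries it, no extra key needed
	ctLog.Debugw("constraint template not found, returning")
	ctLog.Debugw("Reconciling")
}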

lsviben

comment created time in 2 days

PullRequestReviewEvent

PR opened kubermatic/kubermatic

add 2.15.2 changelog

What this PR does / why we need it: What it says on the tin.

Does this PR introduce a user-facing change?:

NONE
+12 -0

0 comment

1 changed file

pr created time in 3 days

push event xrstf/kubermatic

Christoph Mewes

commit sha 1332fbc1ec8bb9e9bfb030fc6ccaa71bebf0c421

add 2.15.2 changelog

view details

push time in 3 days

create branch xrstf/kubermatic

branch : 2152-changelog

created branch time in 3 days

created tag kubermatic/kubermatic

tag v2.15.2

Kubermatic Kubernetes Platform - the Central Kubernetes Management Platform For Any Infrastructure

created time in 3 days

delete branch xrstf/kubermatic

delete branch : kind-canary-deployments

delete time in 3 days

pull request comment kubermatic/kubermatic

Split kind/Kubermatic logic during CI

/cherrypick release/v2.15

xrstf

comment created time in 3 days

created tag kubermatic/dashboard

tag v2.15.2

Dashboard For The Kubermatic Kubernetes Platform

created time in 3 days

pull request comment kubermatic/kubermatic

Split kind/Kubermatic logic during CI

/retest

xrstf

comment created time in 3 days

PullRequestReviewEvent

delete branch xrstf/kubermatic

delete branch : convert-serviceaccounts-command

delete time in 3 days

pull request comment kubermatic/kubermatic

Split kind/Kubermatic logic during CI

/cherrpick release/v2.15

xrstf

comment created time in 3 days

push event xrstf/kubermatic

Christoph Mewes

commit sha 147d86d2ebcd1324c621c1bc8cce0e139f5af4ee

PR feedback

view details

push time in 3 days

Pull request review comment kubermatic/kubermatic

Split kind/Kubermatic logic during CI

 else
 fi
 pushElapsed kind_kubermatic_setup_duration_milliseconds $beforeKubermaticSetup
 
-echodate "Done setting up Kubermatic in kind"
-
 echodate "Running conformance tests"
-KUBERMATIC_NO_WORKER_NAME=true ./hack/ci/run-conformance-tests.sh

No, it was only used in the one if statement that I removed in this PR as well.

xrstf

comment created time in 3 days

PullRequestReviewEvent

pull request comment kubermatic/kubermatic

Add convert-kubeconfig command to installer

/cherrypick release/v2.15

xrstf

comment created time in 3 days

delete branch xrstf/kubermatic-docs

delete branch : fix-link

delete time in 3 days

PR opened kubermatic/kubermatic

Allow to adjust the Docker image tag for addons

What this PR does / why we need it: This is a proposal to help with the current problem surrounding addons in KKP.

It is currently very hard to impossible to develop custom addons, as new Docker images cannot be easily referred to in user clusters. This is because the KKP Operator does not allow changing any Docker tags: it wants to keep its own internal migration logic simple, so it does not allow overriding the KKP version. However, this has significant fallout for the addons, where there are legitimate use cases for overriding the Docker image.

This proposal tries to combine the best of both worlds. We still want to make sure admins don't accidentally upgrade KKP and continue to use their old addons image (which would be possible if we allowed the image tag to be overridden completely), but we also want to allow "bumping" the tag to make development easier.

To achieve this, this PR simply introduces the concept of a "version suffix" that is, if set, appended to the KKP version.
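
As a minimal sketch of the resulting tag computation (function and argument names are made up for illustration, not the actual KKP code):

package main

import "fmt"

// addonsImageTag appends an optional suffix to the KKP version. The suffix
// only extends the tag, it never replaces the version itself.
func addonsImageTag(kkpVersion, suffix string) string {
	if suffix == "" {
		return kkpVersion
	}
	return kkpVersion + "-" + suffix
}

func main() {
	fmt.Println(addonsImageTag("v2.15.2", ""))        // v2.15.2
	fmt.Println(addonsImageTag("v2.15.2", "mydev-1")) // v2.15.2-mydev-1
}

An empty suffix keeps today's behaviour, so the Operator's own upgrade handling is unaffected.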

Does this PR introduce a user-facing change?:

Allow to customize the Docker image tag for Cluster Addons
+26 -6

0 comment

3 changed files

pr created time in 3 days

create branch xrstf/kubermatic

branch : addons-image-tag

created branch time in 3 days

issue comment kubermatic/kubermatic

Seed Controller Manager - imagePullPolicy for Addons needs to be "Always"

The pull policy doesn't need to be Always. There are other, possibly better options to solve this.

toschneck

comment created time in 3 days

PR opened kubermatic/docs

fix links to serviceaccount helper script
+4 -4

0 comment

4 changed files

pr created time in 3 days

create branch xrstf/kubermatic-docs

branch : fix-link

created branch time in 3 days

PullRequestReviewEvent
PullRequestReviewEvent

started pterm/pterm

started time in 4 days

push event xrstf/kubermatic-docs

Christoph Mewes

commit sha cbf5279e2a413893fa31f27cbe49a23498df719c

update 2.13 docs

view details

push time in 4 days

push event xrstf/kubermatic-docs

Christoph Mewes

commit sha 4c48de0d6e2b3fe5b284c71d9e4cb51d9161f244

update 2.14 docs

view details

push time in 4 days

PullRequestReviewEvent
PullRequestReviewEvent

Pull request review comment kubermatic/docs

Kubeone apidoc regen

++++
+title = "Custom Registry"
+date = 2020-10-27T12:00:00+02:00
+weight = 4
++++
+
+## RegistryConfiguration
+
+It's possible to configure the central docker image registry to pull images from using `registryConfiguration` config.

pull images from using

kron4eg

comment created time in 4 days

PullRequestReviewEvent

pull request comment kubermatic/kubermatic

add anexia provider interface

/retest

zreigz

comment created time in 4 days

PR opened kubermatic/kubermatic

Fix GitHub Release

What this PR does / why we need it: When backporting the changes from 2.15, I overlooked that the paths are different in 2.14. This PR fixes that.

This time I ran the script locally to produce the binaries for the 2.14.8 release.

Does this PR introduce a user-facing change?:

NONE
+7 -7

0 comment

1 changed file

pr created time in 4 days

create branch xrstf/kubermatic

branch : fix-214-release

created branch time in 4 days

PullRequestReviewEvent
PullRequestReviewEvent
PullRequestReviewEvent

Pull request review comment kubermatic/kubeone

Add the image-loader Bash script

+#!/usr/bin/env bash
+
+# Copyright 2019 The KubeOne Authors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# The image-loader script is used to pull all Docker images used by KubeOne,
+# Kubeadm, and Kubernetes, and push them to the the specified private registry.
+
+# WARNING: This script heavily depends on KubeOne and Kubernetes versions.
+# You must use the script coming the KubeOne release you've downloaded.
+
+set -euo pipefail
+
+KUBERNETES_VERSION=${KUBERNETES_VERSION:-}
+
+TARGET_REGISTRY=${TARGET_REGISTRY:-127.0.0.1:5000}
+PULL_OPTIONAL_IMAGES=${PULL_OPTIONAL_IMAGES:-true}
+
+# Wrapper around echo to include time
+function echodate() {
+  # do not use -Is to keep this compatible with macOS
+  echo "[$(date +%Y-%m-%dT%H:%M:%S%:z)]" "$@"
+}
+
+function fail() {
+  echodate "$@"
+  exit 1
+}
+
+function retag() {
+  local image="$1"
+
+  # Trim registry
+  local local_image
+  local name
+  local tag
+
+  local_image="$(echo "${image}" | cut -d/ -f1 --complement)"
+  # Split into name and tag
+  name="$(echo "${local_image}" | cut -d: -f1)"
+  tag="$(echo "${local_image}" | cut -d: -f2)"
+
+  # Build target image name
+  local target_image="${TARGET_REGISTRY}/${name}:${tag}"
+
+  echodate "Retagging \"${image}\" => \"${target_image}\"..."
+
+  docker pull "${image}"
+  docker tag "${image}" "${target_image}"
+  docker push "${target_image}"
+
+  echodate "Done retagging \"${image}\"."
+}
+
+# The script is only supported on Linux because it depends on Kubeadm.
+# You can run this script in a Docker container.
+if [[ "$OSTYPE" != "linux-gnu"* ]]; then

Oh nevermind, this seems to be predefined somewhere. I only ever saw people explicitly calling uname to get the OS.

xmudrii

comment created time in 4 days

PullRequestReviewEvent

Pull request review comment kubermatic/kubeone

Add the image-loader Bash script

+#!/usr/bin/env bash
+
+# Copyright 2019 The KubeOne Authors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# The image-loader script is used to pull all Docker images used by KubeOne,
+# Kubeadm, and Kubernetes, and push them to the the specified private registry.
+
+# WARNING: This script heavily depends on KubeOne and Kubernetes versions.
+# You must use the script coming the KubeOne release you've downloaded.
+
+set -euo pipefail
+
+KUBERNETES_VERSION=${KUBERNETES_VERSION:-}
+
+TARGET_REGISTRY=${TARGET_REGISTRY:-127.0.0.1:5000}
+PULL_OPTIONAL_IMAGES=${PULL_OPTIONAL_IMAGES:-true}
+
+# Wrapper around echo to include time
+function echodate() {
+  # do not use -Is to keep this compatible with macOS
+  echo "[$(date +%Y-%m-%dT%H:%M:%S%:z)]" "$@"
+}
+
+function fail() {
+  echodate "$@"
+  exit 1
+}
+
+function retag() {
+  local image="$1"
+
+  # Trim registry
+  local local_image
+  local name
+  local tag
+
+  local_image="$(echo "${image}" | cut -d/ -f1 --complement)"
+  # Split into name and tag
+  name="$(echo "${local_image}" | cut -d: -f1)"
+  tag="$(echo "${local_image}" | cut -d: -f2)"
+
+  # Build target image name
+  local target_image="${TARGET_REGISTRY}/${name}:${tag}"
+
+  echodate "Retagging \"${image}\" => \"${target_image}\"..."
+
+  docker pull "${image}"
+  docker tag "${image}" "${target_image}"
+  docker push "${target_image}"
+
+  echodate "Done retagging \"${image}\"."
+}
+
+# The script is only supported on Linux because it depends on Kubeadm.
+# You can run this script in a Docker container.
+if [[ "$OSTYPE" != "linux-gnu"* ]]; then

This will error if the variable is not defined, no?

xmudrii

comment created time in 4 days

PullRequestReviewEvent

Pull request review comment kubermatic/kubeone

Add the image-loader Bash script

+#!/usr/bin/env bash
+
+# Copyright 2019 The KubeOne Authors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# The image-loader script is used to pull all Docker images used by KubeOne,
+# Kubeadm, and Kubernetes, and push them to the the specified private registry.

the the

xmudrii

comment created time in 4 days

PullRequestReviewEvent

delete branch xrstf/kubermatic

delete branch : make-cert-manager-optional

delete time in 4 days

delete branch xrstf/kubermatic

delete branch : remove-cmd-util

delete time in 4 days

delete branch xrstf/kubermatic

delete branch : backport-6043

delete time in 4 days

delete branch xrstf/kubermatic

delete branch : add-changelogs

delete time in 4 days

delete branch xrstf/kubermatic

delete branch : fix-github-release

delete time in 4 days

delete branch xrstf/kubermatic

delete branch : image-loader-backports-213

delete time in 4 days

delete branch xrstf/kubermatic

delete branch : fix-offline-deployment

delete time in 4 days

delete branch xrstf/kubermatic

delete branch : backport-6066-215

delete time in 4 days

delete branch xrstf/kubermatic

delete branch : helm-3-3-4

delete time in 4 days

delete branch xrstf/kubermatic

delete branch : backport-6063-214

delete time in 4 days

delete branch xrstf/kubermatic

delete branch : backport-6063-213

delete time in 4 days

delete branch xrstf/kubermatic

delete branch : backport-6052-213

delete time in 4 days

delete branch xrstf/kubermatic

delete branch : backport-6052-214

delete time in 4 days

delete branch xrstf/kubermatic

delete branch : backport-6092-214

delete time in 4 days

created tag kubermatic/kubermatic

tag v2.14.8

Kubermatic Kubernetes Platform - the Central Kubernetes Management Platform For Any Infrastructure

created time in 4 days

push event xrstf/kubermatic-docs

Christoph Mewes

commit sha a7173986a6f53e651ab394de25bc1deb506bc13c

apply the same changes to the 2.15 docs

view details

push time in 4 days

push event xrstf/kubermatic-docs

Christoph Mewes

commit sha a1bd8abe1071c5be1f528a096c69eaf63e6b50d1

mention KubermaticConfiguration

view details

push time in 4 days

PR opened kubermatic/docs

WIP - Improve image-loader Docs

The image-loader in KKP has been extended and the documentation should reflect that.

+97 -19

0 comment

10 changed files

pr created time in 4 days

create branch xrstf/kubermatic-docs

branch : image-loader

created branch time in 4 days

pull request comment kubermatic/kubermatic

Update LICENSE correct company

/retest

xmulligan

comment created time in 4 days

Pull request review comment kubermatic/kubermatic

[release/v2.14] ship image-loader as new archive for each GitHub release (#6092)

 set -euo pipefail
 cd $(dirname $0)/../../..
 source api/hack/lib.sh
 
-GITHUB_TOKEN="${GITHUB_TOKEN:-$(cat /etc/github/oauth | tr -d '\n')}"
+DRY_RUN=${DRY_RUN:-false}
 
-# err can be used to print logs to stderr
-err(){
-  echo "E: $*" >>/dev/stderr
-}
+GITHUB_TOKEN="${GITHUB_TOKEN:-$(cat /etc/github/oauth | tr --delete '\n')}"
+export GITHUB_AUTH="Authorization: token $GITHUB_TOKEN"
+
+# this stops execution when GIT_TAG is not overriden and
+# we are not on a tagged revision
+export GIT_TAG="${GIT_TAG:-$(git describe --tags --exact-match)}"
+export GIT_BRANCH="${GIT_BRANCH:-$(git rev-parse --abbrev-ref HEAD)}"
+export GIT_HEAD="${GIT_HEAD:-$(git rev-parse HEAD)}"
+export GIT_REPO="${GIT_REPO:-kubermatic/kubermatic}"
+export RELEASE_PLATFORMS="${RELEASE_PLATFORMS:-linux-amd64 darwin-amd64 windows-amd64}"
+
+# By default, this script is used to released tagged revisions,
+# for which a matching tag must exist in the dashboard repository.
+export DASHBOARD_GIT_TAG="${DASHBOARD_GIT_TAG:-$GIT_TAG}"
+
+# RELEASE_NAME allows to customize the tag that is used to create the
+# Github release for, while the Helm charts and things will still
+# point to GIT_TAG
+export RELEASE_NAME="${RELEASE_NAME:-$GIT_TAG}"
 
 # utility function setting some curl default values for calling the github API
 # first argument is the URL, the rest of the arguments is used as curl
 # arguments.
 function github_cli {
-  local url=${1}
+  local url="$1"
   curl \
     --retry 5 \
     --connect-timeout 10 \
-    -H "Authorization: token ${GITHUB_TOKEN}" \
-    "${@:2}" "${url}"
+    --header "$GITHUB_AUTH" \
+    "${@:2}" "$url"
 }
 
 # creates a new github release
 function create_release {
-  local tag="${1}"
-  local name="${2}"
-  local prerelease="${3}"
-  data=$(cat << EOF
-{
-  "tag_name": "$tag",
-  "name": "$name",
-  "prerelease": $prerelease
-}
-EOF
-)
+  local tag="$1"
+  local name="$2"
+  local prerelease="$3"
+  local body="${4:-}"
+  local data
+
+  # using named arguments is a nice way to ensure special
+  # characters in the body survive the JSON encoding
+  data="$(
+    jq --null-input \
+       --arg tag_name "$tag" \
+       --arg name "$name" \
+       --argjson prerelease "$prerelease" \
+       --arg body "$body" \
+       '{"tag_name":$tag_name,"name":$name,"prerelease":$prerelease,"body":$body}'
+  )"
+
   github_cli \
-    "https://api.github.com/repos/${repo}/releases" \
-    -f --data "${data}"
+    "https://api.github.com/repos/$GIT_REPO/releases" \
+    --fail --data "$data"
 }
 
 # upload an archive from a file
 function upload_archive {
-  local file="${1}"
+  local file="$1"
   res=$(github_cli \
-    "https://uploads.github.com/repos/$repo/releases/$releaseID/assets?name=${file}" \
-    -H "Accept: application/json" \
-    -H 'Content-Type: application/gzip' \
-    -s --data-binary "@${file}")
-  if echo "${res}" | jq -e '.'; then
-    # it the response contain errors
-    if echo "${res}" | jq -e '.errors[0]'; then
-      for err in $(echo "${res}" | jq -r '.errors[0].code'); do
+    "https://uploads.github.com/repos/$GIT_REPO/releases/$releaseID/assets?name=$(basename "$file")" \

Yeah but we don't have it installed in our Docker images. And just for wrapping a single curl call, I don't see much value in adding the gh binary to our Docker images. ^^

xrstf

comment created time in 4 days

PullRequestReviewEvent

created tag kubermatic/kubermatic

tag v2.15.1

Kubermatic Kubernetes Platform - the Central Kubernetes Management Platform For Any Infrastructure

created time in 5 days

PullRequestReviewEvent

created tag kubermatic/kubermatic

tag v2.13.10

Kubermatic Kubernetes Platform - the Central Kubernetes Management Platform For Any Infrastructure

created time in 5 days

pull request comment kubermatic/kubermatic

fix typo in github release script

/cherrypick release/v2.15

xrstf

comment created time in 5 days

push event xrstf/kubermatic

Christoph Mewes

commit sha 21aee79fbcb4aa7c287b8048b4b3567a8b9f8329

add changelogs for 2.15.1, 2.14.8 and 2.13.10

view details

push time in 5 days

PR opened kubermatic/kubermatic

[release/v2.14] ship image-loader as new archive for each GitHub release (#6092)

What this PR does / why we need it: What it says on the tin.

Does this PR introduce a user-facing change?:

Ship image-loader as new archive for each GitHub release
+158 -66

0 comment

1 changed file

pr created time in 5 days

PR opened kubermatic/kubermatic

fix typo in github release script

What this PR does / why we need it: I accidentally overwrote upload_archive. I should have gone with my initial name.

Does this PR introduce a user-facing change?:

NONE
+4 -4

0 comment

1 changed file

pr created time in 5 days

create branch xrstf/kubermatic

branch : fix-github-release

created branch time in 5 days

create branch xrstf/kubermatic

branch : backport-6092-214

created branch time in 5 days

delete branch xrstf/kubermatic

delete branch : image-loader-backports-214

delete time in 5 days

PullRequestReviewEvent