Paul Weil (pweil-), Red Hat, Charleston, SC

Azure/acs-engine 1048

WE HAVE MOVED: Please join us at Azure/aks-engine!

pweil-/hello-nginx-docker 3

Test repository for OpenShift v3 TLS testing

pweil-/acs-engine 0

Azure Container Service Engine - a place for community to collaborate and build the best open Docker container infrastructure for Azure.

pweil-/cakephp-ex 0

CakePHP Example

pweil-/clam-scanner 0

Scan files using clamd

pweil-/cluster-kube-apiserver-operator 0

The kube-apiserver operator installs and maintains the kube-apiserver on a cluster

pweil-/distribution 0

The Docker toolset to pack, ship, store, and deliver content

Pull request review comment in kube-reporting/metering-operator

test/e2e: Add support for running post-install tests against a Metering installation using s3.

 func DecodeMeteringConfigManifest(basePath, manifestPath, manifestFilename strin
 	return mc, nil
 }
+
+func s3InstallFunc(ctx *deployframework.DeployerCtx) error {
+	// The default ctx.TargetPodsCount value assumes that HDFS
+	// is being used a storage backend, so we need to decrement
+	// that value to ensure we're not going to poll forever waiting
+	// for Pods that will not be created by the metering-ansible-operator
+	ctx.TargetPodsCount = nonHDFSTargetPodCount
+
+	// Before we can create the AWS credentials secret in the ctx.Namespace, we need to ensure
+	// that namespace has been created before attempting to query for a resource that does not exist.
+	_, err := ctx.Client.CoreV1().Namespaces().Get(context.Background(), ctx.Namespace, metav1.GetOptions{})
+	if apierrors.IsNotFound(err) {

Do you need to handle the case where the error is non-nil and NOT IsNotFound? Same on line 162 (which, if I'm reading correctly, could overwrite this err object, since it is shadowing the var).
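For illustration, a minimal sketch of the pattern being asked about, using assumed names (ensureNamespaceExists, a plain kubernetes.Interface client) rather than the PR's actual helper:

package deploy

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func ensureNamespaceExists(client kubernetes.Interface, namespace string) error {
	_, err := client.CoreV1().Namespaces().Get(context.Background(), namespace, metav1.GetOptions{})
	if err != nil && !apierrors.IsNotFound(err) {
		// Any error other than NotFound is unexpected and should stop the install.
		return fmt.Errorf("failed to query the %s namespace: %v", namespace, err)
	}
	if apierrors.IsNotFound(err) {
		// The namespace is genuinely missing; create it (or wait for it) here.
		// Assign with `err = ...` rather than `err := ...` in nested blocks to
		// avoid shadowing the outer err, which is the second concern raised above.
	}
	return nil
}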

timflannagan1

comment created time in 10 hours

Pull request review comment in kube-reporting/metering-operator

test: Add a post-install test for checking if the service-serving CA bundle has been properly mounted.

+package e2e
+
+import (
+	"bytes"
+	"context"
+	"strings"
+	"testing"
+
+	"github.com/stretchr/testify/assert"
+	"github.com/stretchr/testify/require"
+
+	corev1 "k8s.io/api/core/v1"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+	"k8s.io/client-go/kubernetes/scheme"
+	"k8s.io/client-go/tools/remotecommand"
+
+	"github.com/kube-reporting/metering-operator/test/reportingframework"
+)
+
+func testReportOperatorServiceCABundleExists(t *testing.T, rf *reportingframework.ReportingFramework) {
+	// Attempt to grab the reporting-operator Pod in the rf.Namespace.
+	// Because this Pod is spun up using a Deployment instead of a Statefulset
+	// we need to list any Pods matching the app=reporting-operator label as we
+	// don't know the name ahead of time.
+	podList, err := rf.KubeClient.CoreV1().Pods(rf.Namespace).List(context.Background(), metav1.ListOptions{
+		LabelSelector: "app=reporting-operator",
+	})
+	require.NoError(t, err, "failed to list the reporting-operator Pod in the %s namespace", rf.Namespace)
+	require.Len(t, podList.Items, 1, "expected the list of app=reporting-operator Pods in the %s namespace to match a length of 1", rf.Namespace)
+
+	// We expect there's only going to be one reporting-operator
+	// in a single namespace, so hardcode the reporting-operator Pod
+	// object to be the first item in the list returned.
+	pod := podList.Items[0]

ah, I see now. Thank you.

timflannagan1

comment created time in 8 days

Pull request review comment in openshift/openshift-docs

update docs after changes to hive secrets

 The default installation of metering configures Hive to use an embedded Java dat
 There are 4 configuration options you can use to control the database used by Hive metastore: url, driver, username, and password.
 
-You will create your MySQL or Postgres instance with a username and secret. Then either in the Kubernetes CLI or by making a yaml file, you will make a secret. The secretName you use for this secret needs to map to you Metering CR spec.hive.spec.config.db.secretName.
+You will create your MySQL or Postgres instance with a username and secret. Then either in the Kubernetes CLI or by making a yaml file, you will make a secret. The secretName you use for this secret needs to map to you Metering CR `spec.hive.spec.config.db.secretName`.

Should probably be referencing the OpenShift CLI here, not Kube.

EmilyM1

comment created time in 8 days

Pull request review comment in kube-reporting/metering-operator

test: Add a post-install test for checking if the service-serving CA bundle has been properly mounted.

(Same quoted test snippet as in the review comment above.)

Is it more meaningful to test the expectation stated in the comment than to actually have a constant? If there should only be one, maybe we should fail if more than one thing is returned.
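As a rough sketch of that alternative (assumed helper name and parameters, not the PR's code), the test could fail outright on anything other than exactly one Pod:

package e2e

import (
	"context"
	"testing"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func getTheReportingOperatorPod(t *testing.T, kubeClient kubernetes.Interface, namespace string) corev1.Pod {
	podList, err := kubeClient.CoreV1().Pods(namespace).List(context.Background(), metav1.ListOptions{
		LabelSelector: "app=reporting-operator",
	})
	if err != nil {
		t.Fatalf("failed to list reporting-operator Pods in %s: %v", namespace, err)
	}
	if len(podList.Items) != 1 {
		// Fail loudly if the "exactly one reporting-operator Pod" assumption is ever violated.
		t.Fatalf("expected exactly one app=reporting-operator Pod in %s, got %d", namespace, len(podList.Items))
	}
	return podList.Items[0]
}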

timflannagan1

comment created time in 8 days

Pull request review comment in kube-reporting/metering-operator

test/e2e: Add a wrapper function around the top-level TestMain function.

 func init() {
 }
 
 func TestMain(m *testing.M) {
+	os.Exit(testMainWrapper(m))
+}
+
+// testMainWrapper is a wrapper function around the
+// top-level TestMain function and this pattern is
+// needed as os.Exit() doesn't respect any defer calls
+// that may occur during the TestMain workflow. If we
+// we instead doing the heavy-lifting in this function,
+// and then return an integer code that os.Exit can correctly
+// interpret, then the defer call will work.
+//
+// Link to article that explains this behavior:
+// http://blog.englund.nu/golang,/testing/2017/03/12/using-defer-in-testmain.html

This is a normal part of the language and defined in the spec. I'd link there if you want folks to have an official reference.

For reviewers: Because the function never reaches the end of its body, the defer semantics do not apply. Simple example. For those with a Java background, think of this like calling System.exit() somewhere in a program. 😄
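A minimal, self-contained illustration of the behavior being described (not the operator's code): the deferred call in run executes because run returns normally, whereas a defer registered in a function that calls os.Exit before returning would be skipped.

package main

import (
	"fmt"
	"os"
)

// run does the real work; because it returns normally, its deferred
// call executes before os.Exit is reached in main.
func run() int {
	defer fmt.Println("deferred cleanup runs here")
	fmt.Println("doing work")
	return 0
}

func main() {
	// If os.Exit were called inside run, or directly here after registering a
	// defer, the deferred call would be skipped: the process terminates before
	// the surrounding function ever returns.
	os.Exit(run())
}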

timflannagan1

comment created time in 14 days

Pull request review comment in kube-reporting/metering-operator

pkg/deploy: Cleanup conditional structure in the top-level deploy methods.

 import (
 
 func (deploy *Deployer) installNamespace() error {
 	namespace, err := deploy.client.CoreV1().Namespaces().Get(context.TODO(), deploy.config.Namespace, metav1.GetOptions{})
+	if err != nil && !apierrors.IsNotFound(err) {
+		return err
+	}
+	if apierrors.IsAlreadyExists(err) {
+		deploy.logger.Infof("The %s namespace already exists", deploy.config.Namespace)
+	}
 	if apierrors.IsNotFound(err) {
 		namespaceObjectMeta := metav1.ObjectMeta{
 			Name: deploy.config.Namespace,
 		}
-
-		labels := make(map[string]string)
-
-		for key, val := range deploy.config.ExtraNamespaceLabels {
-			labels[key] = val
-			deploy.logger.Infof("Labeling the %s namespace with '%s=%s'", deploy.config.Namespace, key, val)
-		}
-
-		/*
-			In the case where the platform is set to Openshift (the default value),
-			we need to make a few modifications to the namespace metadata.
-
-			The 'openshift.io/cluster-monitoring' labels tells the cluster-monitoring
-			operator to scrape Prometheus metrics for the installed Metering namespace.
-
-			The 'openshift.io/node-selector' annotation is a way to control where Pods
-			get scheduled in a specific namespace. If this annotation is set to an empty
-			label, that means that Pods for this namespace can be scheduled on any nodes.
-
-			In the case where a cluster administrator has configured a value for the
-			defaultNodeSelector field in the cluster's Scheduler object, we need to set
-			this namespace annotation in order to avoid a collision with what the user
-			has supplied in their MeteringConfig custom resource. This implies that whenever
-			a cluster has been configured to schedule Pods using a default node selector,
-			those changes must also be propogated to the MeteringConfig custom resource, else
-			the Pods in Metering namespace will be scheduled on any available node.
-		*/
-		if deploy.config.Platform == "openshift" {
-			labels["openshift.io/cluster-monitoring"] = "true"
-			deploy.logger.Infof("Labeling the %s namespace with 'openshift.io/cluster-monitoring=true'", deploy.config.Namespace)
-			namespaceObjectMeta.Annotations = map[string]string{
-				"openshift.io/node-selector": "",
-			}
-			deploy.logger.Infof("Annotating the %s namespace with 'openshift.io/node-selector=''", deploy.config.Namespace)
-		}
-
-		namespaceObjectMeta.Labels = labels
 		namespaceObj := &v1.Namespace{
 			ObjectMeta: namespaceObjectMeta,
 		}
 
-		_, err := deploy.client.CoreV1().Namespaces().Create(context.TODO(), namespaceObj, metav1.CreateOptions{})
+		namespace, err = deploy.client.CoreV1().Namespaces().Create(context.TODO(), namespaceObj, metav1.CreateOptions{})
 		if err != nil {
 			return err
 		}
 		deploy.logger.Infof("Created the %s namespace", deploy.config.Namespace)
-	} else if err == nil {
-		// check if we need to add/update the cluster-monitoring label for Openshift installs.
-		if deploy.config.Platform == "openshift" {
-			if namespace.ObjectMeta.Labels != nil {
-				namespace.ObjectMeta.Labels["openshift.io/cluster-monitoring"] = "true"
-				deploy.logger.Infof("Updated the 'openshift.io/cluster-monitoring' label to the %s namespace", deploy.config.Namespace)
-			} else {
-				namespace.ObjectMeta.Labels = map[string]string{
-					"openshift.io/cluster-monitoring": "true",
-				}
-				deploy.logger.Infof("Added the 'openshift.io/cluster-monitoring' label to the %s namespace", deploy.config.Namespace)
-			}
-			if namespace.ObjectMeta.Annotations != nil {
-				namespace.ObjectMeta.Annotations["openshift.io/node-selector"] = ""
-				deploy.logger.Infof("Updated the 'openshift.io/node-selector' annotation to the %s namespace", deploy.config.Namespace)
-			} else {
-				namespace.ObjectMeta.Annotations = map[string]string{
-					"openshift.io/node-selector": "",
-				}
-				deploy.logger.Infof("Added the empty 'openshift.io/node-selector' annotation to the %s namespace", deploy.config.Namespace)
-			}
-
-			_, err := deploy.client.CoreV1().Namespaces().Update(context.TODO(), namespace, metav1.UpdateOptions{})
-			if err != nil {
-				return fmt.Errorf("failed to add the 'openshift.io/cluster-monitoring' label to the %s namespace: %v", deploy.config.Namespace, err)
-			}
-		} else {
-			deploy.logger.Infof("The %s namespace already exists", deploy.config.Namespace)
-		}
-	} else {
+	}
+
+	// handle adding or updating any of the namespace labels or annotations
+	// that we may need separately from the creation of the namespace
+	// with the intention of always treating this as an update.
+	labels := namespace.ObjectMeta.Labels
+	if labels == nil {
+		labels = make(map[string]string)
+	}
+	annotations := namespace.ObjectMeta.Annotations
+	if annotations == nil {
+		annotations = make(map[string]string)
+	}
+
+	for key, val := range deploy.config.ExtraNamespaceLabels {
+		labels[key] = val
+		deploy.logger.Infof("Labeling the %s namespace with '%s=%s'", deploy.config.Namespace, key, val)
+	}
+
+	/*
+		In the case where the platform is set to Openshift (the default value),
+		we need to make a few modifications to the namespace metadata.
+
+		The 'openshift.io/cluster-monitoring' labels tells the cluster-monitoring
+		operator to scrape Prometheus metrics for the installed Metering namespace.
+
+		The 'openshift.io/node-selector' annotation is a way to control where Pods
+		get scheduled in a specific namespace. If this annotation is set to an empty
+		label, that means that Pods for this namespace can be scheduled on any nodes.
+
+		In the case where a cluster administrator has configured a value for the
+		defaultNodeSelector field in the cluster's Scheduler object, we need to set
+		this namespace annotation in order to avoid a collision with what the user
+		has supplied in their MeteringConfig custom resource. This implies that whenever
+		a cluster has been configured to schedule Pods using a default node selector,
+		those changes must also be propogated to the MeteringConfig custom resource, else
+		the Pods in Metering namespace will be scheduled on any available node.
+	*/
+	if deploy.config.Platform == "openshift" {
+		labels["openshift.io/cluster-monitoring"] = "true"
+		deploy.logger.Infof("Added the 'openshift.io/cluster-monitoring' label to the %s namespace", deploy.config.Namespace)
+
+		annotations["openshift.io/node-selector"] = ""
+		deploy.logger.Infof("Added the empty 'openshift.io/node-selector' annotation to the %s namespace", deploy.config.Namespace)
+	}
+
+	// in order to avoid updating a stale namespace resourceVersion, grab

And to be clear, I'm not suggesting you do a guaranteed update loop, reverting, or committing the suggestion - only providing some clarity on why this, in and of itself, probably isn't what you were shooting for. :-)

timflannagan1

comment created time in 15 days

Pull request review comment in kube-reporting/metering-operator

pkg/deploy: Cleanup conditional structure in the top-level deploy methods.

(Same quoted installNamespace diff hunk as in the review comment above.)

always going to referencing an out-of-date resource version when the namespace doesn't already exist

I'm more pointing out that the comment here indicates some level of safety, but that's not correct. You could re-fetch after the create, if necessary, to get the latest resource version, but the resource version conflict issue still exists. The current code has it, and this new code has it. The issue is a "normal" problem of a system where there are multiple producers of updates and no transaction locks (like a select-for-update type mechanism).

The only true way to program around the issue is to loop until successful: keep re-fetching the object and re-applying the update until you don't hit a resource version conflict.
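A hedged sketch of that loop using client-go's retry.RetryOnConflict helper; the function name and parameters here are illustrative, not the operator's actual code:

package deploy

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

func updateNamespaceWithRetry(client kubernetes.Interface, namespaceName string, mutate func(ns *v1.Namespace)) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// Always work from a fresh copy so the resourceVersion is current.
		ns, err := client.CoreV1().Namespaces().Get(context.TODO(), namespaceName, metav1.GetOptions{})
		if err != nil {
			return err
		}
		// Re-apply the label/annotation changes on every attempt.
		mutate(ns)
		_, err = client.CoreV1().Namespaces().Update(context.TODO(), ns, metav1.UpdateOptions{})
		// A Conflict error here causes RetryOnConflict to fetch and try again.
		return err
	})
}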

timflannagan1

comment created time in 15 days

Pull request review comment in kube-reporting/metering-operator

pkg/deploy: Cleanup conditional structure in the top-level deploy methods.

(Same quoted installNamespace diff hunk as in the review comments above.)

If the object has changed between the creation/fetch and the update on line 92, then you're risking overwriting labels or annotations that could have been added by lines 89 & 90. You'd have to put this into a control loop if this is really an issue - and you're still running the risk of a resource version change between 85 & 92 anyway.

Probably better to just let it fail out than try to program around it without a guaranteed update loop.

(Also, we shouldn't ignore a NotFound on line 86 if this is really necessary - it means someone deleted it out from under us.)

timflannagan1

comment created time in 16 days

Pull request review comment in kube-reporting/metering-operator

pkg/deploy: Cleanup conditional structure in the top-level deploy methods.

 func (deploy *Deployer) installNamespace() error {
 			return err
 		}
 		deploy.logger.Infof("Created the %s namespace", deploy.config.Namespace)
-	} else if err == nil {
-		// check if we need to add/update the cluster-monitoring label for Openshift installs.
-		if deploy.config.Platform == "openshift" {
-			if namespace.ObjectMeta.Labels != nil {
-				namespace.ObjectMeta.Labels["openshift.io/cluster-monitoring"] = "true"
-				deploy.logger.Infof("Updated the 'openshift.io/cluster-monitoring' label to the %s namespace", deploy.config.Namespace)
-			} else {
-				namespace.ObjectMeta.Labels = map[string]string{
-					"openshift.io/cluster-monitoring": "true",
-				}
-				deploy.logger.Infof("Added the 'openshift.io/cluster-monitoring' label to the %s namespace", deploy.config.Namespace)
-			}
-			if namespace.ObjectMeta.Annotations != nil {
-				namespace.ObjectMeta.Annotations["openshift.io/node-selector"] = ""
-				deploy.logger.Infof("Updated the 'openshift.io/node-selector' annotation to the %s namespace", deploy.config.Namespace)
-			} else {
-				namespace.ObjectMeta.Annotations = map[string]string{
-					"openshift.io/node-selector": "",
-				}
-				deploy.logger.Infof("Added the empty 'openshift.io/node-selector' annotation to the %s namespace", deploy.config.Namespace)
-			}
+		return nil
+	}
+	if err != nil && !apierrors.IsAlreadyExists(err) {
+		return err
+	}
 
-			_, err := deploy.client.CoreV1().Namespaces().Update(context.TODO(), namespace, metav1.UpdateOptions{})
-			if err != nil {
-				return fmt.Errorf("failed to add the 'openshift.io/cluster-monitoring' label to the %s namespace: %v", deploy.config.Namespace, err)
+	// check if we need to add/update the cluster-monitoring label for Openshift installs.
+	if deploy.config.Platform == "openshift" {
+		if namespace.ObjectMeta.Labels != nil {
+			namespace.ObjectMeta.Labels["openshift.io/cluster-monitoring"] = "true"
+			deploy.logger.Infof("Updated the 'openshift.io/cluster-monitoring' label to the %s namespace", deploy.config.Namespace)
+		} else {
+			namespace.ObjectMeta.Labels = map[string]string{
+				"openshift.io/cluster-monitoring": "true",
 			}
+			deploy.logger.Infof("Added the 'openshift.io/cluster-monitoring' label to the %s namespace", deploy.config.Namespace)
+		}
+		if namespace.ObjectMeta.Annotations != nil {
+			namespace.ObjectMeta.Annotations["openshift.io/node-selector"] = ""
+			deploy.logger.Infof("Updated the 'openshift.io/node-selector' annotation to the %s namespace", deploy.config.Namespace)
 		} else {
-			deploy.logger.Infof("The %s namespace already exists", deploy.config.Namespace)
+			namespace.ObjectMeta.Annotations = map[string]string{
+				"openshift.io/node-selector": "",
+			}
+			deploy.logger.Infof("Added the empty 'openshift.io/node-selector' annotation to the %s namespace", deploy.config.Namespace)
 		}
-	} else {
-		return err
+
+		_, err := deploy.client.CoreV1().Namespaces().Update(context.TODO(), namespace, metav1.UpdateOptions{})
+		if err != nil {
+			return fmt.Errorf("failed to add the 'openshift.io/cluster-monitoring' label to the %s namespace: %v", deploy.config.Namespace, err)
+		}
+		return nil
 	}
 
+	// TODO: handle updating the namespace for non-openshift installations

You can eliminate a few more of the redundancies and this TODO if you think about this method in two pieces: the ns itself, and the labels/annotations. I.e., you always treat the labels/annotations as an update.

What you get: updates for existing namespaces, removal of the duplicate code to annotate and label in the openshift deployment case, and ensuring the ExtraNamespaceLabels are applied uniformly (it looks like they're currently only applied in the creation case).

What you lose: logging the add vs. update, but that could be done with some additional logic if it really is meaningful. I guess the same goes for the labels/annotations in the openshift deployment case. Does it harm anything to keep them on there in all cases? (Not looking for an answer, just something to consider.)

So something like this: (note, I didn't try to write this in an IDE or check it for formatting, it may not even work. Demo purposes only 😄 )

func (deploy *Deployer) installNamespace() error {
	namespace, err := deploy.client.CoreV1().Namespaces().Get(context.TODO(), deploy.config.Namespace, metav1.GetOptions{})
	if err != nil && !apierrors.IsNotFound(err) && !apierrors.IsAlreadyExists(err) {
		return err
	}

	if apierrors.IsAlreadyExists(err) {
		deploy.logger.Infof("The %s namespace already exists", deploy.config.Namespace)
	}

	// not found - create the ns
	if apierrors.IsNotFound(err) {
		namespaceObjectMeta := metav1.ObjectMeta{
			Name: deploy.config.Namespace,
		}

		namespaceObj := &v1.Namespace{
			ObjectMeta: namespaceObjectMeta,
		}

		namespace, err = deploy.client.CoreV1().Namespaces().Create(context.TODO(), namespaceObj, metav1.CreateOptions{})
		if err != nil {
			return err
		}
		deploy.logger.Infof("Created the %s namespace", deploy.config.Namespace)
	}

	// add extra namespace labels and annotations
	labels := namespace.ObjectMeta.Labels
	if labels == nil {
		labels = make(map[string]string)
	}

	annotations := namespace.ObjectMeta.Annotations
	if annotations == nil {
		annotations = make(map[string]string)
	}

	for key, val := range deploy.config.ExtraNamespaceLabels {
		labels[key] = val
		deploy.logger.Infof("Labeling the %s namespace with '%s=%s'", deploy.config.Namespace, key, val)
	}

	/*
		In the case where the platform is set to Openshift (the default value),
		we need to make a few modifications to the namespace metadata.

		The 'openshift.io/cluster-monitoring' labels tells the cluster-monitoring
		operator to scrape Prometheus metrics for the installed Metering namespace.

		The 'openshift.io/node-selector' annotation is a way to control where Pods
		get scheduled in a specific namespace. If this annotation is set to an empty
		label, that means that Pods for this namespace can be scheduled on any nodes.

		In the case where a cluster administrator has configured a value for the
		defaultNodeSelector field in the cluster's Scheduler object, we need to set
		this namespace annotation in order to avoid a collision with what the user
		has supplied in their MeteringConfig custom resource. This implies that whenever
		a cluster has been configured to schedule Pods using a default node selector,
		those changes must also be propogated to the MeteringConfig custom resource, else
		the Pods in Metering namespace will be scheduled on any available node.
	*/
	if deploy.config.Platform == "openshift" {
		labels["openshift.io/cluster-monitoring"] = "true"
		deploy.logger.Infof("Added the 'openshift.io/cluster-monitoring' label to the %s namespace", deploy.config.Namespace)

		annotations["openshift.io/node-selector"] = ""
		deploy.logger.Infof("Added the empty 'openshift.io/node-selector' annotation to the %s namespace", deploy.config.Namespace)
	}

	namespace.ObjectMeta.Labels = labels
	namespace.ObjectMeta.Annotations = annotations

	_, err = deploy.client.CoreV1().Namespaces().Update(context.TODO(), namespace, metav1.UpdateOptions{})
	if err != nil {
		return fmt.Errorf("failed to update labels and annotations on the %s namespace: %v", deploy.config.Namespace, err)
	}
	return nil
}
timflannagan1

comment created time in 16 days

pull request comment in openshift/console

[release-4.5] Bug 1843661: cloudshell: fix loading loop for user with multiple projects

/assign @rohitkrai03

@rohitkrai03 tagging you since you LGTM'd the original. PTAL asap so we can get this off the blocker list. Thank you!

@jhadvig FYI in case you want to go ahead and tag

openshift-cherrypick-robot

comment created time in a month

Pull request review comment in openshift/openshift-docs

Bug 1823457, added AWS Private Link information

 Your VPC must meet the following characteristics:
 
 If you use a cluster with public access, you must create a public and a private subnet for each availability zone that your cluster uses. The installation program modifies your subnets to add the `kubernetes.io/cluster/.*: shared` tag, so your subnets must have at least one free tag slot available for it. Review the current link:https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html#tag-restrictions[Tag Restrictions] in the AWS documentation to ensure that the installation program can add a tag to each subnet that you specify.
 
+If you are working in a disconnected environment, you are unable to reach the
+public IPs for EC2 and ELB endpoints. To resolve this, you must create a VPC
+endpoint and attach it to the subnet that the clusters are using. The endpoints
+should be named as follows:
+
+* `ec2.<region>.amazonaws.com`

The list of regions will change over time and I'd expect an AWS admin to be able to extrapolate from what you have already.

+1. If we want to do anything I'd just link to Amazon documentation and not try to maintain the canonical list here.

ahardin-rh

comment created time in 2 months

Pull request review comment in kube-reporting/metering-operator

hack,charts,manifests: Update the manual OLM metering manifests.

 func (deploy *Deployer) uninstallMeteringCSV() error {
 	// and hope that the user is re-running the olm-uninstall command.
 	sub, err := deploy.olmV1Alpha1Client.Subscriptions(deploy.config.Namespace).Get(deploy.config.SubscriptionName, metav1.GetOptions{})
 	if apierrors.IsNotFound(err) {
-		deploy.logger.Warnf("The metering subscription does not exist")
// in the case where the subscription resource does not already exist, exit early
// and hope that the user is re-running the olm-uninstall command.

maybe not the clearest, but I think that's the intent 👍

timflannagan1

comment created time in 2 months

pull request comment in openshift/openshift-docs

[WIP]Bug 1824840, Document manual creation of IAM for AWS

Thanks for putting it together!

We'll want to hold merging until we get the Jira linked in the bug worked. Overall it looks good to me, but we'll need to include info from Devan's work that prevents clusters from upgrading if this is used, as well as any tooling we have to help customers curate the permissions and ack that they're ready to upgrade.

ahardin-rh

comment created time in 2 months

Pull request review comment in kube-reporting/metering-operator

hack,charts,manifests: Update the manual OLM metering manifests.

 func (deploy *Deployer) uninstallMeteringCSV() error {
 	// and hope that the user is re-running the olm-uninstall command.
 	sub, err := deploy.olmV1Alpha1Client.Subscriptions(deploy.config.Namespace).Get(deploy.config.SubscriptionName, metav1.GetOptions{})
 	if apierrors.IsNotFound(err) {
-		deploy.logger.Warnf("The metering subscription does not exist")

Not found is usually an acceptable condition during an uninstall since the intent is to remove the object.
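As a small illustrative sketch (assumed names, plain client-go rather than the OLM client used in the quoted code), tolerating NotFound during teardown usually looks like this:

package deploy

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func deleteNamespaceIgnoringNotFound(client kubernetes.Interface, name string) error {
	err := client.CoreV1().Namespaces().Delete(context.TODO(), name, metav1.DeleteOptions{})
	if err != nil && !apierrors.IsNotFound(err) {
		// Only surface unexpected errors; anything else is a real failure.
		return err
	}
	// NotFound means the object is already gone, which is the desired end state
	// of an uninstall, so it is treated as success rather than warned about.
	return nil
}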

timflannagan1

comment created time in 2 months

Pull request review comment in kube-reporting/metering-operator

hack,charts,manifests: Update the manual OLM metering manifests.

 if [ "$METERING_NAMESPACE" != "openshift-metering" ]; then      "$FAQ_BIN" -f yaml -o yaml -M -c -r \         --kwargs "namespace=$METERING_NAMESPACE" \-        '.spec.targetNamespace=$namespace' \-        "$OLM_MANIFESTS_DIR/metering.catalogsourceconfig.yaml" \-        > "$TMPDIR/metering.catalogsourceconfig.yaml"+        '.metadata.namespace=$namespace' \+        "$OLM_MANIFESTS_DIR/metering.catalogsource.yaml" \+        > "$TMPDIR/metering.catalogsource.yaml"      "$FAQ_BIN" -f yaml -o yaml -M -c -r \         --kwargs "namespace=$METERING_NAMESPACE" \-        '.spec.targetNamespaces[0]=$namespace | .metadata.name=$namespace + "-" + .metadata.name' \+        '.spec.targetNamespaces[0]=$namespace | .metadata.namespace=$namespace | .metadata.name=$namespace + "-" + .metadata.name' \         "$OLM_MANIFESTS_DIR/metering.operatorgroup.yaml" \         > "$TMPDIR/metering.operatorgroup.yaml"      "$FAQ_BIN" -f yaml -o yaml -M -c -r \         --kwargs "namespace=$METERING_NAMESPACE" \-        '.spec.sourceNamespace=$namespace' \+        '.spec.sourceNamespace=$namespace | .metadata.namespace=$namespace' \         "$OLM_MANIFESTS_DIR/metering.subscription.yaml" \         > "$TMPDIR/metering.subscription.yaml" -        export OLM_MANIFESTS_DIR="$TMPDIR"+    export OLM_MANIFESTS_DIR="$TMPDIR"+    export NAMESPACE=$METERING_NAMESPACE fi -msg "Installing Metering Catalog Source Config"+msg "Creating the Metering ConfigMap"+"$ROOT_DIR/hack/create-upgrade-configmap.sh" \

trailing \ not necessary here?

timflannagan1

comment created time in 2 months

Pull request review comment in openshift/openshift-docs

Bug 1823457, added AWS Private Link information

 To ensure that the subnets that you provide are suitable, the installation progr
 * The subnet CIDRs belong to the machine CIDR.
 * You must provide a subnet to deploy the cluster control plane and compute machines to. You can use the same subnet for both machine types.
 
+If you are working in a disconnected environment, you are unable to reach the
+public IPs for EC2 and ELB endpoints. To resolve this, you must create a VPC
+endpoint and attach it to the subnet that the clusters are using. The endpoints
+should be named as follows:
+
+* `ec2.<region>.amazonaws.com`

This wouldn't apply to GCP; it is AWS-specific. I'm not aware of how it works in GCP - maybe @sdodson can help us out.

ahardin-rh

comment created time in 2 months

pull request comment in openshift/enhancements

CLI in-cluster management

/cc @benjaminapetersen @spadgett

sallyom

comment created time in 2 months

delete branch pweil-/SecurityDemos

delete branch: fix-typo

delete time in 2 months

PR opened RedHatDemos/SecurityDemos

fix typo

rouge should be rogue 😄 👍

+3 -3

0 comments

1 changed file

pr created time in 2 months

create branch pweil-/SecurityDemos

branch: fix-typo

created branch time in 2 months

pull request comment in openshift/cluster-kube-apiserver-operator

Fix kube-apiserver-to-kubelet-signer's refresh date

@sttts part of the motivation here is exactly what you just stated: by just reading this code it appears we have a cert with 1 year validity that we will never rotate 😄 That's confusing to readers unless they happen to also know the code in signer.go. Though a comment could suffice if folks object to putting a more realistic date in there.

tnozicka

comment created time in 3 months
