Aleksey Dukhovniy (zen-dog), mesosphere, Hamburg, alexdukhovniy

jeschkies/unit 3

Unit is an overview page for your unit test results.

zen-dog/airflow 0

Apache Airflow

zen-dog/dcos 0

DC/OS Build and Release tools

zen-dog/dcos-cli 0

The command line for your datacenter!

zen-dog/dcos-core-cli 0

Core plugin for the DC/OS CLI

zen-dog/functional-scala 0

Functional Programming Principles in Scala (coursera)

zen-dog/mesos 0

Mirror of Apache Mesos

push event kudobuilder/kudo

Aleksey Dukhovniy

commit sha 09583c91c1fc46b898cdae0d99983edb34dbb995

making linter happy Signed-off-by: Aleksey Dukhovniy <alex.dukhovniy@googlemail.com>


push time in 11 hours

Pull request review comment kudobuilder/kudo

Simplify instance controller reconciliation

 func (i *Instance) PlanStatus(plan string) *PlanStatus {
 	return nil
 }
 
-// annotateSnapshot stores the current spec of Instance into the snapshot annotation
-// this information is used when executing update/upgrade plans, this overrides any snapshot that existed before
-func (i *Instance) AnnotateSnapshot() error {

Fun fact: we don't need to save the instance spec in the annotation anymore.

zen-dog

comment created time in 12 hours

PR opened kudobuilder/kudo

Simplify instance controller reconciliation

Summary: the current instance controller reconciliation logic for detecting the currently active plan can be much simpler now that we have the IAW (instance admission webhook) controlling the scheduled plan.

Signed-off-by: Aleksey Dukhovniy <alex.dukhovniy@googlemail.com>

+146 -323

0 comments

5 changed files

pr created time in 12 hours

create branch kudobuilder/kudo

branch : ad/simplify-ic-reconcile

created branch time in 12 hours

Pull request review comment kudobuilder/kudo

Create NS Annotation label Made a Constant

 const (
 	// Last applied state for three way merges
 	LastAppliedConfigAnnotation = "kudo.dev/last-applied-configuration"
+
+	CreatedByAnnotation = "kudo.dev/created-by"

Is there a reason to split this functionality over multiple PRs?

kensipe

comment created time in 12 hours

Pull request review comment kudobuilder/kudo

Create NS Annotation label Made a Constant

 const (
 	// Last applied state for three way merges
 	LastAppliedConfigAnnotation = "kudo.dev/last-applied-configuration"
+
+	CreatedByAnnotation = "kudo.dev/created-by"

I've thought about it, and I like this annotation. Should we ever introduce kudo uninstall, it might become very useful. However, let's do it consistently: we should annotate all resources created by the CLI with it:

  • all package resources such as Instance, Operator and OperatorVersion
  • all kudo init resources /cc @kensipe
kensipe

comment created time in 15 hours

Pull request review comment kudobuilder/kudo

Make it clear that CLI install target is not for end-users.

 cli-clean:
 
 # Install CLI
 cli-install:
+	@echo "Please installed a released version of KUDO."

If you have the source code checked out, then you're probably not a typical end-user, right? I'm not sure what the benefit of this rename is.

porridge

comment created time in 18 hours

PR opened kudobuilder/kudo

Remove duplicated `setControllerReference` method

Summary: controller-runtime already validates the namespace ownership rules, which was presumably not always the case (and is why we had to implement our own).

Signed-off-by: Aleksey Dukhovniy <alex.dukhovniy@googlemail.com>

+1 -28

0 comments

1 changed file

pr created time in 2 days

create branch kudobuilder/kudo

branch : ad/controller-reference

created branch time in 2 days

Pull request review comment kudobuilder/kudo

Create NS Annotation label Made a Constant

 const (
 	// Last applied state for three way merges
 	LastAppliedConfigAnnotation = "kudo.dev/last-applied-configuration"
+
+	CreatedByAnnotation = "kudo.dev/created-by"

However in these cases, there is less confusion over who created them... or more importantly, who manages them

Fair point, though I disagree with the "managing" part: things are generally created by the CLI and managed by the manager. In this case, we should stay consistent and also annotate I/O/OV resources.

kensipe

comment created time in 2 days

push event kudobuilder/kudo

Aleksey Dukhovniy

commit sha f71e81c51024ff0b19710dddc3ebad4bc5b818ff

KEP-29: Add `KudoOperatorTask` implementation (#1541) Summary: implemented `KudoTaskOperator` which, given a `KudoOperator` task in the operator will create the `Instance` object and wait for it to become healthy. Additionally added `paramsFile` to the `KudoOperatorTaskSpec`. Fixes: #1509 Signed-off-by: Aleksey Dukhovniy <alex.dukhovniy@googlemail.com>


push time in 2 days

delete branch kudobuilder/kudo

delete branch : ad/kudo-operator-task

delete time in 2 days

PR merged kudobuilder/kudo

KEP-29: Add `KudoOperatorTask` implementation

Summary: implemented KudoOperatorTask which, given a KudoOperator task in the operator, will create the Instance object and wait for it to become healthy. Additionally, added paramsFile to the KudoOperatorTaskSpec.

Fixes: #1509

Signed-off-by: Aleksey Dukhovniy <alex.dukhovniy@googlemail.com>

+649 -85

1 comment

30 changed files

zen-dog

pr closed time in 2 days

Pull request review comment kudobuilder/kudo

Detect different versions of cert-manager

 import (
 
 var _ kudoinit.Step = &KudoWebHook{}
 
 type KudoWebHook struct {
-	opts        kudoinit.Options
-	issuer      unstructured.Unstructured
-	certificate unstructured.Unstructured
+	opts kudoinit.Options
+
+	certManagerGroup      string
+	certManagerAPIVersion string
+
+	issuer      *unstructured.Unstructured
+	certificate *unstructured.Unstructured
 }
 
 const (
-	certManagerAPIVersion        = "v1alpha2"
-	certManagerControllerVersion = "v0.12.0"
+	certManagerOldGroup = "certmanager.k8s.io"

So even easier to check all permutations 👍

ANeumann82

comment created time in 2 days

Pull request review comment kudobuilder/kudo

Detect different versions of cert-manager

 import (
 
 var _ kudoinit.Step = &KudoWebHook{}
 
 type KudoWebHook struct {
-	opts        kudoinit.Options
-	issuer      unstructured.Unstructured
-	certificate unstructured.Unstructured
+	opts kudoinit.Options
+
+	certManagerGroup      string
+	certManagerAPIVersion string
+
+	issuer      *unstructured.Unstructured
+	certificate *unstructured.Unstructured
 }
 
 const (
-	certManagerAPIVersion        = "v1alpha2"
-	certManagerControllerVersion = "v0.12.0"
+	certManagerOldGroup = "certmanager.k8s.io"

What about a version -> group mapping instead, like this?

certManagerGroupAPI := map[string]string{
  "v1alpha1": "certmanager.k8s.io",
  "v1alpha2": "cert-manager.io",
}

ANeumann82

comment created time in 2 days

Pull request review comment kudobuilder/kudo

Detect different versions of cert-manager

 func (initCmd *initCmd) run() error {
 		}
 	}
 
+	installer := setup.NewInstaller(opts, initCmd.crdOnly)

I still don't know why crdOnly is not part of the opts. But that's another story.

ANeumann82

comment created time in 2 days

push event kudobuilder/kudo

Aleksey Dukhovniy

commit sha 2f11c9413c4814f90f4ce0c789c8c47633c67df2

fixed `operator-with-dependencies` integration test Signed-off-by: Aleksey Dukhovniy <alex.dukhovniy@googlemail.com>


push time in 2 days

Pull request review comment kudobuilder/kudo

KEP-29: Add `KudoOperatorTask` implementation

 func fatalExecutionError(cause error, eventName string, meta renderer.Metadata)
 
 func newKudoOperator(task *v1beta1.Task) (Tasker, error) {
 	// validate KudoOperatorTask
-	if len(task.Spec.KudoOperatorTaskSpec.Package) == 0 {
+	if task.Spec.KudoOperatorTaskSpec.Package == "" {
 		return nil, fmt.Errorf("task validation error: kudo operator task '%s' has an empty package name", task.Name)
 	}
 
-	if len(task.Spec.KudoOperatorTaskSpec.OperatorVersion) == 0 {
+	if task.Spec.KudoOperatorTaskSpec.OperatorVersion == "" {
 		return nil, fmt.Errorf("task validation error: kudo operator task '%s' has an empty operatorVersion", task.Name)
 	}
 
 	return KudoOperatorTask{
 		Name:            task.Name,
-		Package:         task.Spec.KudoOperatorTaskSpec.Package,
+		OperatorName:    task.Spec.KudoOperatorTaskSpec.Package,

Actually, KudoOperatorTaskSpec needs both:

  • Package is set by the operator developer, and KUDO will try to resolve it as usual (local, URL, community repo)
  • OperatorName is the result of resolving the package, set by the CLI and used by the manager

See the commit below

zen-dog

comment created time in 2 days

push event kudobuilder/kudo

Aleksey Dukhovniy

commit sha 241eff57c3c83f5e53a10238594a4b9375c1ae82

Added `operatorName` to the `KudoOperatorTaskSpec` Signed-off-by: Aleksey Dukhovniy <alex.dukhovniy@googlemail.com>


push time in 2 days

Pull request review comment kudobuilder/kudo

Refactor operator package installation

+package install
+
+import (
+	"fmt"
+
+	"github.com/thoas/go-funk"
+
+	"github.com/kudobuilder/kudo/pkg/kudoctl/clog"
+	"github.com/kudobuilder/kudo/pkg/kudoctl/packages"
+	"github.com/kudobuilder/kudo/pkg/kudoctl/util/kudo"
+)
+
+func installOperatorAndOperatorVersion(client *kudo.Client, resources packages.Resources) error {
+	if !client.OperatorExistsInCluster(resources.Operator.Name, resources.Operator.Namespace) {
+		if _, err := client.InstallOperatorObjToCluster(resources.Operator, resources.Operator.Namespace); err != nil {
+			return fmt.Errorf(
+				"failed to install %s-operator.yaml in namespace %s: %v",
+				resources.Operator.Name,
+				resources.Operator.Namespace,
+				err)
+		}
+		clog.Printf(
+			"operator.%s/%s created in namespace %s",

Definitely a lack of consistency: I tried to use <namespace>/<name> at least everywhere in the KUDO manager.

nfnt

comment created time in 2 days

Pull request review comment kudobuilder/kudo

simplified diagnostics

+package diagnostics
+
+import (
+	"github.com/kudobuilder/kudo/pkg/kudoctl/env"
+	"github.com/kudobuilder/kudo/pkg/kudoctl/util/kudo"
+	"github.com/kudobuilder/kudo/pkg/version"
+)
+
+func diagForInstance(instance string, options *Options, c *kudo.Client, info version.Info, s *env.Settings, p *nonFailingPrinter) error {
+	ir, err := newInstanceResources(instance, options, c, s)
+	if err != nil {
+		p.printError(err, DiagDir, "instance")
+		return err
+	}
+
+	ctx := &processingContext{root: DiagDir, instanceName: instance}
+
+	runner := runnerForInstance(ir, ctx)
+	runner.addObjDump(info, ctx.rootDirectory, "version")
+	runner.addObjDump(s, ctx.rootDirectory, "settings")
+
+	if err := runner.run(p); err != nil {
+		return err
+	}
+
+	return nil
+}
+
+func diagForKudoManager(options *Options, c *kudo.Client, p *nonFailingPrinter) error {
+	kr, err := newKudoResources(options, c)
+	if err != nil {
+		return err
+	}
+	ctx := &processingContext{root: KudoDir}
+
+	runner := runnerForKudoManager(kr, ctx)
+
+	if err := runner.run(p); err != nil {
+		return err
+	}
+
+	return nil
+}
+
+func runnerForInstance(ir *resourceFuncsConfig, ctx *processingContext) *runner {
+	r := &runner{}
+
+	instance := resourceCollectorGroup{[]resourceCollector{
+		{
+			loadResourceFn: ir.instance,
+			name:           "instance",
+			parentDir:      ctx.operatorDirectory,
+			failOnError:    true,
+			callback:       ctx.setOperatorVersionNameFromInstance,
+			printMode:      ObjectWithDir},
+		{
+			loadResourceFn: ir.operatorVersion(ctx.operatorVersionName),
+			name:           "operatorversion",
+			parentDir:      ctx.operatorDirectory,
+			failOnError:    true,
+			callback:       ctx.setOperatorNameFromOperatorVersion,
+			printMode:      ObjectWithDir},
+		{
+			loadResourceFn: ir.operator(ctx.operatorName),
+			name:           "operator",
+			parentDir:      ctx.rootDirectory,
+			failOnError:    true,
+			printMode:      ObjectWithDir},
+	}}
+	r.addCollector(instance)
+
+	r.addCollector(&resourceCollector{
+		loadResourceFn: ir.pods,
+		name:           "pod",
+		parentDir:      ctx.instanceDirectory,
+		callback:       ctx.setPods,
+		printMode:      ObjectListWithDirs})
+	r.addCollector(&resourceCollector{
+		loadResourceFn: ir.services,
+		name:           "service",
+		parentDir:      ctx.instanceDirectory,
+		printMode:      RuntimeObject})
+	r.addCollector(&resourceCollector{
+		loadResourceFn: ir.deployments,
+		name:           "deployment",
+		parentDir:      ctx.instanceDirectory,
+		printMode:      RuntimeObject})
+	r.addCollector(&resourceCollector{
+		loadResourceFn: ir.statefulSets,
+		name:           "statefulset",
+		parentDir:      ctx.instanceDirectory,
+		printMode:      RuntimeObject})
+	r.addCollector(&resourceCollector{
+		loadResourceFn: ir.replicaSets,
+		name:           "replicaset",
+		parentDir:      ctx.instanceDirectory,
+		printMode:      RuntimeObject})
+	r.addCollector(&resourceCollector{
+		loadResourceFn: ir.statefulSets,
+		name:           "statefulset",
+		parentDir:      ctx.instanceDirectory,
+		printMode:      RuntimeObject})
+	r.addCollector(&resourceCollector{
+		loadResourceFn: ir.serviceAccounts,
+		name:           "serviceaccount",
+		parentDir:      ctx.instanceDirectory,
+		printMode:      RuntimeObject})
+	r.addCollector(&resourceCollector{
+		loadResourceFn: ir.clusterRoleBindings,
+		name:           "clusterrolebinding",
+		parentDir:      ctx.instanceDirectory,
+		printMode:      RuntimeObject})
+	r.addCollector(&resourceCollector{
+		loadResourceFn: ir.roleBindings,
+		name:           "rolebinding",
+		parentDir:      ctx.instanceDirectory,
+		printMode:      RuntimeObject})
+	r.addCollector(&resourceCollector{
+		loadResourceFn: ir.clusterRoles,
+		name:           "clusterrole",
+		parentDir:      ctx.instanceDirectory,
+		printMode:      RuntimeObject})
+	r.addCollector(&resourceCollector{
+		loadResourceFn: ir.roles,
+		name:           "role",
+		parentDir:      ctx.instanceDirectory,
+		printMode:      RuntimeObject})
+	r.addCollector(&logsCollector{

This is another reason why a dedicated PodCollector might be better: right now the complexity is hidden in the fact that the logsCollector has to run last and depends on ctx.podList. A PodCollector would collect both pod.yaml and pod.log as an "atomic" unit.

vemelin-epm

comment created time in 2 days

Pull request review comment kudobuilder/kudo

simplified diagnostics

+package diagnostics
+
+import (
+	"github.com/kudobuilder/kudo/pkg/kudoctl/env"
+	"github.com/kudobuilder/kudo/pkg/kudoctl/util/kudo"
+	"github.com/kudobuilder/kudo/pkg/version"
+)
+
+func diagForInstance(instance string, options *Options, c *kudo.Client, info version.Info, s *env.Settings, p *nonFailingPrinter) error {
+	ir, err := newInstanceResources(instance, options, c, s)
+	if err != nil {
+		p.printError(err, DiagDir, "instance")
+		return err
+	}
+
+	ctx := &processingContext{root: DiagDir, instanceName: instance}
+
+	runner := runnerForInstance(ir, ctx)
+	runner.addObjDump(info, ctx.rootDirectory, "version")
+	runner.addObjDump(s, ctx.rootDirectory, "settings")
+
+	if err := runner.run(p); err != nil {
+		return err
+	}
+
+	return nil
+}
+
+func diagForKudoManager(options *Options, c *kudo.Client, p *nonFailingPrinter) error {
+	kr, err := newKudoResources(options, c)
+	if err != nil {
+		return err
+	}
+	ctx := &processingContext{root: KudoDir}
+
+	runner := runnerForKudoManager(kr, ctx)
+
+	if err := runner.run(p); err != nil {
+		return err
+	}
+
+	return nil
+}
+
+func runnerForInstance(ir *resourceFuncsConfig, ctx *processingContext) *runner {
+	r := &runner{}
+
+	instance := resourceCollectorGroup{[]resourceCollector{

I still think that we would be better off with a dedicated high-level collector like InstanceCollector than with a generic resourceCollectorGroup. But this might be a topic for a later refactoring.

vemelin-epm

comment created time in 2 days

Pull request review comment kudobuilder/kudo

simplified diagnostics

+package diagnostics

I would probably extract each collector into its own small file and put them all into diagnostics/collectors. Wdyt?

vemelin-epm

comment created time in 2 days

Pull request review comment kudobuilder/kudo

simplified diagnostics

+package diagnostics
+
+import (
+	"fmt"
+	"io"
+	"path/filepath"
+	"reflect"
+
+	v1 "k8s.io/api/core/v1"
+	"k8s.io/apimachinery/pkg/api/meta"
+	"k8s.io/apimachinery/pkg/runtime"
+
+	"github.com/kudobuilder/kudo/pkg/kudoctl/clog"
+)
+
+// Ensure collector is implemented
+var _ collector = &resourceCollector{}
+
+// resourceCollector - collector interface implementation for Kubernetes resources (runtime objects)
+type resourceCollector struct {
+	loadResourceFn func() (runtime.Object, error)
+	name           string               // object kind used to describe the error
+	parentDir      stringGetter         // parent dir to attach the printer's output
+	failOnError    bool                 // define whether the collector should return the error
+	callback       func(runtime.Object) // will be called with the retrieved resource after collection to update shared context
+	printMode      printMode
+}
+
+// collect - load a resource and send either the resource or collection error to printer
+// return error if failOnError field is set to true
+// if failOnError is true, finding no object(s) is treated as an error
+func (c *resourceCollector) collect(printer *nonFailingPrinter) error {
+	clog.V(4).Printf("Collect Resource %s in parent dir %s", c.name, c.parentDir())
+	obj, err := c._collect(c.failOnError)
+	if err != nil {
+		printer.printError(err, c.parentDir(), c.name)
+		if c.failOnError {
+			return err
+		}
+	}
+	if obj != nil {
+		printer.printObject(obj, c.parentDir(), c.printMode)
+	}
+	return nil
+}
+
+func emptyResult(obj runtime.Object) bool {
+	return obj == nil || reflect.ValueOf(obj).IsNil() || (meta.IsListType(obj) && meta.LenList(obj) == 0)
+}
+
+func (c *resourceCollector) _collect(failOnError bool) (runtime.Object, error) {

It's very non-Go to have method names starting with _.

vemelin-epm

comment created time in 2 days

Pull request review comment kudobuilder/kudo

simplified diagnostics

+package diagnostics
+
+import (
+	"fmt"
+	"io"
+	"path/filepath"
+	"reflect"
+
+	v1 "k8s.io/api/core/v1"
+	"k8s.io/apimachinery/pkg/api/meta"
+	"k8s.io/apimachinery/pkg/runtime"
+
+	"github.com/kudobuilder/kudo/pkg/kudoctl/clog"
+)
+
+// Ensure collector is implemented
+var _ collector = &resourceCollector{}
+
+// resourceCollector - collector interface implementation for Kubernetes resources (runtime objects)
+type resourceCollector struct {
+	loadResourceFn func() (runtime.Object, error)
+	name           string               // object kind used to describe the error
+	parentDir      stringGetter         // parent dir to attach the printer's output
+	failOnError    bool                 // define whether the collector should return the error

Do we need the failOnError parameter? The only collector that should fail on error is the one collecting an Instance resource. Can we just hardcode it?

vemelin-epm

comment created time in 2 days

Pull request review comment kudobuilder/kudo

Refactor operator package installation

+package install
+
+import (
+	"fmt"
+
+	"github.com/thoas/go-funk"
+
+	"github.com/kudobuilder/kudo/pkg/kudoctl/clog"
+	"github.com/kudobuilder/kudo/pkg/kudoctl/packages"
+	"github.com/kudobuilder/kudo/pkg/kudoctl/util/kudo"
+)
+
+func installOperatorAndOperatorVersion(client *kudo.Client, resources packages.Resources) error {
+	if !client.OperatorExistsInCluster(resources.Operator.Name, resources.Operator.Namespace) {
+		if _, err := client.InstallOperatorObjToCluster(resources.Operator, resources.Operator.Namespace); err != nil {
+			return fmt.Errorf(
+				"failed to install %s-operator.yaml in namespace %s: %v",
+				resources.Operator.Name,
+				resources.Operator.Namespace,
+				err)
+		}
+		clog.Printf(
+			"operator.%s/%s created in namespace %s",

By default, k8s and KUDO print objects as namespace/name - let's not break this pattern, please, as it makes grepping logs easier and more consistent.

nfnt

comment created time in 2 days

Pull request review comment kudobuilder/kudo

Refactor operator package installation

+// Package install provides function to install package resources
+// on a Kubernetes cluster.
+package install
+
+import (
+	"strings"
+	"time"
+
+	"github.com/kudobuilder/kudo/pkg/apis/kudo/v1beta1"
+	"github.com/kudobuilder/kudo/pkg/kudoctl/clog"
+	"github.com/kudobuilder/kudo/pkg/kudoctl/packages"
+	"github.com/kudobuilder/kudo/pkg/kudoctl/util/kudo"
+)
+
+type Options struct {

There can be multiple reasons for that. And again, being forced to resolve multiple cyclic dependencies (and giving up on at least one of them) in our codebase makes me appreciate this pattern a lot. Especially in a package like install, which is potentially used by both server and client and itself uses server types (like Instance). But I guess we'll have to agree to disagree here.

nfnt

comment created time in 2 days

push event kudobuilder/kudo

Aleksey Dukhovniy

commit sha b6e2a4d3dc42f69abc2efada15e24cb6f46bceeb

`instanceParameters` method returns an empty map instead of nil Signed-off-by: Aleksey Dukhovniy <alex.dukhovniy@googlemail.com>


push time in 2 days

Pull request review comment kudobuilder/kudo

KEP-29: Add `KudoOperatorTask` implementation

 type KudoOperatorTask struct {
 	InstanceName    string
 	AppVersion      string
 	OperatorVersion string
+	ParameterFile   string
 }
 
 // Run method for the KudoOperatorTask. Not yet implemented
 func (dt KudoOperatorTask) Run(ctx Context) (bool, error) {
-	return false, errors.New("kudo-operator task is not yet implemented. Stay tuned though ;)")
+
+	// 0. - A few prerequisites -
+	// Note: ctx.Meta has Meta.OperatorName and Meta.OperatorVersion fields but these are of the **parent instance**
+	// However, since we don't support multiple namespaces yet, we can use the Meta.InstanceNamespace for the namespace
+	namespace := ctx.Meta.InstanceNamespace
+	operatorName := dt.Package
+	operatorVersion := dt.OperatorVersion
+	operatorVersionName := v1beta1.OperatorVersionName(operatorName, operatorVersion)
+	instanceName := dependencyInstanceName(ctx.Meta.InstanceName, dt.InstanceName, operatorName)
+
+	// 1. - Expand parameter file if exists -
+	params, err := instanceParameters(dt.ParameterFile, ctx.Templates, ctx.Meta, ctx.Parameters)
+	if err != nil {
+		return false, fatalExecutionError(err, taskRenderingError, ctx.Meta)
+	}
+
+	// 2. - Build the instance object -
+	instance, err := instanceResource(instanceName, operatorName, operatorVersionName, namespace, params, ctx.Meta.ResourcesOwner, ctx.Scheme)
+	if err != nil {
+		return false, fatalExecutionError(err, taskRenderingError, ctx.Meta)
+	}
+
+	// 3. - Apply the Instance object -
+	err = applyInstance(instance, namespace, ctx.Client)
+	if err != nil {
+		return false, err
+	}
+
+	// 4. - Check the Instance health -
+	if err := health.IsHealthy(instance); err != nil {
+		return false, nil
+	}
+
+	return true, nil
+}
+
+// dependencyInstanceName returns a name for the child instance in an operator with dependencies looking like
+// <parent-instance.<child-instance> if a child instance name is provided e.g. `kafka-instance.custom-name` or
+// <parent-instance.<child-operator> if not e.g. `kafka-instance.zookeeper`. This way we always have a valid child
+// instance name and user can install the same operator multiple times in the same namespace, because the instance
+// names will be unique thanks to the top-level instance name prefix.
+func dependencyInstanceName(parentInstanceName, instanceName, operatorName string) string {
+	if instanceName != "" {
+		return fmt.Sprintf("%s.%s", parentInstanceName, instanceName)
+	}
+	return fmt.Sprintf("%s.%s", parentInstanceName, operatorName)
+}
+
+// render method takes templated parameter file and a map of parameters and then renders passed template using kudo engine.
+func instanceParameters(pf string, templates map[string]string, meta renderer.Metadata, parameters map[string]interface{}) (map[string]string, error) {
+	if len(pf) != 0 {
+		pft, ok := templates[pf]
+		if !ok {
+			return nil, fmt.Errorf("error finding parameter file %s", pf)
+		}
+
+		rendered, err := renderParametersFile(pf, pft, meta, parameters)
+		if err != nil {
+			return nil, fmt.Errorf("error expanding parameter file %s: %w", pf, err)
+		}
+
+		parameters := map[string]string{}
+		errs := []string{}
+		parser.GetParametersFromFile(pf, []byte(rendered), errs, parameters)
+		if len(errs) > 0 {
+			return nil, fmt.Errorf("failed to unmarshal parameter file %s: %s", pf, strings.Join(errs, ", "))
+		}
+
+		return parameters, nil
+	}
+
+	return nil, nil
+}
+
+func renderParametersFile(pf string, pft string, meta renderer.Metadata, parameters map[string]interface{}) (string, error) {
+	vals := renderer.
+		NewVariableMap().
+		WithInstance(meta.OperatorName, meta.InstanceName, meta.InstanceNamespace, meta.AppVersion, meta.OperatorVersion).
+		WithParameters(parameters)
+
+	engine := renderer.New()
+
+	return engine.Render(pf, pft, vals)
+}
+
+func instanceResource(instanceName, operatorName, operatorVersionName, namespace string, parameters map[string]string, owner metav1.Object, scheme *runtime.Scheme) (*v1beta1.Instance, error) {
+	instance := &v1beta1.Instance{
+		TypeMeta: metav1.TypeMeta{
+			Kind:       "Instance",
+			APIVersion: packages.APIVersion,
+		},
+		ObjectMeta: metav1.ObjectMeta{
+			Name:      instanceName,
+			Namespace: namespace,
+			Labels:    map[string]string{kudo.OperatorLabel: operatorName},
+		},
+		Spec: v1beta1.InstanceSpec{
+			OperatorVersion: corev1.ObjectReference{
+				Name: operatorVersionName,
+			},
+			Parameters: parameters,
+		},
+		Status: v1beta1.InstanceStatus{},
+	}
+	if err := controllerutil.SetControllerReference(owner, instance, scheme); err != nil {
+		return nil, fmt.Errorf("failed to set resource ownership for the new instance: %v", err)
+	}
+
+	return instance, nil
+}
+
+// applyInstance creates the passed instance if it doesn't exist or patches the existing one. Patch will override
+// current spec.parameters and Spec.operatorVersion the same way, kudoctl does it. If the was no error, then the passed
+// instance object is updated with the content returned by the server
+func applyInstance(new *v1beta1.Instance, ns string, c client.Client) error {
+	old := &v1beta1.Instance{}
+	err := c.Get(context.TODO(), types.NamespacedName{Name: new.Name, Namespace: ns}, old)
+
+	switch {
+	// 1. if instance doesn't exist, create it
+	case apierrors.IsNotFound(err):
+		log.Printf("Instance %s/%s doesn't exist. Creating it", new.Namespace, new.Name)
+		return createInstance(new, c)
+	// 2. if the instance exists (there was no error), try to patch it
+	case err == nil:
+		log.Printf("Instance %s/%s already exist. Patching it", new.Namespace, new.Name)
+		return patchInstance(new, c)
+	// 3. any other error is treated as transient
+	default:
+		return fmt.Errorf("failed to check if instance %s/%s already exists: %v", new.Namespace, new.Name, err)
+	}
+}
+
+func createInstance(i *v1beta1.Instance, c client.Client) error {
+	gvk := i.GroupVersionKind()
+	err := c.Create(context.TODO(), i)
+
+	// reset the GVK since it is removed by the c.Create call
+	// https://github.com/kubernetes/kubernetes/issues/80609
+	i.SetGroupVersionKind(gvk)
+
+	return err
+}
+
+func patchInstance(i *v1beta1.Instance, c client.Client) error {
+	patch, err := json.Marshal(struct {
+		Spec *v1beta1.InstanceSpec `json:"spec"`
+	}{
+		Spec: &i.Spec,
+	})
+
+	if err != nil {
+		return fmt.Errorf("failed to serialize instance %s/%s patch: %v", i.Namespace, i.Name, err)
+	}
+
+	return c.Patch(context.TODO(), i, client.RawPatch(types.MergePatchType, patch))

The applyResources sets all the labels and annotations but, more importantly, it sets kudo.dev/last-applied-configuration. However, an Instance already has a similar annotation, set by the instance_controller.go and called kudo.dev/last-applied-instance-state, which is something I wanted to remove altogether. What do you think about setting patchStrategy:"merge" for spec.parameters, like:

type InstanceSpec struct {
	...
	Parameters map[string]string `json:"parameters,omitempty" patchStrategy:"merge"`
}

At least in kudo_operator_task.go we generate a full map of parameters each time, so overriding everything should work, right?

zen-dog

comment created time in 2 days

Pull request review comment kudobuilder/kudo

KEP-29: Add `KudoOperatorTask` implementation

 func getParamsFromFiles(fs afero.Fs, filePaths []string, errs []string) (map[str
 			errs = append(errs, fmt.Sprintf("error reading from parameter file %s: %v", filePath, err))
 			continue
 		}
-		data := make(map[string]interface{})
-		err = yaml.Unmarshal(rawData, &data)
+
+		errs = GetParametersFromFile(filePath, rawData, errs, parameters)
+
+	}
+	return parameters, errs
+}
+
+func GetParametersFromFile(filePath string, bytes []byte, errs []string, parameters map[string]string) []string {
+	data := make(map[string]interface{})
+	err := yaml.Unmarshal(bytes, &data)
+	if err != nil {
+		errs = append(errs, fmt.Sprintf("error unmarshalling content of parameter file %s: %v", filePath, err))
+		return errs

Ok, I reworked it, and the end result is pretty close to the variant above: verbosity is reduced and the method returns an error. However, I keep the original error with the corresponding key.

zen-dog

comment created time in 2 days

push event kudobuilder/kudo

Aleksey Dukhovniy

commit sha c27382c45967280bd79081e373bd5e5d70319b47

refactored params/parser.go to not take errs as input parameters Signed-off-by: Aleksey Dukhovniy <alex.dukhovniy@googlemail.com>


push time in 2 days

Pull request review comment kudobuilder/kudo

KEP-29: Add `KudoOperatorTask` implementation

 func getParamsFromFiles(fs afero.Fs, filePaths []string, errs []string) (map[str
 			errs = append(errs, fmt.Sprintf("error reading from parameter file %s: %v", filePath, err))
 			continue
 		}
-		data := make(map[string]interface{})
-		err = yaml.Unmarshal(rawData, &data)
+
+		errs = GetParametersFromFile(filePath, rawData, errs, parameters)
+
+	}
+	return parameters, errs
+}
+
+func GetParametersFromFile(filePath string, bytes []byte, errs []string, parameters map[string]string) []string {

I thought nil slice expansion wouldn't work, but it does. Ok, I'll adjust.

zen-dog

comment created time in 2 days

Pull request review comment kudobuilder/kudo

KEP-29: Add `KudoOperatorTask` implementation

 func getParamsFromFiles(fs afero.Fs, filePaths []string, errs []string) (map[str
 			errs = append(errs, fmt.Sprintf("error reading from parameter file %s: %v", filePath, err))
 			continue
 		}
-		data := make(map[string]interface{})
-		err = yaml.Unmarshal(rawData, &data)
+
+		errs = GetParametersFromFile(filePath, rawData, errs, parameters)
+
+	}
+	return parameters, errs
+}
+
+func GetParametersFromFile(filePath string, bytes []byte, errs []string, parameters map[string]string) []string {

I think the reason it was built like this is for more concise error handling. See, for example, the entry method GetParameterMap. Instead of:

	var errs []string

	paramsFromCmdline, errs := getParamsFromCmdline(raw, errs)
	paramsFromFiles, errs := getParamsFromFiles(fs, filePaths, errs)

you would end up with something like:

	var errs []string

	paramsFromCmdline, cmdlineErrs := getParamsFromCmdline(raw)
	errs = append(errs, cmdlineErrs...)
	paramsFromFiles, fileErrs := getParamsFromFiles(fs, filePaths)
	errs = append(errs, fileErrs...)

and you'll have to do it in all helper methods in this parser. I might be wrong though /cc @porridge

zen-dog

comment created time in 2 days

Pull request review comment kudobuilder/kudo

KEP-29: Add `KudoOperatorTask` implementation

 func getParamsFromFiles(fs afero.Fs, filePaths []string, errs []string) (map[str
 			errs = append(errs, fmt.Sprintf("error reading from parameter file %s: %v", filePath, err))
 			continue
 		}
-		data := make(map[string]interface{})
-		err = yaml.Unmarshal(rawData, &data)
+
+		errs = GetParametersFromFile(filePath, rawData, errs, parameters)
+
+	}
+	return parameters, errs
+}
+
+func GetParametersFromFile(filePath string, bytes []byte, errs []string, parameters map[string]string) []string {
+	data := make(map[string]interface{})
+	err := yaml.Unmarshal(bytes, &data)
+	if err != nil {
+		errs = append(errs, fmt.Sprintf("error unmarshalling content of parameter file %s: %v", filePath, err))
+		return errs

These are all valid points, but I merely extracted this part from the getParamsFromFiles method above and kept the overall structure. This is the way this parser is built starting with the entry method GetParameterMap above: all the helper methods are treating errors as strings gathering them together.

Imho the biggest drawback of the above approach is that the individual errors collected in this line:

errs = append(errs, fmt.Sprintf("error converting value of parameter %s from file %s %q to a string: %v", key, filePath, value, err))

are lost and replaced by a generic "error converting values to strings" without the details of what exactly went wrong.
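For what it's worth: with Go 1.20+ the per-error detail could be preserved without threading []string through every helper, by collecting real errors and joining them at the end. A sketch only; parseValue and parseAll are hypothetical stand-ins, not the actual parser functions:

```go
package main

import (
	"errors"
	"fmt"
)

// parseValue stands in for the per-parameter conversion step;
// the real parser converts YAML values to strings.
func parseValue(key string, value interface{}) (string, error) {
	s, ok := value.(string)
	if !ok {
		return "", fmt.Errorf("error converting value of parameter %s %v to a string", key, value)
	}
	return s, nil
}

func parseAll(raw map[string]interface{}) (map[string]string, error) {
	out := map[string]string{}
	var errs []error
	for k, v := range raw {
		s, err := parseValue(k, v)
		if err != nil {
			errs = append(errs, err) // keep the detailed error, not a flattened string
			continue
		}
		out[k] = s
	}
	// errors.Join returns nil for an empty slice, so callers keep
	// the usual `if err != nil` shape.
	return out, errors.Join(errs...)
}

func main() {
	_, err := parseAll(map[string]interface{}{"A": "ok", "B": 42})
	fmt.Println(err) // the joined error keeps the per-parameter detail
}
```

Call sites then stay as concise as today, but no individual error message is lost.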

zen-dog

comment created time in 2 days

push eventkudobuilder/kudo

Aleksey Dukhovniy

commit sha 8105119bed0aead97cf6a555eaf4b65d705573c1

first batch of review feedback Signed-off-by: Aleksey Dukhovniy <alex.dukhovniy@googlemail.com>

view details

push time in 3 days

Pull request review commentkudobuilder/kudo

KEP-29: Add `KudoOperatorTask` implementation

 type KudoOperatorTask struct { 	InstanceName    string 	AppVersion      string 	OperatorVersion string+	ParameterFile   string }  // Run method for the KudoOperatorTask. Not yet implemented func (dt KudoOperatorTask) Run(ctx Context) (bool, error) {-	return false, errors.New("kudo-operator task is not yet implemented. Stay tuned though ;)")++	// 0. - A few prerequisites -+	// Note: ctx.Meta has Meta.OperatorName and Meta.OperatorVersion fields but these are of the **parent instance**+	// However, since we don't support multiple namespaces yet, we can use the Meta.InstanceNamespace for the namespace+	namespace := ctx.Meta.InstanceNamespace+	operatorName := dt.Package+	operatorVersion := dt.OperatorVersion+	operatorVersionName := v1beta1.OperatorVersionName(operatorName, operatorVersion)+	instanceName := dependencyInstanceName(ctx.Meta.InstanceName, dt.InstanceName, operatorName)++	// 1. - Expand parameter file if exists -+	params, err := instanceParameters(dt.ParameterFile, ctx.Templates, ctx.Meta, ctx.Parameters)+	if err != nil {+		return false, fatalExecutionError(err, taskRenderingError, ctx.Meta)+	}++	// 2. - Build the instance object -+	instance, err := instanceResource(instanceName, operatorName, operatorVersionName, namespace, params, ctx.Meta.ResourcesOwner, ctx.Scheme)+	if err != nil {+		return false, fatalExecutionError(err, taskRenderingError, ctx.Meta)+	}++	// 3. - Apply the Instance object -+	err = applyInstance(instance, namespace, ctx.Client)+	if err != nil {+		return false, err+	}++	// 4. - Check the Instance health -+	if err := health.IsHealthy(instance); err != nil {+		return false, nil+	}++	return true, nil+}++// dependencyInstanceName returns a name for the child instance in an operator with dependencies looking like+// <parent-instance.<child-instance> if a child instance name is provided e.g. `kafka-instance.custom-name` or+// <parent-instance.<child-operator> if not e.g. `kafka-instance.zookeeper`. 
This way we always have a valid child+// instance name and user can install the same operator multiple times in the same namespace, because the instance+// names will be unique thanks to the top-level instance name prefix.+func dependencyInstanceName(parentInstanceName, instanceName, operatorName string) string {+	if instanceName != "" {+		return fmt.Sprintf("%s.%s", parentInstanceName, instanceName)+	}+	return fmt.Sprintf("%s.%s", parentInstanceName, operatorName)+}++// render method takes templated parameter file and a map of parameters and then renders passed template using kudo engine.+func instanceParameters(pf string, templates map[string]string, meta renderer.Metadata, parameters map[string]interface{}) (map[string]string, error) {+	if len(pf) != 0 {+		pft, ok := templates[pf]+		if !ok {+			return nil, fmt.Errorf("error finding parameter file %s", pf)+		}++		rendered, err := renderParametersFile(pf, pft, meta, parameters)+		if err != nil {+			return nil, fmt.Errorf("error expanding parameter file %s: %w", pf, err)+		}++		parameters := map[string]string{}+		errs := []string{}+		parser.GetParametersFromFile(pf, []byte(rendered), errs, parameters)+		if len(errs) > 0 {+			return nil, fmt.Errorf("failed to unmarshal parameter file %s: %s", pf, strings.Join(errs, ", "))+		}++		return parameters, nil+	}++	return nil, nil+}++func renderParametersFile(pf string, pft string, meta renderer.Metadata, parameters map[string]interface{}) (string, error) {+	vals := renderer.+		NewVariableMap().+		WithInstance(meta.OperatorName, meta.InstanceName, meta.InstanceNamespace, meta.AppVersion, meta.OperatorVersion).+		WithParameters(parameters)++	engine := renderer.New()++	return engine.Render(pf, pft, vals)+}++func instanceResource(instanceName, operatorName, operatorVersionName, namespace string, parameters map[string]string, owner metav1.Object, scheme *runtime.Scheme) (*v1beta1.Instance, error) {+	instance := &v1beta1.Instance{+		TypeMeta: metav1.TypeMeta{+			Kind:       
"Instance",+			APIVersion: packages.APIVersion,+		},+		ObjectMeta: metav1.ObjectMeta{+			Name:      instanceName,+			Namespace: namespace,+			Labels:    map[string]string{kudo.OperatorLabel: operatorName},+		},+		Spec: v1beta1.InstanceSpec{+			OperatorVersion: corev1.ObjectReference{+				Name: operatorVersionName,+			},+			Parameters: parameters,+		},+		Status: v1beta1.InstanceStatus{},+	}+	if err := controllerutil.SetControllerReference(owner, instance, scheme); err != nil {+		return nil, fmt.Errorf("failed to set resource ownership for the new instance: %v", err)+	}++	return instance, nil+}++// applyInstance creates the passed instance if it doesn't exist or patches the existing one. Patch will override+// current spec.parameters and Spec.operatorVersion the same way, kudoctl does it. If the was no error, then the passed+// instance object is updated with the content returned by the server+func applyInstance(new *v1beta1.Instance, ns string, c client.Client) error {+	old := &v1beta1.Instance{}+	err := c.Get(context.TODO(), types.NamespacedName{Name: new.Name, Namespace: ns}, old)++	switch {+	// 1. if instance doesn't exist, create it+	case apierrors.IsNotFound(err):+		log.Printf("Instance %s/%s doesn't exist. Creating it", new.Namespace, new.Name)+		return createInstance(new, c)+	// 2. if the instance exists (there was no error), try to patch it+	case err == nil:+		log.Printf("Instance %s/%s already exist. Patching it", new.Namespace, new.Name)+		return patchInstance(new, c)+	// 3. 
any other error is treated as transient+	default:+		return fmt.Errorf("failed to check if instance %s/%s already exists: %v", new.Namespace, new.Name, err)+	}+}++func createInstance(i *v1beta1.Instance, c client.Client) error {+	gvk := i.GroupVersionKind()+	err := c.Create(context.TODO(), i)++	// reset the GVK since it is removed by the c.Create call+	// https://github.com/kubernetes/kubernetes/issues/80609+	i.SetGroupVersionKind(gvk)++	return err+}++func patchInstance(i *v1beta1.Instance, c client.Client) error {+	patch, err := json.Marshal(struct {+		Spec *v1beta1.InstanceSpec `json:"spec"`+	}{+		Spec: &i.Spec,+	})++	if err != nil {+		return fmt.Errorf("failed to serialize instance %s/%s patch: %v", i.Namespace, i.Name, err)+	}++	return c.Patch(context.TODO(), i, client.RawPatch(types.MergePatchType, patch))

You're right. I've adopted this logic from kudo.go::UpdateInstance. But switching to the StrategicMergePatchType would require storing the previous state, and the instance controller is already doing that... I've created an issue #1544
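The create-or-patch switch quoted in the diff above is a common shape. Purely as an illustration, with the Kubernetes client swapped for an in-memory store so it runs standalone (store and errNotFound are hypothetical stand-ins, not client-go types):

```go
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("not found")

// store stands in for the Kubernetes API: Get fails with errNotFound
// for unknown names; any other error would be treated as transient.
type store map[string]string

func (s store) Get(name string) (string, error) {
	if _, ok := s[name]; !ok {
		return "", errNotFound
	}
	return s[name], nil
}

// applyInstance mirrors the three-way switch from the diff:
// create when absent, patch when present, bubble up anything else.
func applyInstance(s store, name, spec string) (string, error) {
	_, err := s.Get(name)
	switch {
	case errors.Is(err, errNotFound):
		s[name] = spec
		return "created", nil
	case err == nil:
		s[name] = spec
		return "patched", nil
	default:
		return "", fmt.Errorf("failed to check if instance %s exists: %v", name, err)
	}
}

func main() {
	s := store{}
	r1, _ := applyInstance(s, "kafka-instance.zookeeper", "v1")
	r2, _ := applyInstance(s, "kafka-instance.zookeeper", "v2")
	fmt.Println(r1, r2) // prints "created patched"
}
```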

zen-dog

comment created time in 3 days

issue openedkudobuilder/kudo

`kudo upgrade/update` commands do not remove existing parameters


What happened: Consider the following scenario: we have an Instance running that has the parameter:

spec:
  parameters:
    FANCY_PARAM_THAT_CAN_BE_THERE_OR_NOT: "cool value"

the param existence is templated like:

{{ if eq .Params.INCLUDE_STUFF "true" }}
FANCY_PARAM_THAT_CAN_BE_THERE_OR_NOT: "cool value"
{{ end }}

now, the user updates the instance disabling the INCLUDE_STUFF like:

$ kudo update --instance foo -p INCLUDE_STUFF=false

and expects that the parameter is removed from the instance; however, it is still there because kudoctl uses MergePatchType when updating/upgrading an Instance.

What you expected to happen: The parameter should be removed in the scenario above
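For context, this is plain RFC 7386 (JSON merge patch) semantics: a key that is absent from the patch is left untouched, and only an explicit null deletes it. Since kudoctl builds the patch from the new spec, the removed parameter is simply absent, not null, and therefore survives. A minimal sketch of the merge rule (a hand-rolled merge for illustration, not the client-go implementation):

```go
package main

import "fmt"

// merge applies RFC 7386 JSON-merge-patch semantics to maps:
// keys in the patch overwrite the target, explicit nulls delete,
// and keys absent from the patch are left untouched.
func merge(target, patch map[string]interface{}) map[string]interface{} {
	out := map[string]interface{}{}
	for k, v := range target {
		out[k] = v
	}
	for k, v := range patch {
		if v == nil {
			delete(out, k) // only an explicit null removes a key
			continue
		}
		pm, pok := v.(map[string]interface{})
		tm, tok := out[k].(map[string]interface{})
		if pok && tok {
			out[k] = merge(tm, pm)
			continue
		}
		out[k] = v
	}
	return out
}

func main() {
	current := map[string]interface{}{
		"parameters": map[string]interface{}{
			"FANCY_PARAM_THAT_CAN_BE_THERE_OR_NOT": "cool value",
		},
	}
	// The patch contains only the *new* parameters; the old key is
	// absent, not null, so it survives the merge.
	patch := map[string]interface{}{
		"parameters": map[string]interface{}{"INCLUDE_STUFF": "false"},
	}
	result := merge(current, patch)
	params := result["parameters"].(map[string]interface{})
	fmt.Println(params["FANCY_PARAM_THAT_CAN_BE_THERE_OR_NOT"]) // prints "cool value"
}
```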


created time in 3 days

Pull request review commentkudobuilder/kudo

Refactor operator package installation

+// Package install provides function to install package resources+// on a Kubernetes cluster.+package install++import (+	"strings"+	"time"++	"github.com/kudobuilder/kudo/pkg/apis/kudo/v1beta1"+	"github.com/kudobuilder/kudo/pkg/kudoctl/clog"+	"github.com/kudobuilder/kudo/pkg/kudoctl/packages"+	"github.com/kudobuilder/kudo/pkg/kudoctl/util/kudo"+)++type Options struct {

"Adding an install/types.go won't change anything regarding cyclic dependencies": not yet, that's why I wrote that it becomes easier "later" 😉

"I consider this an anti-pattern": well, the k8s codebase disagrees with you, and frankly so do I. We adopted this pattern for a good reason: the lack of fine-grained imports in Go makes it challenging to avoid cyclic dependencies.

nfnt

comment created time in 3 days

Pull request review commentkudobuilder/kudo

KEP-29: Add `KudoOperatorTask` implementation

 type KudoOperatorTask struct { 	InstanceName    string 	AppVersion      string 	OperatorVersion string+	ParameterFile   string }  // Run method for the KudoOperatorTask. Not yet implemented func (dt KudoOperatorTask) Run(ctx Context) (bool, error) {-	return false, errors.New("kudo-operator task is not yet implemented. Stay tuned though ;)")++	// 0. - A few prerequisites -+	// Note: ctx.Meta has Meta.OperatorName and Meta.OperatorVersion fields but these are of the **parent instance**+	// However, since we don't support multiple namespaces yet, we can use the Meta.InstanceNamespace for the namespace+	namespace := ctx.Meta.InstanceNamespace+	operatorName := dt.Package+	operatorVersion := dt.OperatorVersion+	operatorVersionName := v1beta1.OperatorVersionName(operatorName, operatorVersion)+	instanceName := dependencyInstanceName(ctx.Meta.InstanceName, dt.InstanceName, operatorName)++	// 1. - Expand parameter file if exists -+	params, err := instanceParameters(dt.ParameterFile, ctx.Templates, ctx.Meta, ctx.Parameters)+	if err != nil {+		return false, fatalExecutionError(err, taskRenderingError, ctx.Meta)+	}++	// 2. - Build the instance object -+	instance, err := instanceResource(instanceName, operatorName, operatorVersionName, namespace, params, ctx.Meta.ResourcesOwner, ctx.Scheme)+	if err != nil {+		return false, fatalExecutionError(err, taskRenderingError, ctx.Meta)+	}++	// 3. - Apply the Instance object -+	err = applyInstance(instance, namespace, ctx.Client)+	if err != nil {+		return false, err+	}++	// 4. - Check the Instance health -+	if err := health.IsHealthy(instance); err != nil {+		return false, nil+	}++	return true, nil+}++// dependencyInstanceName returns a name for the child instance in an operator with dependencies looking like+// <parent-instance.<child-instance> if a child instance name is provided e.g. `kafka-instance.custom-name` or+// <parent-instance.<child-operator> if not e.g. `kafka-instance.zookeeper`. 
This way we always have a valid child+// instance name and user can install the same operator multiple times in the same namespace, because the instance+// names will be unique thanks to the top-level instance name prefix.+func dependencyInstanceName(parentInstanceName, instanceName, operatorName string) string {+	if instanceName != "" {+		return fmt.Sprintf("%s.%s", parentInstanceName, instanceName)+	}+	return fmt.Sprintf("%s.%s", parentInstanceName, operatorName)+}++// render method takes templated parameter file and a map of parameters and then renders passed template using kudo engine.+func instanceParameters(pf string, templates map[string]string, meta renderer.Metadata, parameters map[string]interface{}) (map[string]string, error) {+	if len(pf) != 0 {+		pft, ok := templates[pf]+		if !ok {+			return nil, fmt.Errorf("error finding parameter file %s", pf)+		}++		rendered, err := renderParametersFile(pf, pft, meta, parameters)+		if err != nil {+			return nil, fmt.Errorf("error expanding parameter file %s: %w", pf, err)+		}++		parameters := map[string]string{}+		errs := []string{}+		parser.GetParametersFromFile(pf, []byte(rendered), errs, parameters)+		if len(errs) > 0 {+			return nil, fmt.Errorf("failed to unmarshal parameter file %s: %s", pf, strings.Join(errs, ", "))+		}++		return parameters, nil+	}++	return nil, nil+}++func renderParametersFile(pf string, pft string, meta renderer.Metadata, parameters map[string]interface{}) (string, error) {+	vals := renderer.+		NewVariableMap().+		WithInstance(meta.OperatorName, meta.InstanceName, meta.InstanceNamespace, meta.AppVersion, meta.OperatorVersion).+		WithParameters(parameters)++	engine := renderer.New()++	return engine.Render(pf, pft, vals)+}++func instanceResource(instanceName, operatorName, operatorVersionName, namespace string, parameters map[string]string, owner metav1.Object, scheme *runtime.Scheme) (*v1beta1.Instance, error) {+	instance := &v1beta1.Instance{+		TypeMeta: metav1.TypeMeta{+			Kind:       
"Instance",+			APIVersion: packages.APIVersion,+		},+		ObjectMeta: metav1.ObjectMeta{+			Name:      instanceName,+			Namespace: namespace,+			Labels:    map[string]string{kudo.OperatorLabel: operatorName},+		},+		Spec: v1beta1.InstanceSpec{+			OperatorVersion: corev1.ObjectReference{+				Name: operatorVersionName,+			},+			Parameters: parameters,+		},+		Status: v1beta1.InstanceStatus{},+	}+	if err := controllerutil.SetControllerReference(owner, instance, scheme); err != nil {+		return nil, fmt.Errorf("failed to set resource ownership for the new instance: %v", err)+	}++	return instance, nil+}++// applyInstance creates the passed instance if it doesn't exist or patches the existing one. Patch will override+// current spec.parameters and Spec.operatorVersion the same way, kudoctl does it. If the was no error, then the passed+// instance object is updated with the content returned by the server+func applyInstance(new *v1beta1.Instance, ns string, c client.Client) error {

I wanted that initially too, but it turned out to be more complex than I thought:

  • different error handling
  • different logging
  • different logic with waiting for the Instance, checking its existence, and a dozen other little things
  • different clients

But maybe your refactoring in #1542 will resolve the issues 😉
zen-dog

comment created time in 3 days

Pull request review commentkudobuilder/kudo

Refactor operator package installation

+// Package install provides function to install package resources+// on a Kubernetes cluster.+package install++import (+	"strings"+	"time"++	"github.com/kudobuilder/kudo/pkg/apis/kudo/v1beta1"+	"github.com/kudobuilder/kudo/pkg/kudoctl/clog"+	"github.com/kudobuilder/kudo/pkg/kudoctl/packages"+	"github.com/kudobuilder/kudo/pkg/kudoctl/util/kudo"+)++type Options struct {+	skipInstance    bool+	wait            *time.Duration+	createNamespace bool+}++type Option func(*Options)++// SkipInstance installs only Operator and OperatorVersion+// of an operator package.+func SkipInstance() Option {+	return func(o *Options) {+		o.skipInstance = true+	}+}++// WaitForInstance waits an amount of time for the instance+// to complete installation.+func WaitForInstance(duration time.Duration) Option {+	return func(o *Options) {+		o.wait = &duration+	}+}++// CreateNamespace creates the specified namespace before installation.+// If available, a namespace manifest in the operator package is+// rendered using the installation parameters.+func CreateNamespace() Option {+	return func(o *Options) {+		o.createNamespace = true+	}+}++// Package installs an operator package with parameters into a namespace.+// Instance name, namespace and operator parameters are applied to the+// operator package resources. 
These rendered resources are then created+// on the Kubernetes cluster.+func Package(+	client *kudo.Client,+	instanceName string,+	namespace string,+	resources packages.Resources,+	parameters map[string]string,+	opts ...Option) error {+	clog.V(3).Printf("operator name: %v", resources.Operator.Name)+	clog.V(3).Printf("operator version: %v", resources.OperatorVersion.Spec.Version)++	options := Options{}+	for _, o := range opts {+		o(&options)+	}++	applyOverrides(&resources, instanceName, namespace, parameters)++	if err := client.ValidateServerForOperator(resources.Operator); err != nil {+		return err+	}++	if options.createNamespace {+		if err := installNamespace(client, resources, parameters); err != nil {+			return err+		}+	}++	if err := installOperatorAndOperatorVersion(client, resources); err != nil {+		return err+	}++	if options.skipInstance {+		return nil+	}++	if err := validateParameters(

In the case of KEP-29, we're allowing potentially broken parameters to skip validation and make it to the server

nfnt

comment created time in 3 days

Pull request review commentkudobuilder/kudo

KEP-29: Add `KudoOperatorTask` implementation

 type KudoOperatorTask struct { 	InstanceName    string 	AppVersion      string 	OperatorVersion string+	ParameterFile   string }  // Run method for the KudoOperatorTask. Not yet implemented func (dt KudoOperatorTask) Run(ctx Context) (bool, error) {-	return false, errors.New("kudo-operator task is not yet implemented. Stay tuned though ;)")++	// 0. - A few prerequisites -+	// Note: ctx.Meta has Meta.OperatorName and Meta.OperatorVersion fields but these are of the **parent instance**+	// However, since we don't support multiple namespaces yet, we can use the Meta.InstanceNamespace for the namespace+	namespace := ctx.Meta.InstanceNamespace+	operatorName := dt.Package

That's a very good point (which I forgot to document). It is called Package because that's what we call it on the CLI side. Since O/OV resources are created on the client side, the package must be resolved there too. Here, the same resolution rules apply, e.g. a local path takes priority over a remote repo. This might come in handy when debugging/developing operators, where a child operator can (temporarily) be installed from a local folder. However, once we're on the server side, this has to be an operator name (overridden by the CLI?).

zen-dog

comment created time in 3 days

Pull request review commentkudobuilder/kudo

KEP-29: Add `KudoOperatorTask` implementation

 type KudoOperatorTask struct { 	InstanceName    string 	AppVersion      string 	OperatorVersion string+	ParameterFile   string }  // Run method for the KudoOperatorTask. Not yet implemented func (dt KudoOperatorTask) Run(ctx Context) (bool, error) {-	return false, errors.New("kudo-operator task is not yet implemented. Stay tuned though ;)")++	// 0. - A few prerequisites -+	// Note: ctx.Meta has Meta.OperatorName and Meta.OperatorVersion fields but these are of the **parent instance**+	// However, since we don't support multiple namespaces yet, we can use the Meta.InstanceNamespace for the namespace+	namespace := ctx.Meta.InstanceNamespace+	operatorName := dt.Package+	operatorVersion := dt.OperatorVersion+	operatorVersionName := v1beta1.OperatorVersionName(operatorName, operatorVersion)+	instanceName := dependencyInstanceName(ctx.Meta.InstanceName, dt.InstanceName, operatorName)++	// 1. - Expand parameter file if exists -+	params, err := instanceParameters(dt.ParameterFile, ctx.Templates, ctx.Meta, ctx.Parameters)

Worth mentioning, that you can have N child operators and therefore N parameter files. I believe, at this point, a convention would become cumbersome 😉
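Since the parameter file is itself a template that is only expanded during plan execution, the rendering step can be sketched with stock text/template (KUDO's actual renderer adds more functions and metadata; renderParams and the file name below are illustrative):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// renderParams expands a child operator's templated parameter file
// with the parent's parameters before it is parsed as YAML.
func renderParams(name, tmpl string, params map[string]interface{}) (string, error) {
	t, err := template.New(name).Parse(tmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	// Expose the parameters under .Params, mirroring the template
	// variables available in operator templates.
	if err := t.Execute(&buf, map[string]interface{}{"Params": params}); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	out, err := renderParams("zookeeper-params.yaml",
		"NODE_COUNT: {{ .Params.ZK_NODES }}\n",
		map[string]interface{}{"ZK_NODES": "3"})
	if err != nil {
		panic(err)
	}
	fmt.Print(out) // prints "NODE_COUNT: 3"
}
```

With N child operators there would simply be N such files, each rendered independently against the parent's parameters.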

zen-dog

comment created time in 3 days

Pull request review commentkudobuilder/kudo

KEP-29: Add `KudoOperatorTask` implementation

 func newKudoOperator(task *v1beta1.Task) (Tasker, error) { 		return nil, fmt.Errorf("task validation error: kudo operator task '%s' has an empty package name", task.Name) 	} +	if len(task.Spec.KudoOperatorTaskSpec.OperatorVersion) == 0 {

You keep saying this, but that's how we check it 😆 (at least in this file, see above)

zen-dog

comment created time in 3 days

Pull request review commentkudobuilder/kudo

KEP-29: Add `KudoOperatorTask` implementation

 type KudoOperatorTaskSpec struct { 	// a specific operator version in the official repo, defaults to the most recent one 	// +optional 	OperatorVersion string `json:"operatorVersion,omitempty"`+	// a parameter file name that will be used to populate Instance.Spec.Parameters

I rephrased it a little but the gist is there

zen-dog

comment created time in 3 days

Pull request review commentkudobuilder/kudo

KEP-29: Add `KudoOperatorTask` implementation

 type KudoOperatorTaskSpec struct { 	// a specific operator version in the official repo, defaults to the most recent one 	// +optional 	OperatorVersion string `json:"operatorVersion,omitempty"`+	// a parameter file name that will be used to populate Instance.Spec.Parameters

Regarding the name parameterFile: from the perspective of the operator developer, it is exactly what it is: the name of the file with parameters for the child operator. I'm open to suggestions though.

As for why it needs to be a file and not an array: that's described in the KEP, but the tl;dr is: the parameter file is templated and evaluated on the server side, during plan execution (same as with the parent params.yaml file)

zen-dog

comment created time in 3 days

Pull request review commentkudobuilder/kudo

KEP-29: Add `KudoOperatorTask` implementation

 func GetPhaseStatus(phaseName string, planStatus *PlanStatus) *PhaseStatus {  	return nil }++func InstanceName(operatorName string) string {+	return fmt.Sprintf("%s-instance", operatorName)+}++func OperatorVersionName(operatorName, operatorVersion string) string {

I agree 👍

zen-dog

comment created time in 3 days

Pull request review commentkudobuilder/kudo

KEP-29: Add `KudoOperatorTask` implementation

 func GetPhaseStatus(phaseName string, planStatus *PlanStatus) *PhaseStatus {  	return nil }++func InstanceName(operatorName string) string {

It is "typeless" somewhat on purpose. In the case of OperatorVersionName below, o *Operator is not always there and the operator name can sometimes come from:

  • packages.OperatorFile
  • KudoOperatorTaskSpec
  • o *Operator?

so I decided to keep it simple for now 🤷
zen-dog

comment created time in 3 days

Pull request review commentkudobuilder/kudo

Refactor operator package installation

+// Package install provides function to install package resources+// on a Kubernetes cluster.+package install++import (+	"strings"+	"time"++	"github.com/kudobuilder/kudo/pkg/apis/kudo/v1beta1"+	"github.com/kudobuilder/kudo/pkg/kudoctl/clog"+	"github.com/kudobuilder/kudo/pkg/kudoctl/packages"+	"github.com/kudobuilder/kudo/pkg/kudoctl/util/kudo"+)++type Options struct {+	skipInstance    bool+	wait            *time.Duration+	createNamespace bool+}++type Option func(*Options)++// SkipInstance installs only Operator and OperatorVersion+// of an operator package.+func SkipInstance() Option {+	return func(o *Options) {+		o.skipInstance = true+	}+}++// WaitForInstance waits an amount of time for the instance+// to complete installation.+func WaitForInstance(duration time.Duration) Option {+	return func(o *Options) {+		o.wait = &duration+	}+}++// CreateNamespace creates the specified namespace before installation.+// If available, a namespace manifest in the operator package is+// rendered using the installation parameters.+func CreateNamespace() Option {+	return func(o *Options) {+		o.createNamespace = true+	}+}++// Package installs an operator package with parameters into a namespace.+// Instance name, namespace and operator parameters are applied to the+// operator package resources. 
These rendered resources are then created+// on the Kubernetes cluster.+func Package(+	client *kudo.Client,+	instanceName string,+	namespace string,+	resources packages.Resources,+	parameters map[string]string,+	opts ...Option) error {+	clog.V(3).Printf("operator name: %v", resources.Operator.Name)+	clog.V(3).Printf("operator version: %v", resources.OperatorVersion.Spec.Version)++	options := Options{}+	for _, o := range opts {+		o(&options)+	}++	applyOverrides(&resources, instanceName, namespace, parameters)++	if err := client.ValidateServerForOperator(resources.Operator); err != nil {+		return err+	}++	if options.createNamespace {+		if err := installNamespace(client, resources, parameters); err != nil {+			return err+		}+	}++	if err := installOperatorAndOperatorVersion(client, resources); err != nil {+		return err+	}++	if options.skipInstance {+		return nil+	}++	if err := validateParameters(+		*resources.Instance,+		resources.OperatorVersion.Spec.Parameters); err != nil {+		return err+	}++	if err := installInstance(client, resources.Instance); err != nil {+		return err+	}++	if options.wait != nil {+		if err := waitForInstance(client, resources.Instance, *options.wait); err != nil {+			return err+		}+	}++	return nil+}++func applyOverrides(+	resources *packages.Resources,+	instanceName string,+	namespace string,+	parameters map[string]string) {+	resources.Operator.SetNamespace(namespace)+	resources.OperatorVersion.SetNamespace(namespace)+	resources.Instance.SetNamespace(namespace)++	if instanceName != "" {+		resources.Instance.SetName(instanceName)+		clog.V(3).Printf("instance name: %v", instanceName)

Here and in L121, let's improve logging using namespaced instance names as mentioned above

nfnt

comment created time in 3 days

Pull request review commentkudobuilder/kudo

Refactor operator package installation

+package install++import (+	"fmt"++	"github.com/thoas/go-funk"++	"github.com/kudobuilder/kudo/pkg/kudoctl/clog"+	"github.com/kudobuilder/kudo/pkg/kudoctl/packages"+	"github.com/kudobuilder/kudo/pkg/kudoctl/util/kudo"+)++func installOperatorAndOperatorVersion(client *kudo.Client, resources packages.Resources) error {

Nice, we'll need this method for the dependencies.

nfnt

comment created time in 3 days

Pull request review commentkudobuilder/kudo

Refactor operator package installation

+package install++import (+	"fmt"++	"github.com/thoas/go-funk"++	"github.com/kudobuilder/kudo/pkg/kudoctl/clog"+	"github.com/kudobuilder/kudo/pkg/kudoctl/packages"+	"github.com/kudobuilder/kudo/pkg/kudoctl/util/kudo"+)++func installOperatorAndOperatorVersion(client *kudo.Client, resources packages.Resources) error {+	if !client.OperatorExistsInCluster(resources.Operator.Name, resources.Operator.Namespace) {+		if _, err := client.InstallOperatorObjToCluster(resources.Operator, resources.Operator.Namespace); err != nil {+			return fmt.Errorf("failed to install %s-operator.yaml: %v", resources.Operator.Name, err)+		}+		clog.Printf("operator.%s/%s created", resources.Operator.APIVersion, resources.Operator.Name)

Some more logging improvements (see below too) are needed 😉

nfnt

comment created time in 3 days

Pull request review commentkudobuilder/kudo

Refactor operator package installation

+// Package install provides function to install package resources+// on a Kubernetes cluster.+package install++import (+	"strings"+	"time"++	"github.com/kudobuilder/kudo/pkg/apis/kudo/v1beta1"+	"github.com/kudobuilder/kudo/pkg/kudoctl/clog"+	"github.com/kudobuilder/kudo/pkg/kudoctl/packages"+	"github.com/kudobuilder/kudo/pkg/kudoctl/util/kudo"+)++type Options struct {

I'd like to adopt the types.go pattern as with other packages and keep all general types in an install/types.go. It makes untangling cyclic dependencies later much easier.

nfnt

comment created time in 3 days

Pull request review commentkudobuilder/kudo

Refactor operator package installation

+// Package install provides function to install package resources

why is this file not called install/package.go?

nfnt

comment created time in 3 days

Pull request review commentkudobuilder/kudo

Refactor operator package installation

+// Package install provides function to install package resources+// on a Kubernetes cluster.+package install++import (+	"strings"+	"time"++	"github.com/kudobuilder/kudo/pkg/apis/kudo/v1beta1"+	"github.com/kudobuilder/kudo/pkg/kudoctl/clog"+	"github.com/kudobuilder/kudo/pkg/kudoctl/packages"+	"github.com/kudobuilder/kudo/pkg/kudoctl/util/kudo"+)++type Options struct {+	skipInstance    bool+	wait            *time.Duration+	createNamespace bool+}++type Option func(*Options)++// SkipInstance installs only Operator and OperatorVersion+// of an operator package.+func SkipInstance() Option {+	return func(o *Options) {+		o.skipInstance = true+	}+}++// WaitForInstance waits an amount of time for the instance+// to complete installation.+func WaitForInstance(duration time.Duration) Option {+	return func(o *Options) {+		o.wait = &duration+	}+}++// CreateNamespace creates the specified namespace before installation.+// If available, a namespace manifest in the operator package is+// rendered using the installation parameters.+func CreateNamespace() Option {+	return func(o *Options) {+		o.createNamespace = true+	}+}++// Package installs an operator package with parameters into a namespace.+// Instance name, namespace and operator parameters are applied to the+// operator package resources. These rendered resources are then created+// on the Kubernetes cluster.+func Package(

Shouldn't this be:

func InstallPackage(

as with other similar methods e.g. installNamespace or installInstance?

nfnt

comment created time in 3 days

Pull request review commentkudobuilder/kudo

Refactor operator package installation

+// Package install provides function to install package resources+// on a Kubernetes cluster.+package install++import (+	"strings"+	"time"++	"github.com/kudobuilder/kudo/pkg/apis/kudo/v1beta1"+	"github.com/kudobuilder/kudo/pkg/kudoctl/clog"+	"github.com/kudobuilder/kudo/pkg/kudoctl/packages"+	"github.com/kudobuilder/kudo/pkg/kudoctl/util/kudo"+)++type Options struct {+	skipInstance    bool+	wait            *time.Duration+	createNamespace bool+}++type Option func(*Options)++// SkipInstance installs only Operator and OperatorVersion+// of an operator package.+func SkipInstance() Option {+	return func(o *Options) {+		o.skipInstance = true+	}+}++// WaitForInstance waits an amount of time for the instance+// to complete installation.+func WaitForInstance(duration time.Duration) Option {+	return func(o *Options) {+		o.wait = &duration+	}+}++// CreateNamespace creates the specified namespace before installation.+// If available, a namespace manifest in the operator package is+// rendered using the installation parameters.+func CreateNamespace() Option {+	return func(o *Options) {+		o.createNamespace = true+	}+}++// Package installs an operator package with parameters into a namespace.+// Instance name, namespace and operator parameters are applied to the+// operator package resources. 
These rendered resources are then created+// on the Kubernetes cluster.+func Package(+	client *kudo.Client,+	instanceName string,+	namespace string,+	resources packages.Resources,+	parameters map[string]string,+	opts ...Option) error {+	clog.V(3).Printf("operator name: %v", resources.Operator.Name)+	clog.V(3).Printf("operator version: %v", resources.OperatorVersion.Spec.Version)++	options := Options{}+	for _, o := range opts {+		o(&options)+	}++	applyOverrides(&resources, instanceName, namespace, parameters)++	if err := client.ValidateServerForOperator(resources.Operator); err != nil {+		return err+	}++	if options.createNamespace {+		if err := installNamespace(client, resources, parameters); err != nil {+			return err+		}+	}++	if err := installOperatorAndOperatorVersion(client, resources); err != nil {+		return err+	}++	if options.skipInstance {+		return nil+	}++	if err := validateParameters(+		*resources.Instance,+		resources.OperatorVersion.Spec.Parameters); err != nil {+		return err+	}++	if err := installInstance(client, resources.Instance); err != nil {+		return err+	}++	if options.wait != nil {+		if err := waitForInstance(client, resources.Instance, *options.wait); err != nil {+			return err+		}+	}++	return nil

Much cleaner 👏

nfnt

comment created time in 3 days

Pull request review comment kudobuilder/kudo

Refactor operator package installation

+// Package install provides function to install package resources+// on a Kubernetes cluster.+package install++import (+	"strings"+	"time"++	"github.com/kudobuilder/kudo/pkg/apis/kudo/v1beta1"+	"github.com/kudobuilder/kudo/pkg/kudoctl/clog"+	"github.com/kudobuilder/kudo/pkg/kudoctl/packages"+	"github.com/kudobuilder/kudo/pkg/kudoctl/util/kudo"+)++type Options struct {+	skipInstance    bool+	wait            *time.Duration+	createNamespace bool+}++type Option func(*Options)++// SkipInstance installs only Operator and OperatorVersion+// of an operator package.+func SkipInstance() Option {+	return func(o *Options) {+		o.skipInstance = true+	}+}++// WaitForInstance waits an amount of time for the instance+// to complete installation.+func WaitForInstance(duration time.Duration) Option {+	return func(o *Options) {+		o.wait = &duration+	}+}++// CreateNamespace creates the specified namespace before installation.+// If available, a namespace manifest in the operator package is+// rendered using the installation parameters.+func CreateNamespace() Option {+	return func(o *Options) {+		o.createNamespace = true+	}+}++// Package installs an operator package with parameters into a namespace.+// Instance name, namespace and operator parameters are applied to the+// operator package resources. 
These rendered resources are then created+// on the Kubernetes cluster.+func Package(+	client *kudo.Client,+	instanceName string,+	namespace string,+	resources packages.Resources,+	parameters map[string]string,+	opts ...Option) error {+	clog.V(3).Printf("operator name: %v", resources.Operator.Name)+	clog.V(3).Printf("operator version: %v", resources.OperatorVersion.Spec.Version)++	options := Options{}+	for _, o := range opts {+		o(&options)+	}++	applyOverrides(&resources, instanceName, namespace, parameters)++	if err := client.ValidateServerForOperator(resources.Operator); err != nil {+		return err+	}++	if options.createNamespace {+		if err := installNamespace(client, resources, parameters); err != nil {+			return err+		}+	}++	if err := installOperatorAndOperatorVersion(client, resources); err != nil {+		return err+	}++	if options.skipInstance {+		return nil+	}++	if err := validateParameters(

Shouldn't we validate parameters before we jump out in L83?

nfnt

comment created time in 3 days

Pull request review comment kudobuilder/kudo

Refactor operator package installation

+// Package install provides function to install package resources+// on a Kubernetes cluster.+package install++import (+	"strings"+	"time"++	"github.com/kudobuilder/kudo/pkg/apis/kudo/v1beta1"+	"github.com/kudobuilder/kudo/pkg/kudoctl/clog"+	"github.com/kudobuilder/kudo/pkg/kudoctl/packages"+	"github.com/kudobuilder/kudo/pkg/kudoctl/util/kudo"+)++type Options struct {+	skipInstance    bool+	wait            *time.Duration+	createNamespace bool+}++type Option func(*Options)++// SkipInstance installs only Operator and OperatorVersion+// of an operator package.+func SkipInstance() Option {+	return func(o *Options) {+		o.skipInstance = true+	}+}++// WaitForInstance waits an amount of time for the instance+// to complete installation.+func WaitForInstance(duration time.Duration) Option {+	return func(o *Options) {+		o.wait = &duration+	}+}++// CreateNamespace creates the specified namespace before installation.+// If available, a namespace manifest in the operator package is+// rendered using the installation parameters.+func CreateNamespace() Option {+	return func(o *Options) {+		o.createNamespace = true+	}+}++// Package installs an operator package with parameters into a namespace.+// Instance name, namespace and operator parameters are applied to the+// operator package resources. These rendered resources are then created+// on the Kubernetes cluster.+func Package(+	client *kudo.Client,+	instanceName string,+	namespace string,+	resources packages.Resources,+	parameters map[string]string,+	opts ...Option) error {+	clog.V(3).Printf("operator name: %v", resources.Operator.Name)

You probably copied this from the old method, but the logging could be friendlier. E.g. the KUDO manager logs something like:

	clog.V(3).Printf("Preparing %s/%s:%s for installation", namespace, resources.Operator.Name, resources.OperatorVersion.Spec.Version)

This way we also adopt logging instances/operators together with their namespaces (which is important for debugging).

nfnt

comment created time in 3 days

Pull request review comment kudobuilder/kudo

Refactor operator package installation

+// Package install provides function to install package resources+// on a Kubernetes cluster.+package install++import (+	"strings"+	"time"++	"github.com/kudobuilder/kudo/pkg/apis/kudo/v1beta1"+	"github.com/kudobuilder/kudo/pkg/kudoctl/clog"+	"github.com/kudobuilder/kudo/pkg/kudoctl/packages"+	"github.com/kudobuilder/kudo/pkg/kudoctl/util/kudo"+)++type Options struct {+	skipInstance    bool+	wait            *time.Duration+	createNamespace bool+}++type Option func(*Options)++// SkipInstance installs only Operator and OperatorVersion+// of an operator package.+func SkipInstance() Option {

It's an interesting pattern, but I wonder why we need it. Why not simply pass an install.Options struct to the install.Package method and let the caller set the options that are needed?

nfnt

comment created time in 3 days
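As context for the trade-off raised in the review above, here is a minimal standalone sketch of both call styles; the type and function names are illustrative, not the actual KUDO API:

```go
package main

import "fmt"

// Options mirrors the struct under discussion (illustrative, not the
// actual KUDO API).
type Options struct {
	SkipInstance    bool
	CreateNamespace bool
}

// Option is the functional-option form.
type Option func(*Options)

func SkipInstance() Option {
	return func(o *Options) { o.SkipInstance = true }
}

// Functional-option style: zero-value defaults, callers opt in per call.
func installWithFuncOpts(opts ...Option) Options {
	o := Options{}
	for _, apply := range opts {
		apply(&o)
	}
	return o
}

// Plain-struct style, as suggested in the review: the caller fills the
// struct directly; unset fields fall back to their zero values.
func installWithStruct(o Options) Options {
	return o
}

func main() {
	fmt.Println(installWithFuncOpts(SkipInstance()).SkipInstance)             // → true
	fmt.Println(installWithStruct(Options{SkipInstance: true}).SkipInstance)  // → true
}
```

Functional options keep call sites short and validated as the option set grows; the plain struct is simpler but spells out defaults via zero values at every call site.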

pull request comment kudobuilder/kudo

KEP-29: Add `KudoOperatorTask` implementation

Failing operator tests are due to the quay.io outage.

zen-dog

comment created time in 7 days

push event kudobuilder/kudo

Aleksey Dukhovniy

commit sha a0e008d9d87624c8af550cab1cb358a75d06f317

added an integration test and dependent instances now have `ownerReference` set Signed-off-by: Aleksey Dukhovniy <alex.dukhovniy@googlemail.com>

view details

push time in 8 days

Pull request review commentkudobuilder/kudo

simplified diagnostics

 import ( // Client is a KUDO Client providing access to a kudo clientset and kubernetes clientsets type Client struct { 	kudoClientset versioned.Interface-	kubeClientset kubernetes.Interface+	kubernetes.Interface

I actually didn't mind this change. KUDO types are distinct enough from k8s ones and I see no reason to type client.kubeClientset.* every time. But this is a soft opinion.

vemelin-epm

comment created time in 8 days

push event kudobuilder/kudo

Ken Sipe

commit sha 609b74442b3a0cd78d3a0335f6a6dec9cd1cc4e2

KUTTL 0.2.2 Bump (#1532) Signed-off-by: Ken Sipe <kensipe@gmail.com>

view details

Andreas Neumann

commit sha 2e1b38c457934b3f4104523f1d30c59729d7de6c

KEP-30: Immutable parameters (#1485) * Added KEP for immutable parameters Signed-off-by: Andreas Neumann <aneumann@mesosphere.com>

view details

Ken Sipe

commit sha 66cada219ba062b79bd154a823e5d26e3c2880bf

removing 32-bit darwin from release (#1534) Signed-off-by: Ken Sipe <kensipe@gmail.com>

view details

Ken Sipe

commit sha cdbbade020b441e3f1a76a680a0e35c13a43e240

Namespace Package Verify (#1536) Co-authored-by: Andreas Neumann <aneumann@mesosphere.com> Signed-off-by: Ken Sipe <kensipe@gmail.com>

view details

Ken Sipe

commit sha f3e6f9d60b8f064d767cdd8f56a323c5f14bfeae

KEP31: Template Support for Namespace Manifest (#1535) Co-authored-by: Andreas Neumann <aneumann@mesosphere.com> Signed-off-by: Ken Sipe <kensipe@gmail.com>

view details

Ken Sipe

commit sha 73cd8c24b3381596438ef8b5e69e106f699bf5cc

Plan Update and Trigger with Wait (#1470) Signed-off-by: Ken Sipe <kensipe@gmail.com>

view details

Jan Schlicht

commit sha 5e6ccf2f754de83efa5762f4794ff7883f96f08d

Separate E2E and operator tests (#1540) This runs operator tests as a separate test in parallel with the other tests. It makes operator tests independent from E2E test results and their failures distinguishable. Signed-off-by: Jan Schlicht <jan@d2iq.com> Signed-off-by: Ken Sipe <kensipe@gmail.com> Co-authored-by: Ken Sipe <kensipe@gmail.com>

view details

Aleksey Dukhovniy

commit sha cbd9ba9794d74b274b8d3f7ff3f21599faeeea15

Merge branch 'master' into ad/kudo-operator-task

view details

Aleksey Dukhovniy

commit sha f40027a4c76d1f91b9b47a99717efcb847121767

using newly introduced `VariablesMap` for template rendering Signed-off-by: Aleksey Dukhovniy <alex.dukhovniy@googlemail.com>

view details

push time in 8 days

Pull request review comment kudobuilder/kudo

KEP-29: Add `KudoOperatorTask` implementation

 package task  import (-	"errors"+	"context"+	"encoding/json"+	"fmt"+	"strings"++	"github.com/prometheus/common/log"

right? 😕

zen-dog

comment created time in 8 days

Pull request review comment kudobuilder/kudo

KEP-29: Add `KudoOperatorTask` implementation

 require ( 	github.com/mitchellh/copystructure v1.0.0 // indirect 	github.com/onsi/ginkgo v1.12.0 	github.com/onsi/gomega v1.9.0+	github.com/prometheus/common v0.4.1

I think, at this point, GoLand is just screwing with me by importing a random logging library every time I type log. 😿 Removed

zen-dog

comment created time in 8 days

Pull request review comment kudobuilder/kudo

KEP-29: Add `KudoOperatorTask` implementation

 type KudoOperatorTask struct { 	InstanceName    string 	AppVersion      string 	OperatorVersion string+	ParameterFile   string }  // Run method for the KudoOperatorTask. Not yet implemented func (dt KudoOperatorTask) Run(ctx Context) (bool, error) {-	return false, errors.New("kudo-operator task is not yet implemented. Stay tuned though ;)")++	// 0. - A few prerequisites -+	// Note: ctx.Meta has Meta.OperatorName and Meta.OperatorVersion fields but these are of the **parent instance**+	// However, since we don't support multiple namespaces yet, we can use the Meta.InstanceNamespace for the namespace+	namespace := ctx.Meta.InstanceNamespace+	operatorName := dt.Package+	operatorVersion := dt.OperatorVersion+	operatorVersionName := v1beta1.OperatorVersionName(operatorName, operatorVersion)++	// 1. - Expand parameter file if exists -+	params, err := instanceParameters(dt.ParameterFile, ctx.Templates, ctx.Meta, ctx.Parameters)+	if err != nil {+		return false, fatalExecutionError(err, taskRenderingError, ctx.Meta)+	}++	// 2. - Build the instance object -+	// TODO: make it possible to install the same operator N times in the same namespace by making instance names unique/hierarchical+	instance := instanceResource(operatorName, operatorVersionName, namespace, params)++	// 3. - Apply the Instance object -+	err = applyInstance(instance, namespace, ctx.Client)+	if err != nil {+		return false, err+	}++	// 4. 
- Check the Instance health -+	if err := health.IsHealthy(instance); err != nil {+		return false, nil+	}++	return true, nil+}++// render method takes templated parameter file and a map of parameters and then renders passed template using kudo engine.+func instanceParameters(pf string, templates map[string]string, meta renderer.Metadata, parameters map[string]interface{}) (map[string]string, error) {+	if len(pf) != 0 {+		pft, ok := templates[pf]+		if !ok {+			return nil, fmt.Errorf("error finding parameter file %s", pf)+		}++		rendered, err := renderParametersFile(pf, pft, meta, parameters)+		if err != nil {+			return nil, fmt.Errorf("error expanding parameter file %s: %w", pf, err)+		}++		parameters := map[string]string{}+		errs := []string{}+		parser.GetParametersFromFile(pf, []byte(rendered), errs, parameters)+		if len(errs) > 0 {+			return nil, fmt.Errorf("failed to unmarshal parameter file %s: %s", pf, strings.Join(errs, ", "))+		}++		return parameters, nil+	}++	return nil, nil+}++func renderParametersFile(pf string, pft string, meta renderer.Metadata, parameters map[string]interface{}) (string, error) {+	configs := make(map[string]interface{})+	configs["OperatorName"] = meta.OperatorName+	configs["AppVersion"] = meta.AppVersion+	configs["Name"] = meta.InstanceName+	configs["Namespace"] = meta.InstanceNamespace+	configs["Params"] = parameters

Nice! I like the new VariablesMap

zen-dog

comment created time in 8 days

push event kudobuilder/kudo

Aleksey Dukhovniy

commit sha aff5c65e819f6aec51150429c2d6d3c333ce62fa

removed the sneaked in prometheus logger Signed-off-by: Aleksey Dukhovniy <alex.dukhovniy@googlemail.com>

view details

push time in 8 days

push event kudobuilder/kudo

Aleksey Dukhovniy

commit sha ac3c853af3873c120a75050b8a3a8605ffc657e6

updated golden files and CRDs Signed-off-by: Aleksey Dukhovniy <alex.dukhovniy@googlemail.com>

view details

push time in 8 days

Pull request review comment kudobuilder/kudo

KEP-29: Add `KudoOperatorTask` implementation

 func getParamsFromFiles(fs afero.Fs, filePaths []string, errs []string) (map[str 			errs = append(errs, fmt.Sprintf("error reading from parameter file %s: %v", filePath, err)) 			continue 		}-		data := make(map[string]interface{})-		err = yaml.Unmarshal(rawData, &data)++		errs = GetParametersFromFile(filePath, rawData, errs, parameters)++	}+	return parameters, errs+}++func GetParametersFromFile(filePath string, bytes []byte, errs []string, parameters map[string]string) []string {

Split this part out of the getParamsFromFiles method because I needed it for paramsFile expansion.

zen-dog

comment created time in 8 days

Pull request review comment kudobuilder/kudo

KEP-29: Add `KudoOperatorTask` implementation

 func remove(values []string, s string) []string { 	}) } +// GetOperatorVersion retrieves OperatorVersion belonging to the given instance

Moved from instance_controller.go

zen-dog

comment created time in 8 days

PR opened kudobuilder/kudo

Reviewers
KEP-29: Add `KudoOperatorTask` implementation

Summary: implemented KudoOperatorTask which, given a KudoOperator task in the operator, will create the Instance object and wait for it to become healthy. Additionally, added paramsFile to the KudoOperatorTaskSpec.

Fixes: #1509

Signed-off-by: Aleksey Dukhovniy alex.dukhovniy@googlemail.com

+455 -65

0 comment

13 changed files

pr created time in 8 days

create branch kudobuilder/kudo

branch : ad/kudo-operator-task

created branch time in 8 days

Pull request review comment kudobuilder/kudo

Separate E2E and operator tests

 jobs:       - store_artifacts:           path: kind-logs.tar.bz2 +  operator-test:

nit:

  operators-test:
nfnt

comment created time in 9 days

Pull request review comment kudobuilder/kudo

simplified diagnostics

+package diagnostics++import (+	"github.com/kudobuilder/kudo/pkg/apis/kudo/v1beta1"+	"github.com/spf13/afero"+	"github.com/stretchr/testify/assert"+	v1 "k8s.io/api/core/v1"+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"+	"k8s.io/apimachinery/pkg/runtime"+	"testing"+)++func TestPrintLog (t *testing.T){}++func TestPrintError (t *testing.T){}++func TestPrintYaml (t *testing.T){}++func Test_nonFailingPrinter_printObject(t *testing.T) {++	tests := []struct {+		name      string+		o         runtime.Object+		parentDir string+		mode      printMode+		expFiles  []string+		failOn    string+	}{+		{+			name:      "kube object with dir",+			o:         &v1.Pod{+				ObjectMeta: metav1.ObjectMeta{Namespace: fakeNamespace, Name: "my-fancy-pod"},+			},+			parentDir: "root",+			mode:      ObjectWithDir,+			expFiles: []string{"root/pod_my-fancy-pod/my-fancy-pod.yaml"},+		},+		{+			name:      "kudo object with dir",+			o:         &v1beta1.Operator{+				TypeMeta:   metav1.TypeMeta{+					Kind: "Operator",+					APIVersion: "kudo.dev/v1beta1",+				},+				ObjectMeta: metav1.ObjectMeta{Namespace: fakeNamespace, Name: "my-fancy-operator"},+			},+			parentDir: "root",+			mode:      ObjectWithDir,+			expFiles: []string{"root/operator_my-fancy-operator/my-fancy-operator.yaml"},+		},+		{+			name:      "kube object as runtime object",+			o:         &v1.Pod{+				ObjectMeta: metav1.ObjectMeta{Namespace: fakeNamespace, Name: "my-fancy-pod"},+			},+			parentDir: "root",+			mode:      RuntimeObject,+			expFiles: []string{"root/pod.yaml"},+		},+		{+			name:      "list of objects as runtime object",+			o:         &v1.PodList{+				Items:    []v1.Pod{+					{ObjectMeta: metav1.ObjectMeta{Namespace: fakeNamespace, Name: "my-fancy-pod-01"}},+					{ObjectMeta: metav1.ObjectMeta{Namespace: fakeNamespace, Name: "my-fancy-pod-02"}},+				},+			},+			parentDir: "root",+			mode:      RuntimeObject,+			expFiles: []string{"root/podlist.yaml"},+		},+		{+			name:      "list of objects with dirs",+			o:         
&v1.PodList{+				Items:    []v1.Pod{+					{ObjectMeta: metav1.ObjectMeta{Namespace: fakeNamespace, Name: "my-fancy-pod-01"}},+					{ObjectMeta: metav1.ObjectMeta{Namespace: fakeNamespace, Name: "my-fancy-pod-02"}},+				},+			},+			parentDir: "root",+			mode:      ObjectListWithDirs,+			expFiles: []string{"root/pod_my-fancy-pod-01/my-fancy-pod-01.yaml", "root/pod_my-fancy-pod-02/my-fancy-pod-02.yaml"},+		},+		{+			name:      "list of objects with dirs, one fails",+			o:         &v1.PodList{+				Items:    []v1.Pod{+					{ObjectMeta: metav1.ObjectMeta{Namespace: fakeNamespace, Name: "my-fancy-pod-01"}},+					{ObjectMeta: metav1.ObjectMeta{Namespace: fakeNamespace, Name: "my-fancy-pod-02"}},+					{ObjectMeta: metav1.ObjectMeta{Namespace: fakeNamespace, Name: "my-fancy-pod-03"}},+				},+			},+			parentDir: "root",+			mode:      ObjectListWithDirs,+			expFiles:  []string{"root/pod_my-fancy-pod-01/my-fancy-pod-01.yaml", "root/pod_my-fancy-pod-03/my-fancy-pod-03.yaml"},+			failOn:    "root/pod_my-fancy-pod-02/my-fancy-pod-02.yaml",+		},+	}+	for _, tt := range tests {+		t.Run(tt.name, func(t *testing.T) {

I was looking at the outdated commit 🙈

vemelin-epm

comment created time in 9 days

Pull request review comment kudobuilder/kudo

simplified diagnostics

+package diagnostics++import (+	"log"+	"os"+	"testing"+	"time"++	"github.com/ghodss/yaml"+	"github.com/spf13/afero"+	"github.com/stretchr/testify/assert"++	appsv1 "k8s.io/api/apps/v1"+	corev1 "k8s.io/api/core/v1"+	rbacv1beta1 "k8s.io/api/rbac/v1beta1"+	"k8s.io/apimachinery/pkg/api/errors"+	"k8s.io/apimachinery/pkg/api/meta"+	"k8s.io/apimachinery/pkg/runtime"+	"k8s.io/apimachinery/pkg/runtime/schema"+	"k8s.io/apimachinery/pkg/util/json"+	kubefake "k8s.io/client-go/kubernetes/fake"+	clienttesting "k8s.io/client-go/testing"++	"github.com/kudobuilder/kudo/pkg/apis/kudo/v1beta1"+	"github.com/kudobuilder/kudo/pkg/client/clientset/versioned/fake"+	"github.com/kudobuilder/kudo/pkg/kudoctl/env"+	"github.com/kudobuilder/kudo/pkg/kudoctl/util/kudo"+)++const (+	fakeNamespace  = "my-namespace"+	fakeZkInstance = "zookeeper-instance"+)++const (+	zkOperatorFile        = "diag/operator_zookeeper/zookeeper.yaml"+	zkOperatorVersionFile = "diag/operator_zookeeper/operatorversion_zookeeper-0.3.0/zookeeper-0.3.0.yaml"+	zkPod2File            = "diag/operator_zookeeper/instance_zookeeper-instance/pod_zookeeper-instance-zookeeper-2/zookeeper-instance-zookeeper-2.yaml"+	zkLog2File            = "diag/operator_zookeeper/instance_zookeeper-instance/pod_zookeeper-instance-zookeeper-2/zookeeper-instance-zookeeper-2.log.gz"+	zkServicesFile        = "diag/operator_zookeeper/instance_zookeeper-instance/servicelist.yaml"+	zkPod0File            = "diag/operator_zookeeper/instance_zookeeper-instance/pod_zookeeper-instance-zookeeper-0/zookeeper-instance-zookeeper-0.yaml"+	zkLog0File            = "diag/operator_zookeeper/instance_zookeeper-instance/pod_zookeeper-instance-zookeeper-0/zookeeper-instance-zookeeper-0.log.gz"+	zkInstanceFile        = "diag/operator_zookeeper/instance_zookeeper-instance/zookeeper-instance.yaml"+	zkPod1File            = "diag/operator_zookeeper/instance_zookeeper-instance/pod_zookeeper-instance-zookeeper-1/zookeeper-instance-zookeeper-1.yaml"+	zkLog1File            = 
"diag/operator_zookeeper/instance_zookeeper-instance/pod_zookeeper-instance-zookeeper-1/zookeeper-instance-zookeeper-1.log.gz"+	zkStatefulSetsFile    = "diag/operator_zookeeper/instance_zookeeper-instance/statefulsetlist.yaml"+	versionFile           = "diag/version.yaml"+	kmServicesFile        = "diag/kudo/servicelist.yaml"+	kmPodFile             = "diag/kudo/pod_kudo-controller-manager-0/kudo-controller-manager-0.yaml"+	kmLogFile             = "diag/kudo/pod_kudo-controller-manager-0/kudo-controller-manager-0.log.gz"+	kmServiceAccountsFile = "diag/kudo/serviceaccountlist.yaml"+	kmStatefulSetsFile    = "diag/kudo/statefulsetlist.yaml"+	settingsFile          = "diag/settings.yaml"+)++// defaultFileNames - all the files that should be created if no error happens+func defaultFileNames() map[string]struct{} {+	return map[string]struct{}{+		zkOperatorFile:        {},+		zkOperatorVersionFile: {},+		zkPod2File:            {},+		zkLog2File:            {},+		zkServicesFile:        {},+		zkPod0File:            {},+		zkLog0File:            {},+		zkInstanceFile:        {},+		zkPod1File:            {},+		zkLog1File:            {},+		zkStatefulSetsFile:    {},+		versionFile:           {},+		kmServicesFile:        {},+		kmPodFile:             {},+		kmLogFile:             {},+		kmServiceAccountsFile: {},+		kmStatefulSetsFile:    {},+		settingsFile:          {},+	}+}++// resource to be loaded into fake clients+var (+	// resource of the instance for which diagnostics is run+	pods            corev1.PodList+	serviceAccounts corev1.ServiceAccountList+	services        corev1.ServiceList+	statefulsets    appsv1.StatefulSetList+	pvs             corev1.PersistentVolumeList+	pvcs            corev1.PersistentVolumeClaimList+	operator        v1beta1.Operator+	operatorVersion v1beta1.OperatorVersion+	instance        v1beta1.Instance++	// kudo-manager resources+	kmNs              corev1.Namespace+	kmPod             corev1.Pod+	kmServices        corev1.ServiceList+	kmServiceAccounts 
corev1.ServiceAccountList+	kmStatefulsets    appsv1.StatefulSetList++	// resources unrelated to the diagnosed instance or kudo-manager, should not be collected+	cowPod                corev1.Pod+	defaultServiceAccount corev1.ServiceAccount+	clusterRole           rbacv1beta1.ClusterRole+)++var (+	kubeObjects objectList+	kudoObjects objectList+)++func check(err error) {+	if err != nil {+		log.Fatalln(err)+	}+}++func assertNilError(t *testing.T) func(error) {+	return func(e error) {+		assert.Nil(t, e)

nit (here and everywhere): a more readable assertion for when an error is not expected is assert.NoError(t, err)

vemelin-epm

comment created time in 10 days

Pull request review comment kudobuilder/kudo

simplified diagnostics

+package diagnostics++import (+	"github.com/kudobuilder/kudo/pkg/apis/kudo/v1beta1"+	"github.com/spf13/afero"+	"github.com/stretchr/testify/assert"+	v1 "k8s.io/api/core/v1"+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"+	"k8s.io/apimachinery/pkg/runtime"+	"testing"+)++func TestPrintLog (t *testing.T){}++func TestPrintError (t *testing.T){}++func TestPrintYaml (t *testing.T){}++func Test_nonFailingPrinter_printObject(t *testing.T) {++	tests := []struct {+		name      string+		o         runtime.Object

nit: s/o/obj everywhere. The codebase is not necessarily consistent here, but obj is a de facto standard.

vemelin-epm

comment created time in 10 days

Pull request review comment kudobuilder/kudo

simplified diagnostics

+package diagnostics++import (+	"log"+	"os"+	"testing"+	"time"++	"github.com/ghodss/yaml"+	"github.com/spf13/afero"+	"github.com/stretchr/testify/assert"++	appsv1 "k8s.io/api/apps/v1"+	corev1 "k8s.io/api/core/v1"+	rbacv1beta1 "k8s.io/api/rbac/v1beta1"+	"k8s.io/apimachinery/pkg/api/errors"+	"k8s.io/apimachinery/pkg/api/meta"+	"k8s.io/apimachinery/pkg/runtime"+	"k8s.io/apimachinery/pkg/runtime/schema"+	"k8s.io/apimachinery/pkg/util/json"+	kubefake "k8s.io/client-go/kubernetes/fake"+	clienttesting "k8s.io/client-go/testing"++	"github.com/kudobuilder/kudo/pkg/apis/kudo/v1beta1"+	"github.com/kudobuilder/kudo/pkg/client/clientset/versioned/fake"+	"github.com/kudobuilder/kudo/pkg/kudoctl/env"+	"github.com/kudobuilder/kudo/pkg/kudoctl/util/kudo"+)++const (+	fakeNamespace  = "my-namespace"+	fakeZkInstance = "zookeeper-instance"+)++const (+	zkOperatorFile        = "diag/operator_zookeeper/zookeeper.yaml"+	zkOperatorVersionFile = "diag/operator_zookeeper/operatorversion_zookeeper-0.3.0/zookeeper-0.3.0.yaml"+	zkPod2File            = "diag/operator_zookeeper/instance_zookeeper-instance/pod_zookeeper-instance-zookeeper-2/zookeeper-instance-zookeeper-2.yaml"+	zkLog2File            = "diag/operator_zookeeper/instance_zookeeper-instance/pod_zookeeper-instance-zookeeper-2/zookeeper-instance-zookeeper-2.log.gz"+	zkServicesFile        = "diag/operator_zookeeper/instance_zookeeper-instance/servicelist.yaml"+	zkPod0File            = "diag/operator_zookeeper/instance_zookeeper-instance/pod_zookeeper-instance-zookeeper-0/zookeeper-instance-zookeeper-0.yaml"+	zkLog0File            = "diag/operator_zookeeper/instance_zookeeper-instance/pod_zookeeper-instance-zookeeper-0/zookeeper-instance-zookeeper-0.log.gz"+	zkInstanceFile        = "diag/operator_zookeeper/instance_zookeeper-instance/zookeeper-instance.yaml"+	zkPod1File            = "diag/operator_zookeeper/instance_zookeeper-instance/pod_zookeeper-instance-zookeeper-1/zookeeper-instance-zookeeper-1.yaml"+	zkLog1File            = 
"diag/operator_zookeeper/instance_zookeeper-instance/pod_zookeeper-instance-zookeeper-1/zookeeper-instance-zookeeper-1.log.gz"+	zkStatefulSetsFile    = "diag/operator_zookeeper/instance_zookeeper-instance/statefulsetlist.yaml"+	versionFile           = "diag/version.yaml"+	kmServicesFile        = "diag/kudo/servicelist.yaml"+	kmPodFile             = "diag/kudo/pod_kudo-controller-manager-0/kudo-controller-manager-0.yaml"+	kmLogFile             = "diag/kudo/pod_kudo-controller-manager-0/kudo-controller-manager-0.log.gz"+	kmServiceAccountsFile = "diag/kudo/serviceaccountlist.yaml"+	kmStatefulSetsFile    = "diag/kudo/statefulsetlist.yaml"+	settingsFile          = "diag/settings.yaml"+)++// defaultFileNames - all the files that should be created if no error happens+func defaultFileNames() map[string]struct{} {+	return map[string]struct{}{+		zkOperatorFile:        {},+		zkOperatorVersionFile: {},+		zkPod2File:            {},+		zkLog2File:            {},+		zkServicesFile:        {},+		zkPod0File:            {},+		zkLog0File:            {},+		zkInstanceFile:        {},+		zkPod1File:            {},+		zkLog1File:            {},+		zkStatefulSetsFile:    {},+		versionFile:           {},+		kmServicesFile:        {},+		kmPodFile:             {},+		kmLogFile:             {},+		kmServiceAccountsFile: {},+		kmStatefulSetsFile:    {},+		settingsFile:          {},+	}+}++// resource to be loaded into fake clients+var (+	// resource of the instance for which diagnostics is run+	pods            corev1.PodList+	serviceAccounts corev1.ServiceAccountList+	services        corev1.ServiceList+	statefulsets    appsv1.StatefulSetList+	pvs             corev1.PersistentVolumeList+	pvcs            corev1.PersistentVolumeClaimList+	operator        v1beta1.Operator+	operatorVersion v1beta1.OperatorVersion+	instance        v1beta1.Instance++	// kudo-manager resources+	kmNs              corev1.Namespace+	kmPod             corev1.Pod+	kmServices        corev1.ServiceList+	kmServiceAccounts 
corev1.ServiceAccountList+	kmStatefulsets    appsv1.StatefulSetList++	// resources unrelated to the diagnosed instance or kudo-manager, should not be collected+	cowPod                corev1.Pod+	defaultServiceAccount corev1.ServiceAccount+	clusterRole           rbacv1beta1.ClusterRole+)++var (+	kubeObjects objectList+	kudoObjects objectList+)++func check(err error) {+	if err != nil {+		log.Fatalln(err)+	}+}++func assertNilError(t *testing.T) func(error) {+	return func(e error) {+		assert.Nil(t, e)+	}+}++func mustReadObjectFromYaml(fs afero.Fs, fname string, object runtime.Object, check func(error)) {+	b, err := afero.ReadFile(fs, fname)+	check(err)

It seems that you essentially always pass the assertion that there is no error as the check parameter. Why not simply call assert.NoError(t, err) in the method body and get rid of the extra parameter?

vemelin-epm

comment created time in 10 days
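A dependency-free sketch of the refactor suggested in the comment above — `panic` stands in for testify's `assert.NoError`/`t.Fatal`, and both helper names are hypothetical:

```go
package main

import "fmt"

// Before: every caller threads an error-checking callback through.
func mustReadBefore(read func() (string, error), check func(error)) string {
	s, err := read()
	check(err)
	return s
}

// After: the helper fails fast itself, so the extra parameter disappears.
// In the real test body, assert.NoError(t, err) would replace the panic.
func mustReadAfter(read func() (string, error)) string {
	s, err := read()
	if err != nil {
		panic(err)
	}
	return s
}

func main() {
	read := func() (string, error) { return "data", nil }
	fmt.Println(mustReadAfter(read)) // → data
}
```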

Pull request review comment kudobuilder/kudo

simplified diagnostics

+package diagnostics++import (+	"github.com/kudobuilder/kudo/pkg/apis/kudo/v1beta1"+	"github.com/spf13/afero"+	"github.com/stretchr/testify/assert"+	v1 "k8s.io/api/core/v1"+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"+	"k8s.io/apimachinery/pkg/runtime"+	"testing"+)++func TestPrintLog (t *testing.T){}++func TestPrintError (t *testing.T){}++func TestPrintYaml (t *testing.T){}++func Test_nonFailingPrinter_printObject(t *testing.T) {++	tests := []struct {+		name      string+		o         runtime.Object+		parentDir string+		mode      printMode+		expFiles  []string+		failOn    string+	}{+		{+			name:      "kube object with dir",+			o:         &v1.Pod{+				ObjectMeta: metav1.ObjectMeta{Namespace: fakeNamespace, Name: "my-fancy-pod"},+			},+			parentDir: "root",+			mode:      ObjectWithDir,+			expFiles: []string{"root/pod_my-fancy-pod/my-fancy-pod.yaml"},+		},+		{+			name:      "kudo object with dir",+			o:         &v1beta1.Operator{+				TypeMeta:   metav1.TypeMeta{+					Kind: "Operator",+					APIVersion: "kudo.dev/v1beta1",+				},+				ObjectMeta: metav1.ObjectMeta{Namespace: fakeNamespace, Name: "my-fancy-operator"},+			},+			parentDir: "root",+			mode:      ObjectWithDir,+			expFiles: []string{"root/operator_my-fancy-operator/my-fancy-operator.yaml"},+		},+		{+			name:      "kube object as runtime object",+			o:         &v1.Pod{+				ObjectMeta: metav1.ObjectMeta{Namespace: fakeNamespace, Name: "my-fancy-pod"},+			},+			parentDir: "root",+			mode:      RuntimeObject,+			expFiles: []string{"root/pod.yaml"},+		},+		{+			name:      "list of objects as runtime object",+			o:         &v1.PodList{+				Items:    []v1.Pod{+					{ObjectMeta: metav1.ObjectMeta{Namespace: fakeNamespace, Name: "my-fancy-pod-01"}},+					{ObjectMeta: metav1.ObjectMeta{Namespace: fakeNamespace, Name: "my-fancy-pod-02"}},+				},+			},+			parentDir: "root",+			mode:      RuntimeObject,+			expFiles: []string{"root/podlist.yaml"},+		},+		{+			name:      "list of objects with dirs",+			o:         
&v1.PodList{+				Items:    []v1.Pod{+					{ObjectMeta: metav1.ObjectMeta{Namespace: fakeNamespace, Name: "my-fancy-pod-01"}},+					{ObjectMeta: metav1.ObjectMeta{Namespace: fakeNamespace, Name: "my-fancy-pod-02"}},+				},+			},+			parentDir: "root",+			mode:      ObjectListWithDirs,+			expFiles: []string{"root/pod_my-fancy-pod-01/my-fancy-pod-01.yaml", "root/pod_my-fancy-pod-02/my-fancy-pod-02.yaml"},+		},+		{+			name:      "list of objects with dirs, one fails",+			o:         &v1.PodList{+				Items:    []v1.Pod{+					{ObjectMeta: metav1.ObjectMeta{Namespace: fakeNamespace, Name: "my-fancy-pod-01"}},+					{ObjectMeta: metav1.ObjectMeta{Namespace: fakeNamespace, Name: "my-fancy-pod-02"}},+					{ObjectMeta: metav1.ObjectMeta{Namespace: fakeNamespace, Name: "my-fancy-pod-03"}},+				},+			},+			parentDir: "root",+			mode:      ObjectListWithDirs,+			expFiles:  []string{"root/pod_my-fancy-pod-01/my-fancy-pod-01.yaml", "root/pod_my-fancy-pod-03/my-fancy-pod-03.yaml"},+			failOn:    "root/pod_my-fancy-pod-02/my-fancy-pod-02.yaml",+		},+	}+	for _, tt := range tests {+		t.Run(tt.name, func(t *testing.T) {

I wonder why the linter doesn't complain here /cc @nfnt

                tt := tt
		t.Run(tt.name, func(t *testing.T) {
vemelin-epm

comment created time in 10 days
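The linter in question (likely scopelint) flags closures that capture a shared loop variable; without the `tt := tt` copy, and before Go 1.22 introduced per-iteration loop variables, parallel subtests could all observe the last test case. A simplified standalone analogue using goroutines in place of `t.Run`:

```go
package main

import (
	"fmt"
	"sync"
)

// collect runs one goroutine per case, copying the loop variables first —
// the `tt := tt` idiom from the review. Before Go 1.22, all iterations
// shared one loop variable, so closures launched without the copy could
// all see the final value.
func collect(cases []string) []string {
	out := make([]string, len(cases))
	var wg sync.WaitGroup
	for i, tt := range cases {
		i, tt := i, tt // per-iteration copies for the closure below
		wg.Add(1)
		go func() {
			defer wg.Done()
			out[i] = tt
		}()
	}
	wg.Wait()
	return out
}

func main() {
	fmt.Println(collect([]string{"a", "b", "c"})) // → [a b c]
}
```

In Go 1.22 and later the copy is redundant but harmless, which is why it remains a common defensive idiom in table-driven tests.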

Pull request review comment kudobuilder/kudo.dev

Document limitations of namespace-scoped KUDO resources

 Normally, _Operator_ and _OperatorVersion_ are handled by the manager and rarely ```bash $ kubectl kudo install kafka --instance dev-kafka ```++### Limitations++The _Operator_, _OperatorVersion_, and _Instance_ resources created by KUDO are namespace scoped. These resources can only own resources in the same namespace. As a result, it isn't possible for a single operator to create resources in multiple namespaces.
All the resources created by an operator _Instance_ are [owned](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/#owners-and-dependents) by it. This way, when the _Instance_ is deleted, its resources are removed too. However, the _Instance_ (as well as the _Operator_ and _OperatorVersion_) is namespace-scoped. Namespaced resources can only own resources in the same namespace. As a result, it isn't possible for a single operator to create resources in multiple namespaces.
nfnt

comment created time in 11 days

Pull request review commentkudobuilder/kudo.dev

Document limitations of namespace-scoped KUDO resources

Normally, _Operator_ and _OperatorVersion_ are handled by the manager and rarely

```bash
$ kubectl kudo install kafka --instance dev-kafka
```

### Limitations

The _Operator_, _OperatorVersion_, and _Instance_ resources created by KUDO are namespace scoped. These resources can only own resources in the same namespace. As a result, it isn't possible for an operator to create resources in multiple namespaces.

It is possible for an _Instance_ to create cluster-scoped resources. However, this can result in issues when updating or upgrading an _Instance_. [KEP-5](https://github.com/kudobuilder/kudo/blob/master/keps/0005-cluster-resources-for-crds.md) will resolve this limitation.
It is possible for an _Instance_ to create cluster-scoped resources. However, the `resourceOwners` field of such resources is not populated with the _Instance_ reference, so cluster-scoped resources are **not** automatically cleaned up when the _Instance_ is deleted. They can also cause issues when updating or upgrading an _Instance_. [KEP-5](https://github.com/kudobuilder/kudo/blob/master/keps/0005-cluster-resources-for-crds.md) will resolve this limitation.
nfnt

comment created time in 11 days

Pull request review commentkudobuilder/kudo

Namespace Package Verify

```go
package template

import (
	"testing"

	"github.com/stretchr/testify/assert"

	"github.com/kudobuilder/kudo/pkg/kudoctl/packages"
)

func TestNamespaceUsedInTemplate(t *testing.T) {

	templates := make(map[string]string)
	templates["foo.yaml"] = `
## this template contains a namespace
kind: Namespace

{{ Name }}
`
	pf := packages.Files{
		Templates: templates,
		Operator:  &packages.OperatorFile{},
	}
	verifier := NamespaceVerifier{}
	res := verifier.Verify(&pf)

	assert.Equal(t, 0, len(res.Warnings))
	assert.Equal(t, 1, len(res.Errors))
	assert.Equal(t, "template \"foo.yaml\" contains 'kind: Namespace' not allowed unless specified as 'NamespaceManifest'", res.Errors[0])
}

func TestNamespaceNotUsedInTemplate(t *testing.T) {

	templates := make(map[string]string)
	templates["foo.yaml"] = `
## this template contains a namespace
kind: Name

{{ Name }}
`
	pf := packages.Files{
		Templates: templates,
		Operator:  &packages.OperatorFile{},
	}
	verifier := NamespaceVerifier{}
	res := verifier.Verify(&pf)

	assert.Equal(t, 0, len(res.Warnings))
	assert.Equal(t, 0, len(res.Errors))
}

func TestNamespaceManifestFileMissing(t *testing.T) {

	templates := make(map[string]string)
	templates["foo.yaml"] = `
## this template contains a namespace
kind: Name

{{ Name }}
`
	pf := packages.Files{
		Templates: templates,
		Operator:  &packages.OperatorFile{NamespaceManifest: "bar.yaml"},
	}
	verifier := NamespaceVerifier{}
	res := verifier.Verify(&pf)

	assert.Equal(t, 0, len(res.Warnings))
	assert.Equal(t, 1, len(res.Errors))
	assert.Equal(t, "NamespaceManifest \"bar.yaml\" not found in /templates folder", res.Errors[0])
}

func TestNamespaceManifestFileEmpty(t *testing.T) {

	templates := make(map[string]string)
	templates["foo.yaml"] = `
`
	pf := packages.Files{
		Templates: templates,
		Operator:  &packages.OperatorFile{NamespaceManifest: "foo.yaml"},
	}
	verifier := NamespaceVerifier{}
	res := verifier.Verify(&pf)

	assert.Equal(t, 0, len(res.Warnings))
	assert.Equal(t, 1, len(res.Errors))
	assert.Equal(t, "NamespaceManifest \"foo.yaml\" found but does not contain a manifest", res.Errors[0])
}

func TestNamespaceManifestNotNamespace(t *testing.T) {

	templates := make(map[string]string)
	templates["foo.yaml"] = `apiVersion: v1
kind: Pod
metadata:
 labels:
    app: my-app`

	pf := packages.Files{
		Templates: templates,
		Operator:  &packages.OperatorFile{NamespaceManifest: "foo.yaml"},
	}
	verifier := NamespaceVerifier{}
	res := verifier.Verify(&pf)

	assert.Equal(t, 0, len(res.Warnings))
	assert.Equal(t, 1, len(res.Errors))
	assert.Equal(t, "NamespaceManifest \"foo.yaml\" found but manifest is not kind: Namespace", res.Errors[0])
}

func TestNamespaceManifestMultiNamespace(t *testing.T) {

	templates := make(map[string]string)
	templates["foo.yaml"] = `apiVersion: foo
kind: Foo
metadata:
  name: foo1
---
apiVersion: foo
kind: Foo
metadata:
name: foo2`

	pf := packages.Files{
		Templates: templates,
		Operator:  &packages.OperatorFile{NamespaceManifest: "foo.yaml"},
	}
	verifier := NamespaceVerifier{}
	res := verifier.Verify(&pf)

	assert.Equal(t, 0, len(res.Warnings))
	assert.Equal(t, 1, len(res.Errors))
	assert.Equal(t, "NamespaceManifest \"foo.yaml\" found but contains 2 manifests which is greater than 1", res.Errors[0])
}

func TestNamespaceManifestGood(t *testing.T) {

	templates := make(map[string]string)
	templates["foo.yaml"] = `apiVersion: v1
kind: Namespace
metadata:
 labels:
    app: my-app`

	pf := packages.Files{
```

This could be a table test and you would "save" ~50 slocs 😉
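To illustrate the suggestion, a minimal table-test sketch. `verify` below is a hypothetical, heavily simplified stand-in for `NamespaceVerifier.Verify` (it only checks one rule), just enough to show the table shape the review is asking for:

```go
package main

import (
	"fmt"
	"strings"
)

// verify is a stand-in for NamespaceVerifier.Verify from the diff: it flags
// templates declaring `kind: Namespace` without a NamespaceManifest set.
func verify(template, namespaceManifest string) []string {
	if strings.Contains(template, "kind: Namespace") && namespaceManifest == "" {
		return []string{"'kind: Namespace' not allowed unless specified as 'NamespaceManifest'"}
	}
	return nil
}

func main() {
	tests := []struct {
		name       string
		template   string
		manifest   string
		wantErrors int
	}{
		{name: "namespace used in template", template: "kind: Namespace", wantErrors: 1},
		{name: "namespace not used in template", template: "kind: Name", wantErrors: 0},
		{name: "namespace allowed via manifest", template: "kind: Namespace", manifest: "foo.yaml", wantErrors: 0},
	}
	for _, tt := range tests {
		got := len(verify(tt.template, tt.manifest))
		fmt.Printf("%s: errors=%d want=%d\n", tt.name, got, tt.wantErrors)
	}
}
```

In a real `_test.go` file each table entry would build `packages.Files` and assert on `res.Errors` inside `t.Run(tt.name, ...)`.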

kensipe

comment created time in 11 days

Pull request review commentkudobuilder/kudo

WIP: simplified diagnostics

```go
package diagnostics

import (
	"fmt"
	"strings"
	"time"

	"github.com/kudobuilder/kudo/pkg/kudoctl/env"
	"github.com/kudobuilder/kudo/pkg/version"

	"github.com/spf13/afero"
)

type Options struct {
	Instance string
	LogSince int64
}

func NewOptions(instance string, logSince time.Duration) *Options {
	opts := Options{Instance: instance}
	if logSince > 0 {
		sec := int64(logSince.Round(time.Second).Seconds())
		opts.LogSince = sec
	}
	return &opts
}

func Collect(fs afero.Fs, options *Options, s *env.Settings) error {
	ir, err := NewInstanceResources(options, s)
	if err != nil {
		return err
	}
	instanceDiagRunner := &Runner{}
```

I'd like for us to also think about dependencies. We don't need to implement the support for it now, but I'd very much like to keep the general architecture open enough to support it later. Just a brief reminder: with KEP-29 implemented, we could have instances owning (via ownerReferences field) other dependent instances e.g.:

```
AA
├── BB
│   ├── EE
│   └── GG
└── CC
```

I think it would help if instead of a Runner we would have a higher-level InstanceCollector which could (in the future) check the ownerReferences field and recursively instantiate itself to collect the dependency instances. So instance-AA collector instantiating instance-BB, which in turn instantiates instance-EE and so on. Each will take a parentDir via constructor so that the resulting file structure can look like:

```
diag
├── kudo
│   └── ...
├── operator_aa
│   ├── instance_aa
│   │   ├── operator_bb  // <-- nested operator BB files
│   │   │   ├── instance_bb
│   │   │   ├── pod_bb-0
│   │   │   │   ├── bb-0.log.gz
│   │   │   │   └── bb-0.yaml
│   │   │   ├── ....
│   │   │   └── bb.yaml
│   │   ├── pod_aa-0
│   │   │   ├── aa-0.log.gz
│   │   │   └── aa-0.yaml
│   │   └── aa.yaml
│   ├── operatorversion_aa-0.3.0
│   │   └── aa-0.3.0.yaml
│   └── aa.yaml
└── settings.yaml
```

Again, we don't have to tackle it now but it would be nice if the structure would allow dependencies with minimal overhead.

vemelin-epm

comment created time in 16 days

Pull request review commentkudobuilder/kudo

WIP: simplified diagnostics

```go
package diagnostics

import (
	"fmt"
	"strings"
	"time"

	"github.com/kudobuilder/kudo/pkg/kudoctl/env"
	"github.com/kudobuilder/kudo/pkg/version"

	"github.com/spf13/afero"
)

type Options struct {
	Instance string
	LogSince int64
}

func NewOptions(instance string, logSince time.Duration) *Options {
	opts := Options{Instance: instance}
	if logSince > 0 {
		sec := int64(logSince.Round(time.Second).Seconds())
		opts.LogSince = sec
	}
	return &opts
}

func Collect(fs afero.Fs, options *Options, s *env.Settings) error {
	ir, err := NewInstanceResources(options, s)
	if err != nil {
		return err
	}
	instanceDiagRunner := &Runner{}
```

There a few things to unpack here but the gist of it is: while we still need a somewhat generic ResourceCollector which is a "working horse" of the collection, but we also need higher-level collectors that can utilize the lower ones. Let's take an imaginary PodCollector as an example which ideally collects two files: pod definition and the logs like this:

```
├── pod_zk-zookeeper-0
│   ├── zk-zookeeper-0.log.gz
│   └── zk-zookeeper-0.yaml
```

All it needs to collect both are pod name and namespace.

  1. For the definition, it might construct a generic ResourceCollector(ir.Pods, ...) from a resource collection func ir.Pods
  2. For the logs, it can use a LogCollector(...)

The same goes for an imaginary InstanceResourceCollector. Currently, you have to invent the ResourceCollectorGroup for this case, but I'd rather have a specialized collector as it is (subjectively) simpler to grok and you don't need the ResourceCollectorGroup abstraction.

An imaginary KUDOManagerCollector can utilize all of the above to collect everything that is needed about the KUDO manager.

The uniformity should be in the fact that all collectors implement an interface like:

```go
type Collector interface {
	Collect(fs afero.Fs, ctx *Context, kc kudo.Client, s *env.Settings) error
}
```

and know how to collect the resources they need.

P.S. I hope, that on the way, some of the interfaces become simpler. Like the fact that you needed to copy type Object interface from the k8s internals to generalize over all returned objects. Here too, I'd rather accept some code redundancy and opt-in into the typed resource collectors. Premature over-generalization is the root of all evil.
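A rough sketch of that layering, with heavily simplified signatures (no `afero.Fs`, kudo client, or settings; all names here are illustrative, not the PR's types):

```go
package main

import "fmt"

// Collector is the uniform interface every collector implements.
type Collector interface {
	Collect() error
}

// ResourceCollector stands in for the generic resource-dumping collector.
type ResourceCollector struct{ name string }

func (c ResourceCollector) Collect() error {
	fmt.Printf("writing pod_%s/%s.yaml\n", c.name, c.name)
	return nil
}

// LogCollector stands in for the pod-log collector.
type LogCollector struct{ name string }

func (c LogCollector) Collect() error {
	fmt.Printf("writing pod_%s/%s.log.gz\n", c.name, c.name)
	return nil
}

// PodCollector is a higher-level collector composed of the two above,
// producing the pod_zk-zookeeper-0 layout from the comment.
type PodCollector struct{ name string }

func (c PodCollector) Collect() error {
	for _, sub := range []Collector{ResourceCollector{c.name}, LogCollector{c.name}} {
		if err := sub.Collect(); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	if err := (PodCollector{name: "zk-zookeeper-0"}).Collect(); err != nil {
		fmt.Println("collect failed:", err)
	}
}
```

An imaginary KUDOManagerCollector would compose `PodCollector` and friends the same way, with no need for a separate group abstraction.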

vemelin-epm

comment created time in 16 days

Pull request review commentkudobuilder/kudo

WIP: simplified diagnostics

```go
package diagnostics

import (
	"fmt"

	"github.com/kudobuilder/kudo/pkg/apis/kudo/v1beta1"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/meta"
	"k8s.io/apimachinery/pkg/runtime"
)

// processingContext - shared data for the resource collectors
// provides property accessors allowing to define a collector before the data it needs is available
// provides update callback functions. callbacks panic if called on a wrong type of runtime.Object
type processingContext struct {
```

I thought about this too. But while simplifying some things, it makes others more complex: currently, collecting Instance, Operator, and OperatorVersion resources is treated like any other resource collection and not as a prerequisite (which would require a pre-processing step). So it's a homogeneous collector list with a dynamic context vs. extra collection steps with a static context 🤷

vemelin-epm

comment created time in 16 days

Pull request review commentkudobuilder/kudo

WIP: simplified diagnostics

```go
package diagnostics

import (
	"fmt"
	"io"
	"strings"

	"github.com/kudobuilder/kudo/pkg/kudoctl/util/kudo"

	"github.com/spf13/afero"
	"gopkg.in/yaml.v2"

	"k8s.io/apimachinery/pkg/api/meta"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/cli-runtime/pkg/printers"
	"k8s.io/client-go/kubernetes/scheme"
)

const (
	DiagDir = "diag"
	KudoDir = "diag/kudo"
)

type printMode int

const (
	ObjectWithDir printMode = iota // print object into its own nested directory based on its name and kind
```

See for example here

vemelin-epm

comment created time in 16 days

Pull request review commentkudobuilder/kudo

WIP: simplified diagnostics

```go
package cmd

import (
	"time"

	"github.com/spf13/afero"
	"github.com/spf13/cobra"

	"github.com/kudobuilder/kudo/pkg/kudoctl/cmd/diagnostics"
)

const (
	diagCollectExample = `  # collect diagnostics example
  kubectl kudo diagnostics collect --instance=%instance% --namespace=%namespace%
`
)

func newDiagnosticsCmd(fs afero.Fs) *cobra.Command {
	cmd := &cobra.Command{
		Use:   "diagnostics",
		Short: "diagnostics",
		Long:  "diagnostics command has sub-commands to collect and analyze diagnostics data",
```

Ah, I see what you mean. Not every pattern found in the code base is good 😉 Also, it currently has only one sub-command.

vemelin-epm

comment created time in 16 days

Pull request review commentkudobuilder/kudo

WIP: simplified diagnostics

```go
package diagnostics

// Collector - generic interface for diagnostic data collection
// implementors are expected to return only fatal errors and handle non-fatal ones themselves
type Collector interface {
	Collect() error
}

// Runner - sequential runner for Collectors reducing error checking boilerplate code
type Runner struct {
	fatalErr error
}

func (r *Runner) Run(c Collector) *Runner {
	if r.fatalErr == nil {
		r.fatalErr = c.Collect()
```

You Collect one more time after a fatal error happened, right?

vemelin-epm

comment created time in 16 days

Pull request review commentkudobuilder/kudo

Create NS Annotation label Made a Constant

```go
const (
	// Last applied state for three way merges
	LastAppliedConfigAnnotation = "kudo.dev/last-applied-configuration"

	CreatedByAnnotation = "kudo.dev/created-by"
```

The created-by semantics is somewhat strange: CLI also creates Instance, Operator, and OperatorVersion resources - should we now mark them too? Instance controller creates all other resources - should we then add kudo.dev/created-by: instance-controller? Our labels (seen above) have been about what-this-is and not who-created-it. So maybe:

```go
InstanceNamespaceAnnotation = "kudo.dev/instance-namespace"
```
kensipe

comment created time in 16 days

Pull request review commentkudobuilder/kuttl

Log the processed command.

```diff
 func RunCommands(logger Logger, namespace string, commands []harness.Command, wo
 	}
 
 	for _, cmd := range commands {
-		logger.Logf("running command: %q", cmd.Command)
-
-		bg, err := RunCommand(context.TODO(), namespace, cmd, workdir, logger, logger)
+		bg, err := RunCommand(context.TODO(), namespace, cmd, workdir, logger, logger, logger)
```

😆

porridge

comment created time in 16 days

delete branch kudobuilder/kudo

delete branch : ad/required-webhook

delete time in 17 days

push eventkudobuilder/kudo

Aleksey Dukhovniy

commit sha a7b98bf97c0a3ca9a4729b27d412b90c8a90a1ef

Removed `--webhook` option (#1497) Summary: Now that KUDO moves towards a better support of multiple plans (e.g. `kudoctl plan trigger` command), the existing instance admission webhook becomes necessary to guarantee the plan execution consistency. More on KUDO [admission controller](https://kudo.dev/docs/developing-operators/plans.html#admission-controllers) in the documentation. This PR removes the `--webhook` option and thus makes the instance admission webhook required. This is a breaking change since the users will have to either have [cert-manager](https://cert-manager.io/) installed or use the `--unsafe-self-signed-webhook-ca` option when initializing KUDO. For existing installations, one would need to run [kudo init](https://kudo.dev/docs/cli.html#examples) to create missing secret/webhook configuration. Signed-off-by: Aleksey Dukhovniy <alex.dukhovniy@googlemail.com>


push time in 17 days

PR merged kudobuilder/kudo

Reviewers
Removed `--webhook` option release/breaking-change

Summary: Now that KUDO moves towards a better support of multiple plans (e.g. kudoctl plan trigger command), the existing instance admission webhook becomes necessary to guarantee the plan execution consistency. More on KUDO admission controller in the documentation.

This PR removes the --webhook option and thus makes the instance admission webhook required. This is a breaking change since the users will have to either install cert-manager or use the --unsafe-self-signed-webhook-ca option when initializing KUDO. For existing installations, one would need to run kudo init to create the missing secret/webhook configuration.

Signed-off-by: Aleksey Dukhovniy alex.dukhovniy@googlemail.com

+458 -497

2 comments

49 changed files

zen-dog

pr closed time in 17 days

push eventkudobuilder/kudo

Aleksey Dukhovniy

commit sha 14dca5fffa09dae704bff9efbcc42ac32a6fe712

going back to operators master branch Signed-off-by: Aleksey Dukhovniy <alex.dukhovniy@googlemail.com>


push time in 17 days

push eventkudobuilder/operators

Aleksey Dukhovniy

commit sha 144e994b76ade168fd54258e608b1df4ffaf5b8b

Run tests with manager as a pod inside the cluster (#261) Summary: Now that KUDO is moving towards a required webhook, the manager needs to run inside the cluster as a pod. As a side effect, this aligns the tests closer with the KUDO's own e2e tests and better encapsulates the test environment.


push time in 17 days

PR merged kudobuilder/operators

Reviewers
Run tests with manager as a pod inside the cluster

Summary: Now that KUDO is moving towards a required webhook, the manager needs to run inside the cluster as a pod. As a side effect, this aligns the tests closer with the KUDO's own e2e tests and better encapsulates the test environment.

Signed-off-by: Aleksey Dukhovniy alex.dukhovniy@googlemail.com

+34 -20

0 comments

6 changed files

zen-dog

pr closed time in 17 days

Pull request review commentkudobuilder/kuttl

KEP04: TestStep.Apply Feature

```go
package file

import (
	"fmt"
	"io/ioutil"
	"os"
	"path/filepath"

	"k8s.io/apimachinery/pkg/runtime"

	testutils "github.com/kudobuilder/kuttl/pkg/test/utils"
)

// from a list of paths, returns an array of runtime objects
func ToRuntimeObjects(paths []string) ([]runtime.Object, error) {
	apply := []runtime.Object{}

	for _, path := range paths {
		objs, err := testutils.LoadYAML(path)
		if err != nil {
			return nil, fmt.Errorf("file %q load yaml error", path)
		}
		apply = append(apply, objs...)
	}

	return apply, nil
}

// From a file or dir path returns an array of flat file paths
func FromPath(path string) ([]string, error) {
	files := []string{}

	fi, err := os.Stat(path)
	if err != nil {
		return nil, fmt.Errorf("file mode issue with %w", err)
	}
	if fi.IsDir() {
		fileInfos, err := ioutil.ReadDir(path)
		if err != nil {
			return nil, err
		}
		for _, fileInfo := range fileInfos {
			if !fileInfo.IsDir() {
				files = append(files, filepath.Join(path, fileInfo.Name()))
```

Do we want to filter for *.yaml files only? Similar to how harness ignores certain files for test steps?
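One possible shape for such a filter, mirroring how the harness ignores non-step files. `hasYAMLExt` is a hypothetical helper the PR's `FromPath` could call inside its `ReadDir` loop, not an existing function:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// hasYAMLExt reports whether a file name carries a YAML extension,
// case-insensitively, so directory scans can skip everything else.
func hasYAMLExt(name string) bool {
	ext := strings.ToLower(filepath.Ext(name))
	return ext == ".yaml" || ext == ".yml"
}

func main() {
	for _, f := range []string{"test1.yaml", "notes.txt", "test2.YML"} {
		fmt.Printf("%s: %v\n", f, hasYAMLExt(f))
	}
}
```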

kensipe

comment created time in 17 days

Pull request review commentkudobuilder/kuttl

KEP04: TestStep.Apply Feature

```go
package file

import (
	"testing"

	"github.com/stretchr/testify/assert"
)

func TestFromPath(t *testing.T) {

	paths, err := FromPath("testdata/path")
	assert.NoError(t, err)
	assert.Equal(t, 2, len(paths))
	assert.Equal(t, "testdata/path/test1.yaml", paths[0])

	_, err = FromPath("testdata/badpath")
	assert.Error(t, err, "file mode issue with stat testdata/badpath: no such file or directory")
```

nit: it's a small test but I'd still prefer a table-based approach. And should you filter out all non-yaml files, then please also include a case for an ignored file.
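A possible table-driven shape for this test. `fromPath` below is a simplified stand-in for the PR's `FromPath` (same error behavior for a missing path), so the sketch stays self-contained:

```go
package main

import (
	"fmt"
	"os"
)

// fromPath mimics FromPath's contract: error on a missing path,
// otherwise return the discovered file paths.
func fromPath(path string) ([]string, error) {
	fi, err := os.Stat(path)
	if err != nil {
		return nil, fmt.Errorf("file mode issue with %w", err)
	}
	if !fi.IsDir() {
		return []string{path}, nil
	}
	return nil, nil // a real implementation would list (and filter) the dir
}

func main() {
	tests := []struct {
		name    string
		path    string
		wantErr bool
	}{
		{name: "existing path", path: ".", wantErr: false},
		{name: "bad path", path: "testdata/badpath", wantErr: true},
	}
	for _, tt := range tests {
		tt := tt
		_, err := fromPath(tt.path)
		fmt.Printf("%s: gotErr=%v wantErr=%v\n", tt.name, err != nil, tt.wantErr)
	}
}
```

A real `_test.go` version would run each case in `t.Run(tt.name, ...)` and add an entry for an ignored (non-YAML) file if filtering lands.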

kensipe

comment created time in 17 days

push eventkudobuilder/kudo

Aleksey Dukhovniy

commit sha 03e45b7ee58c473e7aaf9db6628642919317a729

and with proper sed-ing Signed-off-by: Aleksey Dukhovniy <alex.dukhovniy@googlemail.com>


push time in 17 days

Pull request review commentkudobuilder/kudo

KEP-31: Impl of `--create-namespace`

```diff
 func (c *Client) ValidateServerForOperator(operator *v1beta1.Operator) error {
 	return nil
 }
 
+func (c *Client) CreateNamespace(namespace, manifest string) error {
+
+	ns := &v1core.Namespace{}
+	if manifest != "" {
+		if err := yaml.Unmarshal([]byte(manifest), ns); err != nil {
+			return fmt.Errorf("unmarshalling namespace manifest file: %w", err)
+		}
+	}
+	ns.TypeMeta.Kind = "Namespace"
+	ns.Name = namespace
+	ns.Annotations["created-by"] = "kudo-cli"
```

What is this annotation needed for? And if it's actually needed, let's put it into labels.go with other annotations and prefix it with kudo.dev/???
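If the annotation is kept, a sketch of the suggested `kudo.dev/`-prefixed constant plus a nil-map guard; note that the diff's `ns.Annotations["created-by"] = "kudo-cli"` panics with "assignment to entry in nil map" whenever the unmarshalled namespace carries no annotations. The constant name and the local `Namespace` type here are illustrative stand-ins, not existing KUDO identifiers:

```go
package main

import "fmt"

// createdByAnnotation would live in labels.go next to the other
// kudo.dev-prefixed keys, per the review comment.
const createdByAnnotation = "kudo.dev/created-by"

// Namespace is a tiny stand-in for v1core.Namespace.
type Namespace struct {
	Name        string
	Annotations map[string]string
}

// annotate initializes the map before writing, avoiding the nil-map panic.
func annotate(ns *Namespace) {
	if ns.Annotations == nil {
		ns.Annotations = map[string]string{}
	}
	ns.Annotations[createdByAnnotation] = "kudo-cli"
}

func main() {
	ns := &Namespace{Name: "dev"}
	annotate(ns)
	fmt.Println(ns.Annotations[createdByAnnotation]) // kudo-cli
}
```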

kensipe

comment created time in 17 days

push eventkudobuilder/kudo

Aleksey Dukhovniy

commit sha 6b1ffd0deea6df2a22618ebd8f0e427ee782c8e1

Running operators tests with the new `kudo-test.yaml.tmpl` Signed-off-by: Aleksey Dukhovniy <alex.dukhovniy@googlemail.com>

view details

push time in 17 days

push eventkudobuilder/kudo

Aleksey Dukhovniy

commit sha 41237767e0a9d2751c41044adc9f3012eb874c35

Removed `crdDir` and `manifestDirs` from the e2e test configuration as they are generated by `kudo init` or not needed anymore Signed-off-by: Aleksey Dukhovniy <alex.dukhovniy@googlemail.com>


push time in 17 days

push eventkudobuilder/operators

Aleksey Dukhovniy

commit sha b285d0708b1e1d57e6f12b36165688b9d31a8efe

checking kudo statefulset `readyReplicas` instead of pod status phase Signed-off-by: Aleksey Dukhovniy <alex.dukhovniy@googlemail.com>


push time in 17 days
