Jason Hall (imjasonh) · Red Hat · Brooklyn, NY · https://imjasonh.com
Senior Principal Software Engineer at @redhat, working on @tektoncd, @shipwright-io, @kcp-dev. Formerly @GoogleCloudBuild.

Repositories:

- GoogleContainerTools/kaniko (9363 stars): Build Container Images In Kubernetes
- google/ko (3606 stars): Build and deploy Go applications on Kubernetes
- google/go-containerregistry (1375 stars): Go library and CLIs for working with container registries
- imjasonh/cards (1 star): Simple Go library for abstracting playing cards
- imjasonh/.github-1 (0 stars): Twitter GitHub Organization-wide files
- imjasonh/appengine-goplus (0 stars): Getting Started with Go, App Engine and Google+ API
- imjasonh/appengine-value (0 stars): Utility library to handle simple secret values in an App Engine app
- imjasonh/arduclock (0 stars): Code for a clock that displays Unix time, synced to GPS (WIP)

pull request comment kcp-dev/kcp

ci: add a check for properly wrapped errors

I think this is something we can get golangci-lint to check for us instead of involving another tool

https://golangci-lint.run/usage/linters/#errorlint
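For context: errorlint flags error comparisons and type assertions that break on wrapped errors, and fmt.Errorf calls that use %v instead of %w. A minimal sketch of the kind of code it rejects and the wrapped-error-aware equivalent (the readConfig helper here is hypothetical):

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// readConfig is a hypothetical helper used only to produce a wrappable error.
func readConfig(path string) error {
	if _, err := os.ReadFile(path); err != nil {
		// errorlint wants %w here, not %v, so callers can unwrap the cause.
		return fmt.Errorf("reading config %q: %w", path, err)
	}
	return nil
}

func main() {
	err := readConfig("/does/not/exist")

	// Bad: a direct comparison misses wrapped errors; errorlint reports this.
	if err == fs.ErrNotExist {
		fmt.Println("never reached: the error is wrapped")
	}

	// Good: errors.Is walks the wrap chain.
	if errors.Is(err, fs.ErrNotExist) {
		fmt.Println("config file is missing")
	}
}
```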

stevekuznetsov

comment created 41 minutes ago

issue closed kcp-dev/kcp

Community Meeting November 23, 2021

At 11am Eastern, 8am Pacific, 4pm UTC; find your time

Video call link: https://meet.google.com/ohf-kwvd-mrp. Or dial: (US) +1 617-675-4444, PIN: 936 182 087 7398#. More phone numbers: https://tel.meet/ohf-kwvd-mrp?pin=9361820877398. Or join via SIP: sip:9361820877398@gmeet.redhat.com

This issue will collect prospective agenda topics. Add topics you'd like to discuss below!

closed 2 hours ago

imjasonh

issue comment kcp-dev/kcp

Community Meeting November 23, 2021

Recording: https://youtu.be/OFPNKZ0XWe8

imjasonh

comment created 2 hours ago

issue comment kcp-dev/kcp

Community Meeting November 23, 2021

Next: https://github.com/kcp-dev/kcp/issues/254

imjasonh

comment created 2 hours ago

issue opened kcp-dev/kcp

Community Meeting November 30, 2021

At 11am Eastern, 8am Pacific, 4pm UTC; find your time

Video call link: https://meet.google.com/ohf-kwvd-mrp. Or dial: (US) +1 617-675-4444, PIN: 936 182 087 7398#. More phone numbers: https://tel.meet/ohf-kwvd-mrp?pin=9361820877398. Or join via SIP: sip:9361820877398@gmeet.redhat.com

This issue will collect prospective agenda topics. Add topics you'd like to discuss below!

created 2 hours ago

push event imjasonh/kcp

- Jason Hall, commit sha 59321b262bbc284e31b7f892a8254b26f9167579: appease lint gods

pushed 3 hours ago

issue comment sigstore/sget

Make passing CI check a requirement for merging PRs

See #7 at https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/defining-the-mergeability-of-pull-requests/managing-a-branch-protection-rule

I'm also happy to click through and set this up if you make me a repo admin, or I can walk through it with someone over a video chat if there's any difficulty.
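For anyone scripting this rather than clicking through the UI, the same setting can be applied with GitHub's branch-protection REST API (PUT /repos/{owner}/{repo}/branches/{branch}/protection). A rough sketch in Go: the branch and the check name "test" are placeholders, and the request-body fields are from memory, so double-check them against the docs linked above:

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"os"
)

func main() {
	// Placeholder branch and check name; real values depend on the repo's CI setup.
	url := "https://api.github.com/repos/sigstore/sget/branches/main/protection"

	// Require the "test" status check to pass before merging; the other
	// protection settings are left disabled (null) in this sketch.
	body := []byte(`{
	  "required_status_checks": {"strict": true, "contexts": ["test"]},
	  "enforce_admins": null,
	  "required_pull_request_reviews": null,
	  "restrictions": null
	}`)

	req, err := http.NewRequest(http.MethodPut, url, bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	// Assumes a GITHUB_TOKEN with admin rights on the repo.
	req.Header.Set("Authorization", "token "+os.Getenv("GITHUB_TOKEN"))
	req.Header.Set("Accept", "application/vnd.github.v3+json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```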

lkatalin

comment created 3 hours ago

push event imjasonh/kcp

- Jason Hall, commit sha 3ac93f8fe8c57c17525ee0379f05bee51432ec18: slather reconcileNamespace in some unit test coverage

pushed 3 hours ago

push event google/go-containerregistry

- Jason Hall, commit sha 92eabac95297cfd5b56bb5e3fd79274e3f93f666: Add issue templates (#1192)

pushed 3 hours ago

push event imjasonh/kcp

- Jason Hall, commit sha d35300bb454c931286155d260247abd705976d71: appease lint gods

pushed 3 hours ago

push event imjasonh/kcp

- Jason Hall, commit sha 6539b16ab3d545ac2bb5a4db0061f81ed5b684e5: Mostly fix the unit test, fake dynamic client still fails but at least the patches are tested

pushed 4 hours ago

pull request comment sigstore/sget

Fix CI: workflow can't specify both uses: and run:

Thanks for merging! Now that CI works, a repo admin can make a passing CI check a requirement for merging future PRs, which should prevent regressions: https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/collaborating-on-repositories-with-code-quality-features/about-status-checks

imjasonh

comment created 4 hours ago

push event imjasonh/kcp

- Jason Hall, commit sha 5d49168b98b0ea3091f9b649ff97177274ebc476: Use separate queues for resources, namespaces and clusters

pushed 4 hours ago

push event imjasonh/kcp

- Jason Hall, commit sha 3370ca759753a052d2a87b639e1529361c262d8f: Drop unsplittable keys, make defensive deepcopies

pushed 4 hours ago

Pull request review comment kcp-dev/kcp

WIP: Schedule namespaces and all resources in them to clusters

```go
package namespace

import (
	"context"
	"strings"
	"time"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/util/runtime"
	"k8s.io/apimachinery/pkg/util/sets"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/discovery"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	coreclient "k8s.io/client-go/kubernetes/typed/core/v1"
	corelister "k8s.io/client-go/listers/core/v1"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clusters"
	"k8s.io/client-go/util/workqueue"
	"k8s.io/klog/v2"

	clusterclient "github.com/kcp-dev/kcp/pkg/client/clientset/versioned"
	"github.com/kcp-dev/kcp/pkg/client/informers/externalversions"
	clusterlisters "github.com/kcp-dev/kcp/pkg/client/listers/cluster/v1alpha1"
	"github.com/kcp-dev/kcp/pkg/gvk"
	"github.com/kcp-dev/kcp/pkg/informer"
	kerrors "github.com/kcp-dev/kcp/pkg/util/errors"
)

const resyncPeriod = 10 * time.Hour

// NewController returns a new Controller which schedules namespaced resources to a Cluster.
func NewController(ctx context.Context, cfg *rest.Config) *Controller {

	// TODO: Better dependency injection, don't take a rest.Config.

	disco := discovery.NewDiscoveryClientForConfigOrDie(cfg)
	dynClient := dynamic.NewForConfigOrDie(cfg)
	kubeClient := kubernetes.NewForConfigOrDie(cfg)
	clusterClient := clusterclient.NewForConfigOrDie(cfg)
	queue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())

	c := &Controller{
		queue:           queue,
		dynClient:       dynClient,
		namespaceClient: kubeClient.CoreV1().Namespaces(),
		gvkTrans:        gvk.NewGVKTranslator(cfg),
	}

	c.ddsif = informer.NewDynamicDiscoverySharedInformerFactory(disco, dynClient,
		filterResource,
		informer.GVREventHandlerFuncs{
			AddFunc:    func(gvr schema.GroupVersionResource, obj interface{}) { c.enqueueResource(gvr, obj) },
			UpdateFunc: func(gvr schema.GroupVersionResource, _, obj interface{}) { c.enqueueResource(gvr, obj) },
			DeleteFunc: nil, // Nothing to do.
		})
	c.ddsif.Start(ctx)

	csif := externalversions.NewSharedInformerFactory(clusterClient, resyncPeriod)
	csif.Cluster().V1alpha1().Clusters().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc:    func(obj interface{}) { c.enqueueCluster(obj) },
		UpdateFunc: func(_, obj interface{}) { c.enqueueCluster(obj) },
		DeleteFunc: func(obj interface{}) { c.enqueueCluster(obj) },
	})
	c.clusterLister = csif.Cluster().V1alpha1().Clusters().Lister()
	csif.WaitForCacheSync(ctx.Done())
	csif.Start(ctx.Done())

	nsif := informers.NewSharedInformerFactory(kubeClient, resyncPeriod)
	nsif.Core().V1().Namespaces().Informer().AddEventHandler(cache.FilteringResourceEventHandler{
		FilterFunc: filterNamespace,
		Handler: cache.ResourceEventHandlerFuncs{
			AddFunc:    func(obj interface{}) { c.enqueueNamespace(obj) },
			UpdateFunc: func(_, obj interface{}) { c.enqueueNamespace(obj) },
			DeleteFunc: nil, // Nothing to do.
		},
	})
	c.namespaceLister = nsif.Core().V1().Namespaces().Lister()
	nsif.WaitForCacheSync(ctx.Done())
	nsif.Start(ctx.Done())

	return c
}

type Controller struct {
	queue workqueue.RateLimitingInterface

	dynClient dynamic.Interface

	clusterLister   clusterlisters.ClusterLister
	namespaceLister corelister.NamespaceLister
	namespaceClient coreclient.NamespaceInterface
	ddsif           informer.DynamicDiscoverySharedInformerFactory
	gvkTrans        *gvk.GVKTranslator
}

func filterResource(obj interface{}) bool {
	key, err := cache.MetaNamespaceKeyFunc(obj)
	if err != nil {
		runtime.HandleError(err)
		return false
	}
	_, clusterAwareName, err := cache.SplitMetaNamespaceKey(key)
	if err != nil {
		runtime.HandleError(err)
		return false
	}
	clusterName, _ := clusters.SplitClusterAwareKey(clusterAwareName)
	if clusterName != "admin" {
		//	klog.V(2).Infof("Skipping update for non-admin cluster %q", clusterName)
		//	return false
	}

	current, ok := obj.(*unstructured.Unstructured)
	if !ok {
		klog.V(2).Infof("Object was not Unstructured: %T", obj)
		return false
	}

	if namespaceBlocklist.Has(current.GetNamespace()) {
		klog.V(2).Infof("Skipping syncing namespace %q", current.GetNamespace())
		return false
	}
	return true
}

func filterNamespace(obj interface{}) bool {
	key, err := cache.MetaNamespaceKeyFunc(obj)
	if err != nil {
		runtime.HandleError(err)
		return false
	}
	_, clusterAwareName, err := cache.SplitMetaNamespaceKey(key)
	if err != nil {
		runtime.HandleError(err)
		return false
	}
	clusterName, name := clusters.SplitClusterAwareKey(clusterAwareName)
	if clusterName != "admin" {
		//	klog.Infof("Skipping update for non-admin cluster %q", clusterName)
		//	return false
	}
	if namespaceBlocklist.Has(name) {
		klog.V(2).Infof("Skipping syncing namespace %q", name)
		return false
	}
	return true
}

func (c *Controller) enqueueResource(gvr schema.GroupVersionResource, obj interface{}) {
	key, err := cache.MetaNamespaceKeyFunc(obj)
	if err != nil {
		runtime.HandleError(err)
		return
	}
	gvrstr := strings.Join([]string{gvr.Resource, gvr.Version, gvr.Group}, ".")
	c.queue.AddRateLimited(gvrstr + "::" + key)
}

func (c *Controller) enqueueNamespace(obj interface{}) {
	key, err := cache.MetaNamespaceKeyFunc(obj)
	if err != nil {
		runtime.HandleError(err)
		return
	}
	c.queue.AddRateLimited("namespace::" + key)
}

func (c *Controller) enqueueCluster(obj interface{}) {
	key, err := cache.MetaNamespaceKeyFunc(obj)
	if err != nil {
		runtime.HandleError(err)
		return
	}
	c.queue.AddRateLimited("cluster::" + key)
}

func (c *Controller) Start(ctx context.Context, numThreads int) {
	defer c.queue.ShutDown()
	for i := 0; i < numThreads; i++ {
		go wait.Until(func() { c.startWorker(ctx) }, time.Second, ctx.Done())
	}
	klog.Infof("Starting workers")
	<-ctx.Done()
	klog.Infof("Stopping workers")
}

func (c *Controller) startWorker(ctx context.Context) {
	for c.processNextWorkItem(ctx) {
	}
}

func (c *Controller) processNextWorkItem(ctx context.Context) bool {
	// Wait until there is a new item in the working queue
	k, quit := c.queue.Get()
	if quit {
		return false
	}
	key := k.(string)

	// No matter what, tell the queue we're done with this key, to unblock
	// other workers.
	defer c.queue.Done(key)

	var err error
	if strings.HasPrefix(key, "namespace::") {
		err = c.processNamespace(ctx, strings.TrimPrefix(key, "namespace::"))
```
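For readers skimming the diff: the single queue above multiplexes three kinds of items by string prefix. A small standalone illustration of the encode/decode round trip used by enqueueResource and processResource (the GVR and key values here are made up):

```go
package main

import (
	"fmt"
	"strings"

	"k8s.io/apimachinery/pkg/runtime/schema"
)

func main() {
	gvr := schema.GroupVersionResource{Group: "apps", Version: "v1", Resource: "deployments"}

	// Encode, as enqueueResource does: "resource.version.group::namespace/name".
	gvrstr := strings.Join([]string{gvr.Resource, gvr.Version, gvr.Group}, ".")
	key := gvrstr + "::" + "default/my-deployment"
	fmt.Println(key) // deployments.v1.apps::default/my-deployment

	// Decode, as processResource does: split off the GVR, then parse it.
	parts := strings.SplitN(key, "::", 2)
	parsed, _ := schema.ParseResourceArg(parts[0])
	fmt.Printf("%+v %s\n", *parsed, parts[1])
}
```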

It's good advice. I don't think we'd be able to have a queue per-dynamic-resource-type, but we could definitely split out namespace and cluster items from arbitrary-resource updates. I'll do that now.
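A rough sketch of what that split could look like; the names and shape here are illustrative, not the actual follow-up commit:

```go
package namespace

import (
	"context"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/util/workqueue"
)

// splitController is an illustrative shape for the follow-up: one queue per
// item kind instead of string-prefixed keys multiplexed onto a single queue.
type splitController struct {
	resourceQueue  workqueue.RateLimitingInterface
	namespaceQueue workqueue.RateLimitingInterface
	clusterQueue   workqueue.RateLimitingInterface
}

// Start runs one worker loop per queue, so slow resource syncs cannot starve
// namespace or cluster processing. The process funcs stand in for the
// controller's processResource/processNamespace/processCluster methods.
func (c *splitController) Start(ctx context.Context, processResource, processNamespace, processCluster func(context.Context, string) error) {
	go wait.Until(func() { drain(ctx, c.resourceQueue, processResource) }, time.Second, ctx.Done())
	go wait.Until(func() { drain(ctx, c.namespaceQueue, processNamespace) }, time.Second, ctx.Done())
	go wait.Until(func() { drain(ctx, c.clusterQueue, processCluster) }, time.Second, ctx.Done())
	<-ctx.Done()
}

// drain processes items from one queue until it shuts down, requeueing on error.
func drain(ctx context.Context, q workqueue.RateLimitingInterface, process func(context.Context, string) error) {
	for {
		k, quit := q.Get()
		if quit {
			return
		}
		key := k.(string)
		if err := process(ctx, key); err != nil {
			q.AddRateLimited(key)
		} else {
			q.Forget(key)
		}
		q.Done(key)
	}
}
```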

imjasonh

comment created 5 hours ago

Pull request review comment kcp-dev/kcp

WIP: Schedule namespaces and all resources in them to clusters

```go
package namespace

import (
	"context"
	"fmt"
	"math/rand"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/tools/clusters"
	"k8s.io/klog/v2"

	"github.com/kcp-dev/kcp/pkg/apis/cluster/v1alpha1"
	kerrors "github.com/kcp-dev/kcp/pkg/util/errors"
)

const clusterLabel = "kcp.dev/cluster"

// reconcileResource is responsible for setting the cluster for a resource of
// any type, to match the cluster where it's namespace is assigned.
func (c *Controller) reconcileResource(ctx context.Context, lclusterName string, unstr *unstructured.Unstructured, gvr *schema.GroupVersionResource) error {
	klog.Infof("Reconciling %s %s/%s", gvr.String(), unstr.GetNamespace(), unstr.GetName())

	// If the resource is not namespaced (incl if the resource is itself a
	// namespace), ignore it.
	if unstr.GetNamespace() == "" {
		klog.Infof("%s %s had no namespace; ignoring", gvr.String(), unstr.GetName())
		return nil
	}

	// Align the resource's assigned cluster with the namespace's assigned
	// cluster.
	// First, get the namespace object (from the cached lister).
	ns, err := c.namespaceLister.Get(clusters.ToClusterAwareKey(lclusterName, unstr.GetNamespace()))
	if err != nil {
		return kerrors.NewRetryableError(err)
	}

	lbls := unstr.GetLabels()
	if lbls == nil {
		lbls = map[string]string{}
	}

	old, new := lbls[clusterLabel], ns.Labels[clusterLabel]
	if old == new {
		// Already assigned to the right cluster.
		return nil
	}

	// Update the resource's assignment.
	klog.Infof("Patching to update cluster assignment for %s %s/%s: %q -> %q", gvr, ns.Name, unstr.GetName(), old, new)
	if _, err := c.dynClient.Resource(*gvr).Namespace(ns.Name).Patch(ctx, unstr.GetName(), types.MergePatchType, clusterLabelPatchBytes(new), metav1.PatchOptions{}); err != nil {
		return kerrors.NewRetryableError(err)
	}
	return nil
}

// reconcileNamespace is responsible for assigning a namespace to a cluster, if
// it does not already have one.
//
// After assigning (or if it's already assigned), this also updates all
// resources in the namespace to be assigned to the namespace's cluster.
func (c *Controller) reconcileNamespace(ctx context.Context, lclusterName string, ns *corev1.Namespace) error {
	klog.Infof("Reconciling Namespace %s", ns.Name)

	if ns.Labels == nil {
		ns.Labels = map[string]string{}
	}

	clusterName := ns.Labels[clusterLabel]
	if clusterName == "" {
		// Namespace is not assigned to a cluster; assign one.
		// First, list all clusters.
		cls, err := c.clusterLister.List(labels.Everything())
		if err != nil {
			return kerrors.NewRetryableError(err)
		}

		// TODO: Filter out un-Ready clusters so a namespace doesn't
```
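clusterLabelPatchBytes isn't shown in this hunk. Given the types.MergePatchType call above, a plausible shape for it is a JSON merge patch that touches only the cluster label; this is a guess at the helper, not the PR's actual code:

```go
package namespace

import "fmt"

// clusterLabel matches the constant in the diff above.
const clusterLabel = "kcp.dev/cluster"

// clusterLabelPatchBytes (guessed shape): a JSON merge patch that sets only
// the kcp.dev/cluster label, leaving every other label untouched.
func clusterLabelPatchBytes(val string) []byte {
	return []byte(fmt.Sprintf(`{"metadata":{"labels":{%q:%q}}}`, clusterLabel, val))
}
```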

Do we also apply a label on ready clusters? This might be a good followup.

imjasonh

comment created 5 hours ago

Pull request review comment kcp-dev/kcp

WIP: Schedule namespaces and all resources in them to clusters

The diff context here is the same namespace controller file shown above; the hunk continues from processNextWorkItem:

```go
func (c *Controller) processNextWorkItem(ctx context.Context) bool {
	// Wait until there is a new item in the working queue
	k, quit := c.queue.Get()
	if quit {
		return false
	}
	key := k.(string)

	// No matter what, tell the queue we're done with this key, to unblock
	// other workers.
	defer c.queue.Done(key)

	var err error
	if strings.HasPrefix(key, "namespace::") {
		err = c.processNamespace(ctx, strings.TrimPrefix(key, "namespace::"))
	} else if strings.HasPrefix(key, "cluster::") {
		err = c.processCluster(ctx, strings.TrimPrefix(key, "cluster::"))
	} else {
		err = c.processResource(ctx, key)
	}
	c.handleErr(err, key)
	return true
}

func (c *Controller) handleErr(err error, key string) {
	// Reconcile worked, nothing else to do for this workqueue item.
	if err == nil {
		c.queue.Forget(key)
		return
	}

	if _, ok := err.(*kerrors.RetryableError); ok {
		num := c.queue.NumRequeues(key)
		klog.Errorf("Error reconciling key %q, retrying... (#%d): %v", key, num, err)
		c.queue.AddRateLimited(key)
		return
	}

	// Give up and report error elsewhere.
	c.queue.Forget(key)
	runtime.HandleError(err)
	klog.Errorf("Dropping key %q after non-retryable error: %v", key, err)
}

// key is gvr::KEY
func (c *Controller) processResource(ctx context.Context, key string) error {
	parts := strings.SplitN(key, "::", 2)
	if len(parts) != 2 {
		klog.Errorf("Error parsing key %q; dropping", key)
		return nil
	}
	gvrstr := parts[0]
	gvr, _ := schema.ParseResourceArg(gvrstr)
	if gvr == nil {
		klog.Errorf("Error parsing GVR %q; dropping", gvrstr)
		return nil
	}
	key = parts[1]

	obj, exists, err := c.ddsif.IndexerFor(*gvr).GetByKey(key)
```

Good call, done.

imjasonh

comment created 5 hours ago

push event imjasonh/kcp

- Jason Hall, commit sha 3a7ac9b230b05414aab8dc3bbed2ec73b7b830b3: WIP: Schedule namespaces and all resources in them to clusters. This does not yet include: tests covering this behavior; syncer transforming namespace names when applying to physical clusters.

pushed 5 hours ago

push event imjasonh/kcp

- Andy Goldstein, commit sha 0fa88d682c74e233b3253f7ba50fb8beffb9cab4: Pin controller-gen to v0.7.0
- Andy Goldstein, commit sha 56d6a5e44dadaec39199df5ccee6e2da8adbbd3b: Regenerate using controller-gen 0.7
- Steve Kuznetsov, commit sha 8613fb0ddd500827dfd6feb7d2cdb6de2558e1bc: config: add a simple bootstrapping routine for CRDs. Today, we have a helper to apply some YAML to the default logical cluster. However, we would like to be able to apply specific CRDs, as not every controller needs all possible CRDs, we would like to direct the application to specific logical clusters, and we would like to wait for the system to establish the CRDs before moving on, or client calls to these APIs will fail if made quickly after they are introduced. This patch adds all of that functionality, as well as using Go's embedding for the data so that the routine is portable. Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
- Steve Kuznetsov, commit sha 5fa5d5b39280f38d8ca814de3c3ea4b070234ce7: server: improve cluster controller correctness. We noticed that informers based on CRDs need to be started *after* the CRDs exist and are established. Also, looping over _all_ contexts in the server-generated kubeconfig is no longer correct, and we should choose the specific ones we needed. Of note, looping over all was creating CRDs with cluster=*. Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
- pengli, commit sha 5b44433b12b0755e4c667955930eae9f4e5a6ed9: start controllers in its goroutines
- Andy Goldstein, commit sha 269922e5bf3bedc178695b46fe0cde4430edb824: Merge pull request #226 from vincent-pli/add-goroutine: Start controllers in its goroutines
- Andy Goldstein, commit sha 5aa8031c8047602b20501ad4c1520ba2c43ad1af: Merge pull request #224 from stevekuznetsov/skuznets/crd-bootstrapping: config: add a simple bootstrapping routine for CRDs
- Andy Goldstein, commit sha 1f4a701f0c04f4a042f5ec57f5cc39a48d5a1122: CI: check kubernetes out after verify-codegen. Because verify-codegen updates imports under the covers, and it affects everything in the working directory, we have to check out kubernetes after verifying codegen; otherwise, kubernetes code will get modified, and we'll get compiler errors trying to compile e2e.test. Signed-off-by: Andy Goldstein <andy.goldstein@redhat.com>
- Steve Kuznetsov, commit sha 2c7d4d448f6d0d50c1b8872c47eb6924f0c37479: Merge pull request #223 from ncdc/pinned-controller-gen: Pin controller-gen to v0.7.0
- Andy Goldstein, commit sha c1969a88f0be72e64c6d0811187c847f4047d666: Add code-generator to tools.go. Add the generators from code-generator to tools.go so the 'go list' command in hack/update-codegent-client.sh can find k8s.io/code-generator correctly. Signed-off-by: Andy Goldstein <andy.goldstein@redhat.com>
- Steve Kuznetsov, commit sha 989ffd56f0b91c1d15b58cd681f950eae172a7a3: Merge pull request #227 from ncdc/add-code-generator-to-tools: Add code-generator to tools.go
- Steve Kuznetsov, commit sha c514ebfe85a1410e615f0e08e42731f78e69eb48: apis/tenancy: add WorkspaceShards to the Scheme. These types were omitted in error. Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
- Steve Kuznetsov, commit sha 1310d59cf35662682aa9d51d814364fff07da318: apis/tenancy: improve Workspace structure. Add a Location object to encode current and future shard placement, along with Conditions to explain the current state. Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
- Steve Kuznetsov, commit sha f0d7bca323d35b2e50dc43398dbb1b5fc4b8d86f: chore: regenerate CRD YAML and client code. Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
- Steve Kuznetsov, commit sha c193b978ad86f228eff89ede426f1080681396da: controllers: cleanup from other comments. Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
- Andy Goldstein, commit sha dfe454e7b7006081346d40606364d4865aea0305: Merge pull request #219 from stevekuznetsov/skuznets/fix-nits: controllers: cleanup from other comments
- Steve Kuznetsov, commit sha 5b1be8af205a11de6ce0c1660d04013466395dda: Merge pull request #229 from stevekuznetsov/skuznets/api-fixes: Misc fixes to api
- Joaquim Moreno, commit sha 4e880f5ec0238e6d2ea10213c9bb2e913efeaf1b: Bump kubernetes version. Signed-off-by: Joaquim Moreno <joaquim@redhat.com>
- Andy Goldstein, commit sha 3c354d06018e43cab6f56d3c66260d5042f358b1: Fix CRD bootstrapping for already existing. When bootstrapping CRDs, if they already exist, don't consider it an error. Signed-off-by: Andy Goldstein <andy.goldstein@redhat.com>
- Jason Hall, commit sha 69ae864465070b4eb2941bfe54a8bdd15b297e59: Merge pull request #235 from ncdc/bootstrap-crd-fix: Fix CRD bootstrapping for already existing

pushed 5 hours ago

Pull request review comment kcp-dev/kcp

WIP: Schedule namespaces and all resources in them to clusters

```go
package gvk

import (
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/discovery"
	"k8s.io/client-go/discovery/cached/memory"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/restmapper"
)

type GVKTranslator struct {
	m *restmapper.DeferredDiscoveryRESTMapper
}

func NewGVKTranslator(cfg *rest.Config) *GVKTranslator {
	return &GVKTranslator{m: restmapper.NewDeferredDiscoveryRESTMapper(
		memory.NewMemCacheClient(
			discovery.NewDiscoveryClientForConfigOrDie(cfg)))}
```
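For reference, a sketch of how the wrapped mapper gets used: DeferredDiscoveryRESTMapper implements meta.RESTMapper, so a GVR can be resolved to its kind via KindFor, with discovery results served from the memory cache after the first call. The kubeconfig path and GVR below are placeholders:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/discovery"
	"k8s.io/client-go/discovery/cached/memory"
	"k8s.io/client-go/restmapper"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load a rest.Config from a placeholder kubeconfig path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}

	// Same construction as the GVKTranslator in the diff above.
	m := restmapper.NewDeferredDiscoveryRESTMapper(
		memory.NewMemCacheClient(
			discovery.NewDiscoveryClientForConfigOrDie(cfg)))

	// Resolve a GVR to its GVK; the first call hits discovery, later calls
	// are answered from the in-memory cache.
	gvk, err := m.KindFor(schema.GroupVersionResource{Group: "apps", Version: "v1", Resource: "deployments"})
	if err != nil {
		panic(err)
	}
	fmt.Println(gvk) // apps/v1, Kind=Deployment
}
```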

Yeah, it does for now. This PR only works against a single lcluster for now.

imjasonh

comment created 5 hours ago

push event imjasonh/sget

- Lily Sturmann, commit sha e9801a64b6ae0ffa7e430509f4f96664bda549a5: Temporary fix for banned unwrap functions (#36). Signed-off-by: Lily Sturmann <lsturman@redhat.com>
- Lily Sturmann, commit sha 93825ed895fff3ea9fad27e0c7191b3c79b7980c: Do not execute script on Windows (#37). It's currently failing CI for unknown reasons. Signed-off-by: Lily Sturmann <lsturman@redhat.com>
- Jason Hall, commit sha 3b217ed884a6d6afa5c8984ccad961b743acbbd2: Fix CI: workflow can't specify both uses: and run:. Signed-off-by: Jason Hall <jasonhall@redhat.com>
- Jason Hall, commit sha f4f710dbe21d0523372dbee3f9a5e088060345be: remove --show-output. Signed-off-by: Jason Hall <jasonhall@redhat.com>

pushed 5 hours ago
