
cloudevents/spec 1636

CloudEvents Specification

docker/go-connections 126

Utility package to work with network connections

cloudviz/agentless-system-crawler 96

A tool to crawl systems like crawlers for the web

cloudevents/sdk-java 92

Java SDK for CloudEvents

docker/go-units 91

Parse and print size and time units in human-readable format

cloudevents/sdk-javascript 64

JavaScript SDK for CloudEvents

cloudevents/sdk-csharp 40

CSharp SDK for CloudEvents

cloudevents/sdk-python 37

Python SDK for CloudEvents

issue comment knative/serving

Add K_NAMESPACE env var to pods

True - it would be one more in that table.

duglin

comment created time in a day

issue comment knative/serving

Add K_NAMESPACE env var to pods

you got my hopes up :-) that's on the queue proxy.

you can always set up your service with the downward API?

Do you mean I can pass it in as an env var when I create the service? Sure, but then I need to remember to do that and so does anyone else who uses my image. This just seems like an easy thing that we can share with the user since we already have the info.

duglin

comment created time in a day

issue opened knative/serving

Add K_NAMESPACE env var to pods

In what area(s)?

/area API

Other classifications: /kind good-first-issue

Describe the feature

While this actually isn't a Knative concern (it's more of a Kube one), I think we can help users a bit by adding a K_NAMESPACE env var to the user's pod so that they'll know which namespace they're in. I might have missed it, but I couldn't find an existing env var (or any other way) for my KnService to know which namespace it's running in, and this made it harder for me to invoke a 2nd kSvc running in the same namespace via app.NS.svc.cluster.local. I ended up having to extract the namespace from the incoming HTTP request's URL, which could be hard (if not impossible) in some environments because the URL can be configured.

Is there some other mechanism to get this info that I'm forgetting about?

created time in 2 days
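For reference, the downward API approach mentioned in the issue looks roughly like this in a Service manifest. The env var name K_NAMESPACE is the issue's proposal (not something Knative injects today), and whether Knative's validation webhook permits fieldRef may depend on your version and configuration:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: echo
spec:
  template:
    spec:
      containers:
      - image: duglin/echo
        env:
        # Illustrative: the namespace is exposed to the container via the
        # Kubernetes downward API; every image user must remember to add this.
        - name: K_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
```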

issue opened knative/serving

minScale causes pods to be killed during deploy

In what area(s)?

/area autoscale

What version of Knative?

v0.12.0

Expected Behavior

minScale pods to be created and for them to stick around

Actual Behavior

minScale pods are created, then half are killed and recreated almost immediately

Steps to Reproduce the Problem

kn service create echo --min-scale=10 --image duglin/echo --async && watch kubectl get pods

or

$ cat a
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: echo
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "10"
    spec:
      containers:
      - image: duglin/echo

$ kubectl apply -f a && watch kubectl get pods

You should see 10 pods get created, and then half of them will immediately be killed, and 5 more will be created to take their place.

While with minScale there's no guarantee that things will not go below that number, it seems like something isn't quite right that we seem to purposely kill pods that put us below the minimum requested.

created time in 2 days

pull request comment cloudevents/spec

Fix distributed tracing example.

LGTM thanks! Can I get one more set of eyes just to make sure... then I'll merge.

ian-mi

comment created time in 3 days

push event duglin/serving

Markus Thömmes

commit sha 0d2fada0070d7857c935401b866a4a36da038035

Add AugmentWithResponse to our context factory and unify usages. (#6823) * Add AugmentWithResponse and unify usages. * Remove error returns from stats_reporter. * Add test for AugmentWithResponse. * Fix nits. * Use cancel function.

view details

Julian Friedman

commit sha 03121e2e8966d9d6daa494b835350c0704d49fcc

Move autoscaler config to subpackage (#6822) * Move autoscaler config to its own package * Fix import ordering

view details

Scott Nichols

commit sha 0521fea48c530a912e75b10b73027814946c516e

Migrate reconcilers to use configstore from generated reconcilers. (#6828) * migrate to use configstore from generated reconcilers. * Use pkg reconciler.ConfigStore. Migrate to use rec gen configStore * cleanup.

view details

Victor Agababov

commit sha 9a0191d6bb52269c2c3924b9a97b0fb881619b2d

Fix the GR test flake (#6830) * Fix the GR test flake The data race was occurring when registering new hooks conflated with the background Create call from the SKS controller not _yet_ terminated. The hook list is not guarded to write, but only to read in fake, it seems. https://github.com/knative/serving/blob/master/vendor/k8s.io/client-go/testing/fake.go#L104 vs https://github.com/knative/serving/blob/master/vendor/k8s.io/client-go/testing/fake.go#L131 * rename hooks

view details

Victor Agababov

commit sha 40c87bc59ec5d1e219661181b4fee93ed34681a5

attempt to deflake the test (#6833)

view details

Matt Moore

commit sha 7d0a8c66229326cd82ab056ce61901f1217fe401

Migrate the Revision controller to genreconciler. (#6834) * Migrate the Revision controller to genreconciler. * Fix coverage * Drop revisionLister too.

view details

Matt Moore

commit sha 04d5b2864aa724a0ee0dc983b45fad9ce6e97478

Migrate the certificate reconciler to genreconciler. (#6835) * Migrate the certificate reconciler to genreconciler. * Drop knCertLister too.

view details

Matt Moore

commit sha 5bb6e417e654e9ca5aa54a117f047475414798fe

Drop primary-key listers. (#6836) The way genreconciler is set up, most of these types no longer need their own listers for their primary-key resource because it is simply passed in to, and mutations returned from, ReconcileKind (and its ilk).

view details

Julian Friedman

commit sha 5f1507048d9d634276a2023df2f12326fbfcbce8

Move BucketSize constant back in to autoscaler config (#6838) This avoids a circular dependency importing autoscaler/config in certain places.

view details

Dave Protasowski

commit sha 1208aca19b1619799ef24d20c45c9fdce3367609

the revision controller's child informers now ignore resource version (#6825) The revision controller will reconcile revisions when child resources change. This is accomplished by having informers for the child resource types and filtering them based on the controlling owner. The filtering of the owner would take into account its version (which is included in the apiVersion). Now that we support different versions of our resources this PR relaxes the version constraint by only comparing Group & Kind.

view details

Markus Thömmes

commit sha 15042c3775723df267635dbdf70adb6296bc8ea0

Silence error logs of probe errors. (#6839)

view details

Dave Protasowski

commit sha 1a009450ea2cb10066cb172c357995697e28fd4f

Configuration controller uses v1 API (#6826) * configuration controller uses v1 API * improve coverage * include note to dedupe

view details

Markus Thömmes

commit sha c2608a227994079965f57e8965aa43e95821fb21

Remove extra types and use batch metric recording in the activator. (#6843)

view details

Matt Moore

commit sha 1a4544f73e11fc56a2a5ed76ed3c103658250d77

Auto-update dependencies (#6845) Produced via: `dep ensure -update knative.dev/test-infra knative.dev/pkg knative.dev/caching` /assign vagababov /cc vagababov

view details

Matt Moore

commit sha 931ad549457ed100d49cb6e7da29f4961a966eca

golang format tools (#6844) Produced via: `gofmt -s -w $(find -path './vendor' -prune -o -path './third_party' -prune -o -type f -name '*.go' -print)` `goimports -w $(find -name '*.go' | grep -v vendor | grep -v third_party)` /assign vagababov /cc vagababov

view details

Victor Agababov

commit sha d161feb8979a6f44b950270f71faee45df70ea84

ff pkg (#6850)

view details

Matt Moore

commit sha 4737a3ea42538d876237ee4d5b87e0216a974bb7

Migrate Configuration controller to genreconciler. (#6849)

view details

Matt Moore

commit sha f55f9cb521fbc14179023d6437936a3351075135

Auto-update dependencies (#6854) Produced via: `dep ensure -update knative.dev/test-infra knative.dev/pkg knative.dev/caching` /assign vagababov /cc vagababov

view details

Victor Agababov

commit sha 9fb7f59536e934790b43285823f069cc34d924fe

Clean up metric reporting in the queue proxy. (#6852) * Clean up metric reporting in the queue proxy. This is the most complex of them, since we report two sets of metrics. The code has some duplication (which probably we can clean up later, once this settles in), but it's much easier to follow and we got rid of two files and superfluous interfaces. * fix bug

view details

Victor Agababov

commit sha c0a45e08a35e3c2702ed15dcdf01b34f7b98be71

Return 504 on error and other sweeps (#6859) * Return 504 on error and other sweeps There exists a more proper response code than 503 to report a timeout: 504. QP is a proxy, hence this code is well suited for this use case. We do not guarantee which response code is going to be returned in this case (or I could not find it). Other minor nits and bits * nit:

view details

push time in 4 days

push event cloudevents/spec

Felipe B. Conti

commit sha 2024190bd8bb03de7980bb4736bbc245f8a203da

Remove duplicated word Remove duplicated word "protocol". Signed-off-by: Felipe Bonvicini Conti <felipe.conti@totvs.com.br>

view details

Doug Davis

commit sha 503030698072c0b0ede2bece393c2764a4195355

Merge pull request #568 from felipeconti/patch-1 Remove duplicated word

view details

push time in 4 days

PR merged cloudevents/spec

Remove duplicated word

Remove duplicated word "protocol".

+1 -1

1 comment

1 changed file

felipeconti

pr closed time in 4 days

pull request comment cloudevents/spec

Remove duplicated word

thanks! Merging an obvious typo

felipeconti

comment created time in 4 days

issue comment knative/eventing

Use cloudevents sdk

@n3wscott we can close this now, right?

n3wscott

comment created time in 4 days

Pull request review comment cncf/wg-serverless

Event State Updates

             "$ref": "#/definitions/eventactions"
           }
         },
+        "timeout": {
+          "type": "string",
+          "description": "Time period to wait for incoming events (ISO 8601 format)"

What I was more wondering about is whether an absolute time (e.g. 2020-02-18T11:39:27+00:00) makes sense? I suppose someone could use it, but it would seem to be pretty limited in value. Perhaps the spec should say that while any valid ISO 8601 value is allowed, it is expected to be a duration in most cases - just to provide a bit of guidance for how we expect that field to be used. Not a biggie.

tsurdilo

comment created time in 5 days
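The duration-vs-absolute-time distinction above can be made mechanical: ISO 8601 durations always begin with "P", so a consumer could branch on that. A hypothetical sketch (not anything from the spec itself):

```python
from datetime import datetime

def classify_timeout(value: str) -> str:
    """Classify an ISO 8601 timeout string as a duration or an absolute time.

    ISO 8601 durations start with "P" (e.g. "PT30S", "P1DT2H"); anything
    else is tried as an absolute timestamp.
    """
    if value.startswith("P"):
        return "duration"
    try:
        datetime.fromisoformat(value)  # accepts e.g. "2020-02-18T11:39:27+00:00"
        return "absolute"
    except ValueError:
        return "invalid"
```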

create branch cloudevents/sdk-rust

branch : master

created branch time in 5 days

created repository cloudevents/sdk-rust

created time in 5 days

issue comment knative/client

Add option for adding labels only to service and/or template

re: the fix in #672... I do wonder if it still makes sense to not allow well-known labels on the Revision if we know it'll generate an error. So, after this feature is implemented people could still do --label serving.knative.dev/visibility=cluster-local if they wanted, so should we keep the logic from #672 anyway? Just a thought.

rhuss

comment created time in 5 days

pull request comment knative/client

Prevent service-specific labels from being put on the revision template

thanks @rhuss

duglin

comment created time in 5 days

delete branch duglin/client

delete branch : fixPrivate

delete time in 5 days

pull request comment knative/client

Prevent service-specific labels from being put on the revision template

Let me ask this... if we did the split label feature you mentioned... would we still want code to skip "known" labels that don't belong on Revisions if the user used --label ? Might be good, dunno

duglin

comment created time in 5 days

pull request comment knative/client

Prevent service-specific labels from being put on the revision template

Do we know when https://github.com/knative/client/pull/629 will be merged? It seems to be taking a while and this is kind of a blocker for us. I'm ok with removing this one once #629 is merged.

duglin

comment created time in 6 days

pull request comment knative/client

Prevent service-specific labels from being put on the revision template

/test pull-knative-client-integration-tests

duglin

comment created time in 6 days

pull request comment knative/serving

fix support for private revisions

@mattmoor are there other labels we should exclude?

duglin

comment created time in 6 days

PR closed knative/serving

fix support for private revisions

Labels: area/API, cla: yes, size/M

https://github.com/knative/serving/pull/6770 introduced a bug where:

kn service create echo-internal --image gcr.io/knative-samples/helloworld-go -l serving.knative.dev/visibility=cluster-local

would fail with this error:

Revision creation failed with message: admission webhook "validation.webhook.serving.knative.dev" denied the request: validation failed: invalid key name "serving.knative.dev/visibility": metadata.labels.

This PR will add this label to the Revision.ValidateLabels() func. This is a hotfix PR to unblock people who might be using HEAD. I think a more correct long term approach would be to have the code build up (register) the list of valid labels so that this func can just loop over an array of valid labels instead of having them hard-coded in the switch/case statement.

@dprotaso @mattmoor

Signed-off-by: Doug Davis dug@us.ibm.com

/lint

Release Note

NONE
+30 -2

12 comments

4 changed files

duglin

pr closed time in 6 days
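The "register the list of valid labels" idea from the PR description above could look something like this. A sketch in Python for illustration only; the real code is Go, and the registered key names here are assumptions:

```python
# Sketch of a registry-driven ValidateLabels: callers register allowed keys
# once, and validation becomes a single loop instead of a hard-coded switch.
VALID_REVISION_LABELS = set()

def register_label(key: str) -> None:
    VALID_REVISION_LABELS.add(key)

# Registered at init time, including the visibility label this PR adds.
for k in ("serving.knative.dev/route",
          "serving.knative.dev/service",
          "serving.knative.dev/configurationGeneration",
          "serving.knative.dev/visibility"):
    register_label(k)

def validate_labels(labels: dict) -> list:
    """Return the serving.knative.dev/* keys that are not registered."""
    return [key for key in labels
            if key.startswith("serving.knative.dev/")
            and key not in VALID_REVISION_LABELS]
```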

pull request comment knative/serving

fix support for private revisions

https://github.com/knative/client/pull/672 appears to fix the issue - so closing this one. Can reopen if I'm mistaken.

duglin

comment created time in 6 days

PR opened knative/client

Prevent service-specific labels from being put on the revision template

See: https://github.com/knative/serving/pull/6865

Signed-off-by: Doug Davis dug@us.ibm.com

/lint

+32 -3

0 comments

2 changed files

pr created time in 6 days

push event duglin/client

Doug Davis

commit sha 2e5223936c7cdaa9f4cc22c169d3bd0570634f5c

Prevent service-specific labels from being put on the revision template See: https://github.com/knative/serving/pull/6865 Signed-off-by: Doug Davis <dug@us.ibm.com>

view details

push time in 6 days

create branch duglin/client

branch : fixPrivate

created branch time in 6 days

pull request comment knative/serving

fix support for private revisions

I didn't realize certain labels were "service" only.... testing a fix for CLI now...

duglin

comment created time in 6 days

pull request comment knative/serving

fix support for private revisions

It looks like the CLI's UpdateLabels func does put the same labels in both spots, blindly.

https://github.com/knative/client/blob/master/pkg/serving/config_changes.go#L367

duglin

comment created time in 6 days

Pull request review comment knative/serving

fix support for private revisions

 func (rs *RevisionStatus) Validate(ctx context.Context) *apis.FieldError {

 func (r *Revision) ValidateLabels() (errs *apis.FieldError) {
 	for key, val := range r.GetLabels() {
 		switch {
-		case key == serving.RouteLabelKey || key == serving.ServiceLabelKey || key == serving.ConfigurationGenerationLabelKey:
+		case key == serving.RouteLabelKey || key == serving.ServiceLabelKey || key == serving.ConfigurationGenerationLabelKey || key == config.VisibilityLabelKey:

ok added test

duglin

comment created time in 6 days

push event duglin/serving

Doug Davis

commit sha d103d172a7600b1a1919b2cd21b149f93c907088

add test Signed-off-by: Doug Davis <dug@us.ibm.com>

view details

push time in 6 days

issue closed knative/serving

service/revision labels are not verified on create - only updates

In what area(s)?

/area API

What version of Knative?

v0.12

Expected Behavior

I expected the kn service create to fail

Actual Behavior

It worked, but then a subsequent update on a revision failed, complaining about something from the create.

Steps to Reproduce the Problem

$ kn service create echo --image duglin/echo -l serving.knative.dev/foo=bar
$ kubectl label revision foo=bar --all

Error from server (BadRequest): admission webhook "validation.webhook.serving.knative.dev"
denied the request: validation failed: invalid key name "serving.knative.dev/foo":
metadata.labels

The first command should have failed with an error message similar to what is generated by the kubectl command. Meaning, it should have complained about the bad serving.knative.dev/foo label during the create.

closed time in 6 days

duglin

issue comment knative/serving

service/revision labels are not verified on create - only updates

hmmm I'm seeing different results today using HEAD than what I was seeing yesterday. Gonna close until I can reproduce it....

duglin

comment created time in 6 days

issue opened knative/serving

service/revision labels are not verified on create - only updates

In what area(s)?

/area API

What version of Knative?

HEAD

Expected Behavior

I expected the kn service create to fail

Actual Behavior

It worked, but then a subsequent update on a revision failed, complaining about something from the create.

Steps to Reproduce the Problem

$ kn service create echo --image duglin/echo -l serving.knative.dev/foo=bar
$ kubectl label revision foo=bar --all

Error from server (BadRequest): admission webhook "validation.webhook.serving.knative.dev" denied the request: validation failed: invalid key name "serving.knative.dev/foo": metadata.labels

The first command should have failed with an error message similar to what is generated by the kubectl command. Meaning, it should have complained about the bad serving.knative.dev/foo label during the create.

created time in 6 days

pull request comment knative/serving

fix support for private revisions

@mattmoor should be ready now

duglin

comment created time in 6 days

issue comment knative/serving

Prefix system environment variables with KNATIVE_

/remove-lifecycle rotten

I like the idea of allowing people to make Kn values to their own env vars, but I still think there's value in having the KNATIVE_* ones too.

dgerd

comment created time in 6 days

pull request comment knative/serving

fix support for private revisions

v1beta is fixed now too

duglin

comment created time in 6 days

push event duglin/serving

Doug Davis

commit sha c5ebc67070bd4bbe8a8da1ce4dbd2f75238640b2

fix support for private revisions Signed-off-by: Doug Davis <dug@us.ibm.com>

view details

push time in 6 days

Pull request review comment knative/serving

fix support for private revisions

 import (
 	"knative.dev/pkg/kmp"
 	"knative.dev/serving/pkg/apis/autoscaling"
 	"knative.dev/serving/pkg/apis/serving"
+	"knative.dev/serving/pkg/reconciler/route/config"

sigh - moving it gives me an import cycle for config/activator.yaml:

import cycle not allowed
package knative.dev/serving/cmd/activator
        imports knative.dev/serving/pkg/activator/handler
        imports knative.dev/serving/pkg/activator/net
        imports knative.dev/serving/pkg/activator/util
        imports knative.dev/serving/pkg/apis/serving/v1alpha1
        imports knative.dev/serving/pkg/apis/autoscaling/v1alpha1
        imports knative.dev/serving/pkg/apis/serving
        imports knative.dev/serving/pkg/reconciler/route/config
        imports knative.dev/serving/pkg/apis/serving

will keep digging

duglin

comment created time in 7 days

pull request comment knative/serving

fix support for private revisions

A testcase for this situation would be good too :-) One thing I don't understand is why this validate code didn't run into this issue before? Is it because these labels used to be set during a create() and this validate check isn't called during a create()?

duglin

comment created time in 7 days

PR opened knative/serving

fix support for private revisions

https://github.com/knative/serving/pull/6770 introduced a bug where:

kn service create echo-internal --image gcr.io/knative-samples/helloworld-go -l serving.knative.dev/visibility=cluster-local

would fail with this error:

Revision creation failed with message: admission webhook "validation.webhook.serving.knative.dev" denied the request: validation failed: invalid key name "serving.knative.dev/visibility": metadata.labels.

This PR will add this label to the Revision.ValidateLabels() func. This is a hotfix PR to unblock people who might be using HEAD. I think a more correct long term approach would be to have the code build up (register) the list of valid labels so that this func can just loop over an array of valid labels instead of having them hard-coded in the switch/case statement.

@dprotaso @mattmoor

Signed-off-by: Doug Davis dug@us.ibm.com

/lint

Release Note

NONE
+2 -1

0 comments

1 changed file

pr created time in 7 days

create branch duglin/serving

branch : fixPrivate

created branch time in 7 days

Pull request review comment cncf/wg-serverless

Event State Updates

             "$ref": "#/definitions/eventactions"
           }
         },
+        "timeout": {
+          "type": "string",
+          "description": "Time period to wait for incoming events (ISO 8601 format)"

This is actually the time period to wait for the 2nd+ events in the "and" case, right? In the "or" case this timeout would have no effect, I assume. If so, it might be good to state that the start of the timeout is when the first event is seen.

tsurdilo

comment created time in 8 days

Pull request review comment cncf/wg-serverless

Event State Updates

           ],
           "description": "State type"
         },
-        "end": {
-          "$ref": "#/definitions/end",
-          "description": "State end definition"
+        "exclusive": {
+          "type": "boolean",
+          "default": true,
+          "description": "If true consuming one of the defined events causes its associated actions to be performed. If false all of the defined events must be consumed in order for actions to be performed"

in the "true"/or case, while I think it's implied, I think it might be best to be more explicit and add something about how if multiple events that match the array are seen at the same time, then each event would cause the actions to be performed independently and they are not grouped together into one workflow execution - unlike the "and" case. That is correct, right?

tsurdilo

comment created time in 8 days
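The "or"/"and" grouping question in the review comment above can be made concrete with a toy model. This is purely illustrative of the semantics being asked about, not the spec's actual schema or field names:

```python
def firings(seen, defined, exclusive):
    """Toy model of the proposed 'exclusive' flag.

    exclusive=True  ("or"):  each matching event fires the actions on its own,
    so simultaneous matches produce independent firings.
    exclusive=False ("and"): actions fire once, grouped, only after all
    defined events have been consumed.
    Returns a list of firings; each firing lists the event(s) that caused it.
    """
    if exclusive:
        return [[e] for e in seen if e in defined]
    if set(defined) <= set(seen):
        return [list(defined)]
    return []
```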

Pull request review comment cncf/wg-serverless

Event State Updates

             "$ref": "#/definitions/eventactions"
           }
         },
+        "timeout": {
+          "type": "string",
+          "description": "Time period to wait for incoming events (ISO 8601 format)"

Does this need to be specified as a duration rather than an absolute time?

tsurdilo

comment created time in 8 days

pull request comment cncf/wg-serverless

Enhance cloudevent matching in event states

the advantage of an array is that there's no parsing, no expression language and people won't naturally think.... hey, let's also support this other syntax option

tsurdilo

comment created time in 9 days

push event cloudevents/spec

Klaus Deißner

commit sha 8fe59fc1971db3611ef4c70c3c9b396710659ff6

Paragraph about nested events to the primer (#567) * Added a paragraph about nested events to the primer Signed-off-by: Klaus Deissner <klaus.deissner@sap.com> * Added a paragraph about nested events to the primer Signed-off-by: Klaus Deissner <klaus.deissner@sap.com> * line length <= 80 characters Signed-off-by: Klaus Deissner <klaus.deissner@sap.com>

view details

push time in 9 days

PR merged cloudevents/spec

Paragraph about nested events to the primer

To finally resolve #72 , I added a paragraph to the primer.

+25 -0

1 comment

1 changed file

deissnerk

pr closed time in 9 days

issue closed cloudevents/spec

Nested events?

Events occur on different layers of a system. This might lead to nesting of events. E.g. an API gateway may receive an HTTP POST from a web hook, which already contains event data from a SaaS application. In addition the API gateway may be configured to raise an event to a function, when a POST request to a certain URL is received. How is this situation handled?

Alternative 1: Nesting

The API gateway wraps the SaaS application event into an API gateway event and sends it to the function.

Alternative 2: Forwarding

The API gateway recognizes the event context in the HTTP header and forwards the application event to the function using the protocol of choice. No API gateway event would be triggered.

Alternative 3: ?

closed time in 9 days

deissnerk

pull request comment cloudevents/spec

Paragraph about nested events to the primer

Approved on the 2/13 call

deissnerk

comment created time in 9 days

issue comment knative/client

Allow an option whether to use retries on modified errors in update operations

I think "A" might be the best approach. I didn't realize we'd retry today so exposing that to the user (as an advanced option) so that they can control if they want that at all would be good.

"B" is interesting but I'm not sure it's really worth all of the effort, plus I'd still want a way for the user to say "never retry even if just the status was updated", which gets back to "A". Also, "B" might force a user to understand too much of the underlying Kube system (spec vs status), and I think it might be best to just abstract it away to "the service was modified".

rhuss

comment created time in 10 days
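Option "A" above (making retry-on-conflict explicit and optional) might look like the sketch below, where retries=0 means "never retry, surface the conflict immediately". This is a hypothetical API for illustration, not kn's actual code:

```python
import time

class ConflictError(Exception):
    """Stand-in for Kubernetes' 409 "the object has been modified" error."""

def update_with_retry(apply_update, retries=3, backoff_seconds=0.0):
    """Run apply_update, retrying only on ConflictError.

    Makes at most `retries` extra attempts; with retries=0 a conflict
    propagates to the caller on the first failure.
    """
    attempt = 0
    while True:
        try:
            return apply_update()
        except ConflictError:
            if attempt >= retries:
                raise
            attempt += 1
            time.sleep(backoff_seconds)
```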

issue opened knative/website

Add editable search box to search results page

Describe the change you'd like to see: From this page: https://knative.dev/docs/ if I enter a search phrase I get a new tab/window with the results. However, the search phrase is not shown to me on that results page in an editable form, which means if I had a typo in my search I need to close the window and use the old/previous window. It would be nice if I could just edit the search phrase on the new results page.

Here's what the page looks like now: [screenshot of the search results page]

created time in 10 days

push event duglin/serving

Doug Davis

commit sha 26549650da93a767f9dc3b13d2574f7521c39dde

async support Signed-off-by: Doug Davis <dug@us.ibm.com>

view details

Doug Davis

commit sha f620423bc52cf4be4ebd5acda689892502842c46

add support for tasks extract entrpoint/cmd add support for flavor add support for keda add support for taints move batch updater logic into QP Signed-off-by: Doug Davis <dug@us.ibm.com>

view details

push time in 10 days

push event duglin/serving

Doug Davis

commit sha 26549650da93a767f9dc3b13d2574f7521c39dde

async support Signed-off-by: Doug Davis <dug@us.ibm.com>

view details

push time in 10 days

push eventduglin/serving

Kenjiro Nakayama

commit sha 102be86c30eea32db23b79f0ea13818216ec1c4e

Set prober target port from gateway instead of endpoint (#6532) * Set prober target port from gateway instead of endpoint Currently target port to probe is set from endpoint's port. However, it does not work when endpoint has different port number from gateway. So, this patch sets prober target port from gateway instead of endpoint. * Fix typo * Use not common port 80 but 8080 for unit test

view details

Victor Agababov

commit sha dfc54444674ae54571036d6f037666c9f5134046

Remove the outdated go version check. (#6539) The current codebase won't compile with go version less than 1.13 due to the APIs we're using (e.g. duration.milliseconds or %w formatter). Thus it no longer makes sense to verify we run at least 1.12. I could upgraded to 1.13 check, but it's 1 compilation error vs another, so I'd rather have less code than more. /assign @mattmoor

view details

Victor Agababov

commit sha b4146e9e03b1584977de6a994268def9f28c8769

Add process uptime to the queue. (#6540) * Add process uptime to the queue. This is the first step in getting autoscaler to try to avoid young pods, in order not to suggest lower scale, due to the fact that new pods are not full yet * nits

view details

Zhimin Xiang

commit sha 29ba4e52d4f300844ef38df4c0f7bbecb57c6e9b

Add Auto TLS E2E test for per ksvc Certificate provision by using local CA (#6439) * E2E tests for per ksvc cert provision * fix the year * address comments * fix typo

view details

Markus Thömmes

commit sha 7dc6e011e9f79a1069a79482192993c222f3a4d5

Avoid tracing allocations if tracing is disabled anyway. (#6538) * Avoid unnecessary allocation of spans in the activator's handler. * Avoid tracing allocations in queue-proxy if tracing is disabled. * Fix handler test. * Fix nit.

view details

Tara Gu

commit sha fe0448bcd75f2b422a3e123700c9293822aafac1

Update test for checking last pinned timestamp (#5830) This PR updates the tests for checking last pinned timestamp for both valid revisions and invalid revisions.

view details

Markus Thömmes

commit sha 20f54b30640f8962bbcfac74986b0d222071c765

Add benchmarks for the request log handler and reduce contention. (#6547) * Add a benchmark for the request log handler. * Use unsafe.Pointer instead of RWMutex for the template to get the least possible contention. We don't care about consistency of the template variable at all. We only want it to be picked up after it is set eventually. Using an atomically handled pointer is quicker than an RWMutex in this case. * Add a benchmark that actually uses a template. * Some code deduplication. * Fix comments.

view details

Markus Thömmes

commit sha cfe66886ffd5262d5c1b9b5b844dc4da15b76e55

Create SKS before KPA in autoscaler GlobalResync test. (#6557) This test runs a "proper" controller process so we don't control when a reconcilation is kicked off. Creating the KPA first therefore leads to a race where reconcilation might happen before the test creates a dummy SKS with a dummy PrivateServiceName filled in. That will never resolve itself because we don't run an SKS controller here so nothing will fill in the missing PrivateServiceName and no decider will ever be created. Creating the SKS first should deterministically solve this as the reconciler should now see the SKS we created in any case.

view details

Matt Moore

commit sha f3feaac527db6d3c507c929bbce479bd9c15566b

Auto-update dependencies (#6559) Produced via: `dep ensure -update knative.dev/test-infra knative.dev/pkg knative.dev/caching` /assign vagababov /cc vagababov

view details

Zhimin Xiang

commit sha 242a31c4099a8d227b82624c5120e0b1e29dd342

Call Start function for configmapwatcher so the watcher will be shut (#6555) down when the test ends.

view details

Markus Thömmes

commit sha 637dc521323f908362c290da66bf9ebd3c498b64

Use correct tickProvider in multiscaler to allow override in test. (#6556) The tests override the tickProvider of the multiscaler to be able to accurately control the tick timings and not rely on wallclock time. The multiscaler however used an old global tickProvider so the tests could not override it at all, leading to unexpected ticks and races.

view details

Andrew Su

commit sha 2499f40532f53d8be4cce5e203b83cfca8968228

Update httpproxy with port and header, so it can be used in Ingress conformance tests without queue-proxy (#6560) * Update httpproxy - Enable port selection - Append http header “K-Network-Hash” * Update PR comments

view details

Evan Anderson

commit sha 7b37bce990d4ab8325b54c9be57bae3bbceab759

Convert tag.Insert to tag.Upsert (#6561)

view details

Victor Agababov

commit sha 4505ff00396d4acd85cd079ce7f20d815927d157

Add parsing bit to the new process uptime metric. (#6563) * Add parsing bit to the new process uptime metric. This parses the metrics in the scraper. Since this is a new metric, the old revision pods won't have it [another good reason for the QP restarts]. So parsing won't fail if the metric is absent. * fix comments

view details

Victor Agababov

commit sha 1465bd4134b794bc270a9079fd7d1fba8e32e278

Fix a bug with the rounding of the small numbers during metric computation (#6573) * Fix a bug with the rounding of the small numbers during metric computation. Make sure we round to 3 decimal digits. Should be completely enought for our computations, given all our inputs are integers. * Fix tests and test reporting.

view details

Markus Thömmes

commit sha 60e2801d76553a03ee767b905e00af537dcdeeb8

Actually handle revision get error. (#6574)

view details

Julian Friedman

commit sha ef944e45df26ae24738a45a4af80d4fd6ddbf155

Clarify how to check Istio gateway is installed (#6579) Added a little hint for newbies like me who struggle to remember that `cluster-local-gateway` is a `service` in order to know what type of resource to ask `kubectl get` for.

view details

Victor Agababov

commit sha e70b4786b3619bf252e800fd50932909b551b1ea

Improve readability in QP metrics and add tests (#6581) This change does not really touch functionality, but improves readability a bit and most importantly adds a test, that permits easier reasoning about the weightedAverage computations.

view details

Markus Thömmes

commit sha 93ad43a045474b0934d56a62da8f83a83413cc92

Wait for reconcilation events in nscert tests. (#6578) These tests used to call `Reconcile` through the backdoor by abusing the ability to have the controller object available. While that's usually not a big deal, this can cause races between actual reconcilation and the "manual" reconcilation triggered by the tests themselves. This has become most apparent in the failures of nscert.Failure where the configmap watcher of `TestDomainConfigExplicitDefaultDomain` triggered after that test was actually done. This removes all manual calls to `Reconcile` and instead makes the whole flow dependent on events coming from "proper" `Reconcile` calls. That should make sure that the state of the test controller is consistent and that all updates are processed in proper order and actually done once the test exits.

view details

Victor Agababov

commit sha 50852d97fd37b82aca67b8d19b4817e72f1951d6

Generalize precision func and raise default to 6 digits (#6585) * Generalize precision func and raise default to 6 digits Since there may be edgecases when 3 digits is not enough. e.g. big window (1hr) and 1 request, or there might be many pods and just one request e.g. after a big drop off in traffic. Hence raise it to 6 digits, which should be correct for all intents and purposes. * Fix test name * Use const

view details

push time in 10 days

Pull request review commentknative/docs

Add pointer to monitoring-core.yaml

 You can find a number of sample on how to get started with Knative Eventing [her
 
 Knative provides a bundle of monitoring components that can be used to make the Serving and Eventing components more observable.
 
+Before you install any specific monitoring components, you must first install the core monitoring pieces:
+
+```bash
+kubectl apply --filename https://github.com/knative/serving/releases/download/{{< version >}}/monitoring-core.yaml

done!

duglin

comment created time in 10 days

push eventduglin/docs

Doug Davis

commit sha 43c535b85d75fa482581f207107fcecbfa48e98e

use template Signed-off-by: Doug Davis <dug@us.ibm.com>

view details

push time in 10 days

issue commentknative/community

Remove the usage of the googlebot and google CLA

While not a blocker for IBM, I do know that having a company specific CLA on an open source project does raise some eyebrows during our internal reviews. And I have heard from other companies that they've run into similar concerns and even roadblocks over it. This falls into the category of "bad optics" when trying to convince people that this really is an open-source project and not just a Google project that allows other people to participate.

This also is related to a question that I believe has been asked before... what exactly does the SC have control over? Most of the conversations have been around Google wanting to retain trademark enforcement, and now we have this. It begs the question... what other areas of control does Google have that people might assume are community/SC owned?

I still question why we can't just use DCOs? Even with "Google's ownership of the project" (which is an interesting phrase unto itself), a DCO should be allowable and help remove some barriers for people.

n3wscott

comment created time in 11 days

pull request commentcncf/wg-serverless

Enhance cloudevent matching in event states

If there's consensus that other operands beyond OR would head down a path that people want to avoid, would it make sense to think of it as a list of "events" (meaning an array where any event in the list triggers it) instead of an "expression" ?
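To sketch what that array form implies for implementations — trigger evaluation becomes a simple membership check with OR semantics, no expression parser needed (Go is used here purely for illustration; the event names are made up):

```go
package main

import "fmt"

// anyMatch implements OR semantics over a list of event types:
// the state triggers if the incoming event matches any entry.
func anyMatch(incoming string, events []string) bool {
	for _, e := range events {
		if e == incoming {
			return true
		}
	}
	return false
}

func main() {
	events := []string{"order.created", "order.updated"}
	fmt.Println(anyMatch("order.updated", events)) // matches an entry
	fmt.Println(anyMatch("order.deleted", events)) // matches nothing
}
```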

tsurdilo

comment created time in 11 days

issue commentknative/community

Remove the usage of the googlebot and google CLA

Can you elaborate? Are you saying the SC can’t change this if they wanted?

Sent from my iPad

On Feb 11, 2020, at 6:00 PM, Ryan Gregg notifications@github.com wrote:

I'm aware of a few issues with the Google CLA bot. I believe we've been working with the team that owns the tool to see if we can improve the issue for multiple owners. The good thing is that we can work around the tool as necessary. If there's more feedback we can take to the team to improve how the bot works on a project like this, I'd love to capture that feedback and share it.

With the structure of the project, the Google CLA is still required to be signed by all contributors or their organizations, which means we're not at the liberty to remove the Google CLA bot.

— You are receiving this because you were mentioned. Reply to this email directly, view it on GitHub, or unsubscribe.

n3wscott

comment created time in 11 days

issue commentknative/community

Remove the usage of the googlebot and google CLA

DCO!

n3wscott

comment created time in 11 days

issue commentknative/client

Add "kn service import"

if nothing else we'd need this to be symmetrical with your kn export cmd
People's OCD would kick in :-)

rhuss

comment created time in 11 days

issue commentknative/client

Add "kn service export"

I like this idea - it will also help with people who want to learn about Kn by easily seeing how someone else created their KnService.

rhuss

comment created time in 11 days

issue openedknative/docs

Add editable search box to search results page

Describe the change you'd like to see: From this page: https://knative.dev/docs/, if I enter a search phrase I get a new tab/window with the results. However, the search phrase is not shown on that results page in an editable form, which means if I had a typo in my search I need to close the window and use the old/previous window. It would be nice if I could just edit the search phrase on this new results page.

Here's what the page looks like now: [screenshot]

created time in 12 days

delete branch duglin/serving

delete branch : removeDupNS

delete time in 12 days

pull request commentknative/docs

Add pointer to monitoring-core.yaml

knative/serving#6663 is now merged.

duglin

comment created time in 12 days

pull request commentknative/serving

Remove duplicate NS in yamls

@mattmoor ok see if you think this covers it: https://github.com/knative/docs/pull/2197

duglin

comment created time in 12 days

PR opened knative/docs

Add pointer to monitoring-core.yaml

DO NOT merge this until https://github.com/knative/serving/pull/6663 is merged

Signed-off-by: Doug Davis dug@us.ibm.com

Fixes #issue-number

Proposed Changes

+6 -0

0 comment

1 changed file

pr created time in 12 days

create branch duglin/docs

branch : serving6663

created branch time in 12 days

push eventduglin/serving

Doug Davis

commit sha 9d25cc92327cd9db21f7b39d483e534bd09f4640

add monitoring-core.yaml Signed-off-by: Doug Davis <dug@us.ibm.com>

view details

push time in 12 days

issue commentcncf/wg-serverless

Unmanageable event expressions in event states

will have to be changed/modified if you move to another impl

That's kind of my point. If the spec mandates a single language then I think this issue goes away because in order to be spec compliant everyone MUST support it - hence we get interop.

I don't have the background in this space to know the answers, but some things that are running through my head:

  • based on what's out there today, and based on the comments I've seen so far, it seems that forcing all implementations to support a new language might be a large hurdle
  • of course, if they're already adding code to support this workflow spec then they're already making changes so if the language is simple, is it really that bad?
  • if the spec's "one language" is simple enough, is there an easy mapping from it to existing languages? Meaning, each impl would just have to have an adapter of some kind
  • related to this... do existing impls have the notion of expressions like this on a per state basis or do they typically use the workflow control logic to do this? If introducing an expression per state is a radical idea then no matter how easy the language is it still could be a burden given the change in processing model

It seems to me that the decision to force one language on all implementations is not yet agreed to by the group. Combine that with the fact that even if there was agreement on that point, the "one" expression language is not specified/defined by the spec. Therefore it might be better to have that as a separate discussion/PR. Meaning, pull it out of the current spec and then bring forward a PR that fully defines the expression language. This would allow people to more concretely see what is being asked of them and we might then be able to get feedback from existing implementations as to whether supporting this common language is something they would consider. A mapping from this new language to existing languages might be useful too.

tsurdilo

comment created time in 13 days

push eventcloudevents/spec

Doug Davis

commit sha 47fd83e5295e96faeb8086c2fbbbb83ab2c22838

add rules for changing Admins Signed-off-by: Doug Davis <dug@us.ibm.com>

view details

Doug Davis

commit sha d97aa4ae39947a587b6c6c17cb81e5ba7169258b

Merge pull request #564 from duglin/issue318 add rules for changing Admins

view details

push time in 13 days

PR merged cloudevents/spec

add rules for changing Admins

Signed-off-by: Doug Davis dug@us.ibm.com

Closes #318

+20 -0

1 comment

1 changed file

duglin

pr closed time in 13 days

pull request commentcloudevents/spec

add rules for changing Admins

Approved on the 2/6 call

duglin

comment created time in 13 days

PR merged cloudevents/spec

Say it's ok to ignore non-MUST recommendations - at your own risk

Closes: #545

Signed-off-by: Doug Davis dug@us.ibm.com

+14 -0

2 comments

1 changed file

duglin

pr closed time in 13 days

push eventcloudevents/spec

Doug Davis

commit sha 31458e3988e8b3bc90bd81f6c785bb576b08f41a

Say it's ok to ignore non-MUST recommendations - at your own risk Closes: #545 Signed-off-by: Doug Davis <dug@us.ibm.com>

view details

Doug Davis

commit sha 082ded8c6286a3f345e12c0eec7ef98c7eb3000c

Merge pull request #562 from duglin/issue545 Say it's ok to ignore non-MUST recommendations - at your own risk

view details

push time in 13 days

issue closedcloudevents/spec

Advanced HTTP events

The documentation describes HTTP related events pretty well. I try to understand advanced use cases which are not covered by the docs.

I'd like to standardize all incoming requests of my SAAS service by forcing cloud events everywhere. This would include API calls performed by users over the HTTP protocol (e.g. REST, RPC). The spec in this repo is pretty straightforward. Based on the HTTP Protocol Binding, nothing prevents such a use case. If I'm correct, then a user would send event metadata as HTTP headers and use the application/json content type for GET/POST requests and application/octet-stream for binary stuff like file uploads. The only restriction I see is the size limit for intermediaries and consumers, which is expected to be up to 64 KB - this blocks file upload implementation. By ignoring this limit, the HTTP event received from a user would look like this:

const httpEvent = {
    "specversion" : "1.0",
    "type" : "com.myservice.api.files.upload",
    "source" : "https://myservice.com/api/files",
    "subject" : "upload",
    "id" : "A234-1234-1234",
    "time" : "2018-04-05T17:31:00Z",
    "datacontenttype" : "application/octet-stream",
    "data" : HTTPRequestObject // stream instance (`req`)
};

I wonder why Google's implementation of HTTP cloud function doesn't provide the event object. It seems like Knative also follows Google's pattern. AWS, however, agrees that a user sends an event object where they attach custom properties.

// Google
exports.helloHttp = (req, res) => {};
exports.helloEvent = (event, context) => {};
// AWS
exports.helloHttp = async (event) => {};

If we ignore how cloud functions work today, cloud events could also start HTTP event streaming (SSE) and there's a bunch of other use cases. At the moment it seems to me that cloud events are only meant for small background triggers and not as an overall event standard. Am I correct? I remember a talk almost a year ago where a guy said that an event carries only data for a consumer to create another request to access actual data. Maybe this is still true.

I would appreciate comments on what I wrote above and some more words about the conceptional-level overview of how the cloud events should be used. It would be great for this spec to be more opinionated to eliminate confusion. Thank you.
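On the HTTP question specifically: in the binding's binary content mode the event attributes travel as ce-* HTTP headers and the data is the raw body, which is what makes arbitrary content types (including octet streams) possible. A minimal Go sketch — the attribute values and endpoint are illustrative, not from any real service:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"strings"
)

// sendEvent posts a binary-mode CloudEvent over HTTP and returns the
// context the receiver extracted from the ce-* headers.
func sendEvent() string {
	// Receiver: event attributes arrive as ce-* headers, data as the body.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		body, _ := io.ReadAll(r.Body)
		fmt.Fprintf(w, "type=%s id=%s data=%s",
			r.Header.Get("ce-type"), r.Header.Get("ce-id"), body)
	}))
	defer srv.Close()

	// Sender: attribute values below are made up for the example.
	req, _ := http.NewRequest(http.MethodPost, srv.URL, strings.NewReader(`{"hello":"world"}`))
	req.Header.Set("ce-specversion", "1.0")
	req.Header.Set("ce-type", "com.example.api.files.upload")
	req.Header.Set("ce-source", "https://example.com/api/files")
	req.Header.Set("ce-id", "A234-1234-1234")
	req.Header.Set("Content-Type", "application/json")

	resp, _ := http.DefaultClient.Do(req)
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	return string(out)
}

func main() {
	fmt.Println(sendEvent())
}
```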

closed time in 13 days

xpepermint

pull request commentcloudevents/spec

Say it's ok to ignore non-MUST recommendations - at your own risk

Approved on the 2/6 call

duglin

comment created time in 13 days

Pull request review commentknative/serving

Remove duplicate NS in yamls

 cat "${SERVING_CORE_YAML}" > "${SERVING_YAML}"
 cat "${SERVING_HPA_YAML}" >> "${SERVING_YAML}"
 cat "${SERVING_ISTIO_YAML}" >> "${SERVING_YAML}"
 
-echo "Building Monitoring & Logging"
-# Use ko to concatenate them all together.
-ko resolve ${KO_YAML_FLAGS} -R -f config/monitoring/100-namespace.yaml \
-    -f third_party/config/monitoring/logging/elasticsearch \
-    -f config/monitoring/logging/elasticsearch \
-    -f third_party/config/monitoring/metrics/prometheus \
-    -f config/monitoring/metrics/prometheus \
-    -f config/monitoring/tracing/zipkin | "${LABEL_YAML_CMD[@]}" > "${MONITORING_YAML}"
-
 # Metrics via Prometheus & Grafana
-ko resolve ${KO_YAML_FLAGS} -R -f config/monitoring/100-namespace.yaml \
+ko resolve ${KO_YAML_FLAGS} -R \
     -f third_party/config/monitoring/metrics/prometheus \
     -f config/monitoring/metrics/prometheus | "${LABEL_YAML_CMD[@]}" > "${MONITORING_METRIC_PROMETHEUS_YAML}"
 
 # Logs via ElasticSearch, Fluentd & Kibana
-ko resolve ${KO_YAML_FLAGS} -R -f config/monitoring/100-namespace.yaml \
+ko resolve ${KO_YAML_FLAGS} -R \

@mattmoor which line in the docs did you have in mind? I agree that if people only look at the table of yaml files then they might miss the namespace yaml, but w.r.t. the instructions, they appear to push people towards using the monitor.yaml, which should be safe.

I can create a monitoring-core.yaml file (for just the NS) but it's not clear to me which docs would point to it.

duglin

comment created time in 13 days

issue commentcncf/wg-serverless

Unmanageable event expressions in event states

If I'm following, the issue isn't just whether the spec should allow for the expression, but whether or not a single expression language could be agreed upon. Meaning, while the expression definition allows for a language to be specified, if every implementation chooses to support a different language then we have zero interop, which makes the spec less useful. In order to have some level of guaranteed interop, it would seem that at least one language would have to be mandated to be supported, and based on what I think I'm hearing, that sounds like it will be a challenge given the number of implementations out there today. Do I have this right?

tsurdilo

comment created time in 13 days

issue closedknative/serving

Requests are routed sub-optimally

In what area(s)?

/area autoscale /area networking

What version of Knative?

master

Issue

This is a follow-on to #1409. I'm still seeing the queuing behavior described.

Using this script (modify the URL to your env):

#!/bin/bash

set -e

echo "Creating service and wait for instance to die"
URL=$(kn service create echo --concurrency-limit=1 --image=duglin/echo | tail -1)

while kubectl get pods | grep -vi terminating | grep echo > /dev/null ; do
  sleep 1
done

echo Prime it
time curl $URL

echo
echo Start a long one
time curl $URL?sleep=30 &
sleep 3

echo
echo next one should come back in no longer than cold-start time
time curl $URL

echo
echo Wait for long one
wait

kn service delete echo

If things work, then the 1st and 3rd curl should take however long cold-starts take (about 5-10 seconds), while the 2nd curl should take about 30 seconds.

However, for me the 3rd curl is consistently taking over 30 seconds, which shows that it is being queued behind the 2nd request and a new instance is not being created to service it.

I ran this with TBC=-1 and TBC=200 - same results.

When running the script, make sure the service is scaled down to zero before each run.

closed time in 13 days

duglin

issue commentknative/serving

Requests are routed sub-optimally

I'm gonna close this as I think it's fixed - and if not we can reopen it

duglin

comment created time in 13 days

issue commentknative/serving

Feature Request: Allow processing of async requests in Knative

/remove-lifecycle rotten

nimakaviani

comment created time in 14 days

issue commentknative/serving

Feature Request: Allow processing of async requests in Knative

/remove-lifecycle stale

nimakaviani

comment created time in 14 days

pull request commentknative/test-infra

Allow publishing manifests to a local dir

@adrcunha which flags do I need to set so that it picks up the KO_DOCKER_REPO value and uses that instead of a gcr or ko.local ?

adrcunha

comment created time in 14 days

issue commentkubernetes/kubernetes

Support removing ephemeral container from pod

Hi - what's the status of this? I'm really interested in using ephemeral containers for some of my use cases but, like others, I'd like to be able to remove them and reclaim all resources, since the pod itself might live for a long time while these ephemeral containers would come and go quite frequently.

shuiqing05

comment created time in 16 days

pull request commentknative/test-infra

Allow publishing manifests to a local dir

cool - wanna merge it?

adrcunha

comment created time in 16 days

Pull request review commentcloudevents/spec

Paragraph about nested events to the primer

 original event source. And as such, it is expected that CloudEvents attributes
 related to the event producer (such as `source` and `id`) would be changed
 from the incoming CloudEvent.
+There might exist special cases in which it is necessary to create a CloudEvent

Minor thing, and I don't have a strong opinion, but since it popped into my head: should we add a `### Nested CloudEvents` header here so that this stands out a bit?

deissnerk

comment created time in 16 days

delete branch duglin/pkg

delete branch : supportKUBECONFIG

delete time in 17 days

push eventduglin/pkg

push time in 17 days

push eventduglin/pkg

Doug Davis

commit sha b6877fe86257fe7a328853c817198b2a6fb75a2b

add printf Signed-off-by: Doug Davis <dug@us.ibm.com>

view details

push time in 17 days

Pull request review commentknative/pkg

Add support for KUBECONFIG env var in the tests

 func initializeFlags() *EnvironmentFlags {
 	flag.StringVar(&f.Cluster, "cluster", "",
 		"Provide the cluster to test against. Defaults to the current cluster in kubeconfig.")
-	var defaultKubeconfig string
-	if usr, err := user.Current(); err == nil {
-		defaultKubeconfig = path.Join(usr.HomeDir, ".kube/config")
+	// Use KUBECONFIG if available
+	defaultKubeconfig := os.Getenv("KUBECONFIG")
+
+	// If KUBECONFIG env var isn't set then look for $HOME/.kube/config
+	if defaultKubeconfig == "" {
+		if usr, err := user.Current(); err == nil {
+			defaultKubeconfig = path.Join(usr.HomeDir, ".kube/config")
+		}
 	}

ok done

duglin

comment created time in 17 days

push eventduglin/pkg

Doug Davis

commit sha b80d1940e1212566d3585d2be1cc7459c8db5248

add printf Signed-off-by: Doug Davis <dug@us.ibm.com>

view details

push time in 17 days

PR opened knative/community

remove links to VCs, point to notes

Since these URLs change every now and then, this will avoid the need for a PR each time they do

Signed-off-by: Doug Davis dug@us.ibm.com

+11 -11

0 comment

1 changed file

pr created time in 17 days

push eventduglin/kn-community

Doug Davis

commit sha 636e651385dd6ab021395030aa762ce8f8e3ab83

remove links to VCs, point to notes Signed-off-by: Doug Davis <dug@us.ibm.com>

view details

push time in 17 days

create branch duglin/kn-community

branch : removeVCurls

created branch time in 17 days

push eventduglin/tools

Doug Davis

commit sha 3d4ae27a33a0eff6f47fc31402c100acd766a304

add doclear support Signed-off-by: Doug Davis <dug@us.ibm.com>

view details

push time in 18 days

pull request commentknative/serving

Try out magical envoy header for pod addressability through the mesh.

@markusthoemmes how did it go?

markusthoemmes

comment created time in 18 days

push eventduglin/pkg

Doug Davis

commit sha 157f801bb5443f391107ae5a07ca3be5dc5ab46a

Add support for KUBECONFIG env var in the tests Signed-off-by: Doug Davis <dug@us.ibm.com>

view details

push time in 18 days
