Pull request review comment knative/test-infra

Allow for me to set KO_DOCKER_REPO

 function parse_flags() {
       --dot-release) is_dot_release=1 ;;
       --auto-release) is_auto_release=1 ;;
       --from-latest-nightly) FROM_NIGHTLY_RELEASE=latest ;;
+      --use-ko-docker-repo) USE_KO_DOCKER_REPO=1 ;;

Ideally, I think it would be nice if scripts like this took two optional input parameters:

1 - the registry to which to push the build images (e.g. KO_DOCKER_REPO, defaulting to the local docker daemon)
2 - the dir in which to place all files needed to do an install (defaulting to some install_files type of dir)

That gives people who want to build Knative themselves the easiest path to getting all of the information they need in well-defined (local) locations, with minimal effort.

If after that some wrapper script wants to push the files to some company-specific location (like GCS), it can do so, but I think the core script should be generic and easy for everyone to run.

I'm not against this script knowing how to talk to company-specific components (like GCS), but that should require a flag - the default should be the easiest: store the files in a dir.
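For illustration, a minimal sketch of what that two-parameter shape could look like in parse_flags (the flag names --docker-repo and --output-dir are hypothetical, not part of this PR):

    # Hypothetical defaults: local docker daemon, plus a well-defined install dir
    DOCKER_REPO_DEFAULT="ko.local"
    OUTPUT_DIR="./install_files"

    function parse_flags() {
      while (( $# )); do
        case "$1" in
          --docker-repo) shift; DOCKER_REPO_DEFAULT="$1" ;;
          --output-dir)  shift; OUTPUT_DIR="$1" ;;
        esac
        shift
      done
      # Only fall back to the default if the caller didn't already set it
      export KO_DOCKER_REPO="${KO_DOCKER_REPO:-${DOCKER_REPO_DEFAULT}}"
      mkdir -p "${OUTPUT_DIR}"
    }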

duglin

comment created time in a day

Pull request review comment knative/test-infra

Allow for me to set KO_DOCKER_REPO

 function parse_flags() {
       --dot-release) is_dot_release=1 ;;
       --auto-release) is_auto_release=1 ;;
       --from-latest-nightly) FROM_NIGHTLY_RELEASE=latest ;;
+      --use-ko-docker-repo) USE_KO_DOCKER_REPO=1 ;;

I'm not using GCS - that's part of the point of this PR. We need to allow people to build Knative outside of Google.

duglin

comment created time in a day

push event cloudevents/spec

lucperkins

commit sha 978a62a46816f0d1346f75db33a596531814570d

Add Ruby SDK to SDK lists

Signed-off-by: lucperkins <lucperkins@gmail.com>

Doug Davis

commit sha 9f1e0cb38276aef333d2ae01b0e96ef4d0a3f463

Merge pull request #548 from lucperkins/lperkins/ruby-sdk

Add Ruby SDK to SDK lists

push time in a day

PR merged cloudevents/spec

Add Ruby SDK to SDK lists
+2 -0

2 comments

2 changed files

lucperkins

pr closed time in a day

pull request comment cloudevents/spec

Add Ruby SDK to SDK lists

thanks @cneijenhuis

lucperkins

comment created time in a day

pull request comment cloudevents/spec

Add highlighting to Primer JSON snippets

I'm not against adding the json tags; I'm just curious to see where it would show up, since I've never seen it taken into account in a browser.
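For context, the change presumably just adds a language hint to the markdown code fences, e.g. turning an unmarked fence into one tagged json (snippet hypothetical):

    ```json
    { "specversion": "1.0", "id": "123" }
    ```

Renderers such as GitHub's use the tag to pick a syntax highlighter; a raw view of the file in a browser shows nothing different, which would explain not seeing it.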

lucperkins

comment created time in a day

pull request comment cloudevents/spec

Add highlighting to Primer JSON snippets

Is it a browser-specific thing? Or a plugin? I don't see anything special: [image]

lucperkins

comment created time in a day

Pull request review comment knative/test-infra

Allow for me to set KO_DOCKER_REPO

 function parse_flags() {
       --dot-release) is_dot_release=1 ;;
       --auto-release) is_auto_release=1 ;;
       --from-latest-nightly) FROM_NIGHTLY_RELEASE=latest ;;
+      --use-ko-docker-repo) USE_KO_DOCKER_REPO=1 ;;

What do I put for "my-gcs"? When I replaced the --release-gcr flag with my private repo and left "my-gcs" as-is, it didn't seem to do anything: no build, no push. And yet it also didn't give an error.

duglin

comment created time in a day

pull request comment cloudevents/spec

Add highlighting to Primer JSON snippets

Aside from some indenting differences, what's changed? I'm not seeing it.

lucperkins

comment created time in a day

pull request comment cloudevents/spec

Add Ruby SDK to SDK lists

LGTM. Can I get one more set of eyes, and then I'll merge?

lucperkins

comment created time in a day

push event duglin/test-infra-1

Doug Davis

commit sha 5edd220430a6c580840bd7c12958c14c056ed4cd

more tweaks

Signed-off-by: Doug Davis <dug@us.ibm.com>

push time in 2 days

Pull request review comment knative/test-infra

Allow for me to set KO_DOCKER_REPO

 function parse_flags() {
     prepare_dot_release
   fi
 
+  if (( ! USE_KO_DOCKER_REPO )); then
+    # If they didn't use the special flag then force it to gcr
+    export KO_DOCKER_REPO="gcr.io/knative-nightly"
+  fi
+
   # Update KO_DOCKER_REPO and KO_FLAGS if we're not publishing.
   if (( ! PUBLISH_RELEASE )); then
     (( has_gcr_flag )) && echo "Not publishing the release, GCR flag is ignored"
     (( has_gcs_flag )) && echo "Not publishing the release, GCS flag is ignored"
-    KO_DOCKER_REPO="ko.local"
-    KO_FLAGS="-L ${KO_FLAGS}"
+    if (( USE_KO_DOCKER_REPO )); then

Wasn't sure whether line 424 was in the old vs. new code... check to see if I put it where you wanted.

duglin

comment created time in 2 days

Pull request review comment knative/test-infra

Allow for me to set KO_DOCKER_REPO

 function parse_flags() {
   if (( ! PUBLISH_RELEASE )); then
     (( has_gcr_flag )) && echo "Not publishing the release, GCR flag is ignored"
     (( has_gcs_flag )) && echo "Not publishing the release, GCS flag is ignored"
-    KO_DOCKER_REPO="ko.local"
-    KO_FLAGS="-L ${KO_FLAGS}"
+    KO_DOCKER_REPO="${KO_DOCKER_REPO:-ko.local}"
+    KO_FLAGS="${KO_FLAGS}"

doi :-)

duglin

comment created time in 2 days

Pull request review comment knative/test-infra

Allow for me to set KO_DOCKER_REPO

 function parse_flags() {
     prepare_dot_release
   fi
 
+  if (( ! USE_KO_DOCKER_REPO )); then
+    # If they didn't use the special flag then force it to gcr
+    export KO_DOCKER_REPO="gcr.io/knative-nightly"
+  fi
+
   # Update KO_DOCKER_REPO and KO_FLAGS if we're not publishing.
   if (( ! PUBLISH_RELEASE )); then
     (( has_gcr_flag )) && echo "Not publishing the release, GCR flag is ignored"
     (( has_gcs_flag )) && echo "Not publishing the release, GCS flag is ignored"
-    KO_DOCKER_REPO="ko.local"
-    KO_FLAGS="-L ${KO_FLAGS}"
+    if (( USE_KO_DOCKER_REPO )); then

okie dokie

duglin

comment created time in 2 days

issue comment knative/serving-operator

KnativeServing operator should enforce the configuration defined in the CR

It might depend on our point of view, but for each resource there is only one source of truth. It is true that, depending on the resource (and its policy), its source of truth might be in a kn configMap or might be in the operator's CR/manifest. But I'm hoping that for configMaps that we expect users to be able to edit, the source of truth would be in kn's configMaps, not in the operator.

But obviously option 1 works too :-) I'm just not excited by today's model.

garron

comment created time in 2 days

issue comment knative/serving-operator

KnativeServing operator should enforce the configuration defined in the CR

Can you elaborate on how it doesn't adhere to kube? With option 2, the operator just sets the initial values for certain resources. I don't think kube has anything to say about whether it must then revert any changes on those resources. I think it's up to the semantics of the operator to decide that.

garron

comment created time in 2 days

issue comment knative/serving-operator

KnativeServing operator should enforce the configuration defined in the CR

In theory I hear ya... but in practice I keep thinking about situations where someone is trying to create a tool, or even just a simple bash script, that they want to hand to someone else; making sure it "just works" means they have to code up all config changes twice and wrap them in an if-statement each time. That's why I'm leaning towards option 2 - it allows people to only need to know about the configMaps.
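As a sketch of that double-coding (the resource names are real, the detection logic is just illustrative):

    # Tool authors end up writing both paths, guarded by an if-statement:
    if kubectl get knativeserving knative-serving -n knative-serving >/dev/null 2>&1; then
      # operator-managed install: edits must go through the CR
      kubectl edit knativeserving knative-serving -n knative-serving
    else
      # manual install: edit the configMap directly
      kubectl edit configmap config-network -n knative-serving
    fi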

garron

comment created time in 2 days

issue comment knative/serving-operator

KnativeServing operator should enforce the configuration defined in the CR

I was going to open an issue but I think this one might be related, if not let me know and I'll open a new one.

Overall: where should a user modify things if they want to change their Knative install?

Today, w/o the operator, if someone wants to do something like modify the template of the URL generated for their ksvc, they would modify the config-network configMap's domainTemplate value. However, with the operator, any updates made directly to that configMap would eventually be overwritten by the operator's reconciliation loop.
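For example, the direct (non-operator) edit being described is roughly this (the template value is illustrative):

    kubectl patch configmap/config-network -n knative-serving --type merge \
      -p '{"data":{"domainTemplate":"{{.Name}}.{{.Namespace}}.{{.Domain}}"}}'

With the operator installed, this change would be reverted on the next reconcile unless it is also reflected in the operator's CR.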

This means that we have two different sources of truth based on how Knative was installed. This also means that the user needs to know which install mechanism was used so they can modify the correct data to get the results they want. This is not a good user experience for people manually managing Knative, nor is it good for tool authors who will want their tools to work against all Knative installs easily (and with minimal conditional logic).

It would seem to me that either:

1 - the operator should become a required component of KnServing and THE ONLY WAY to install it, and thus all configuration must be done via the CR. This would then put the burden on the operator to ensure that ALL possible configuration knobs are available via the CR, since the low-level kube resources would then be off-limits to manual/tooling edits. And our docs should only point to the operator for configuring things.

2 - treat the operator as an optional tool. This would then mean it would probably need to support some kind of "policy" for how to deal with each resource in KnServing it manages. For example, within IBM we've had an operator-like system for a while, and we allow each resource to have a policy that indicates whether the operator should a) just make sure the resource exists, and as long as it does then the operator won't touch it or override any edits made by the user, or b) make sure the resource looks exactly like the operator expects, and any edits made by the user will be overwritten.

I think either way can work, we just need to pick a direction. But the current state doesn't seem ideal.

Thoughts?

/cc @houshengbo @mattmoor

garron

comment created time in 4 days

Pull request review comment knative/docs

This adds a simple cloudevents-go sample.

+package main
+
+import (
+    "context"
+    "fmt"
+    "log"
+    "net/http"
+
+    cloudevents "github.com/cloudevents/sdk-go"
+    "github.com/kelseyhightower/envconfig"
+)
+
+type Receiver struct {
+    client cloudevents.Client
+
+    // If the K_SINK environment variable is set, then events are sent there,
+    // otherwise we simply reply to the inbound request.
+    Target string `envconfig:"K_SINK"`
+}
+
+func main() {
+    client, err := cloudevents.NewDefaultClient()
+    if err != nil {
+        log.Fatal(err.Error())
+    }
+
+    r := Receiver{client: client}
+    if err := envconfig.Process("", &r); err != nil {
+        log.Fatal(err.Error())
+    }
+
+    // Depending on whether targetting data has been supplied,
+    // we will either reply with our response or send it on to
+    // an event sink.
+    var receiver interface{} // the SDK reflects on the signature.
+    if r.Target == "" {
+        receiver = r.ReceiveAndReply
+    } else {
+        receiver = r.ReceiveAndSend
+    }
+
+    if err := client.StartReceiver(context.Background(), receiver); err != nil {
+        log.Fatal(err)
+    }
+}
+
+// Request is the structure of the event we expect to receive.
+type Request struct {
+    Name string `json:"name"`
+}
+
+// Response is the structure of the event we send in response to requests.
+type Response struct {
+    Message string `json:"message,omitempty"`
+}
+
+// handle shared the logic for producing the Response event from the Request.
+func handle(req Request) (resp Response) {
+    resp.Message = fmt.Sprintf("Hello, %s", req.Name)
+    return
+}
+
+// ReceiveAndSend is invoked whenever we receive an event.
+func (recv *Receiver) ReceiveAndSend(ctx context.Context, event cloudevents.Event) error {
+    req := Request{}
+    if err := event.DataAs(&req); err != nil {
+        return err
+    }
+    log.Printf("Got an event from: %q", req.Name)
+
+    resp := handle(req)
+    log.Printf("Sending event: %q", resp.Message)
+
+    r := cloudevents.NewEvent(cloudevents.VersionV1)
+    r.SetType("dev.knative.docs.sample")
+    r.SetSource("https://github.com/knative/docs/docs/serving/samples/cloudevents/cloudevents-go")
+    r.SetDataContentType("application/json")
+    r.SetData(resp)
+
+    ctx = cloudevents.ContextWithTarget(ctx, recv.Target)
+    _, _, err := recv.client.Send(ctx, event)

Just to show the result of the flow; otherwise we're just bit-bucketing the data. I kind of view this like showing the result of a KnEventing Sequence, except just one step is used.
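For reference, a minimal way to exercise the sample and see the flow end-to-end (the URL is hypothetical; the payload matches the sample's Request struct):

    curl -i http://cloudevents-go.default.example.com \
      -H "Ce-Specversion: 1.0" \
      -H "Ce-Type: dev.knative.docs.sample" \
      -H "Ce-Source: curl" \
      -H "Ce-Id: 123" \
      -H "Content-Type: application/json" \
      -d '{"name":"Doug"}'

With no K_SINK set, ReceiveAndReply should carry the response event in the HTTP reply; with K_SINK set, ReceiveAndSend forwards it to the sink and the caller presumably just sees an empty 2xx.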

mattmoor

comment created time in 4 days

Pull request review comment knative/docs

This adds a simple cloudevents-go sample.

(quotes the same code context as the review comment above)

Well, we have flows in eventing where the sink can return an event, so I just thought it might be nice to show what that event is (if it's there) instead of just always returning a 202. I assume this returns a 202 (or 200) w/o a body, right?

mattmoor

comment created time in 5 days

Pull request review comment knative/docs

This adds a simple cloudevents-go sample.

(quotes the same code context as the review comment above)

Would it make sense to return what the sink might send back?

mattmoor

comment created time in 5 days

Pull request review comment knative/test-infra

Allow for me to set KO_DOCKER_REPO

 function parse_flags() {
       --dot-release) is_dot_release=1 ;;
       --auto-release) is_auto_release=1 ;;
       --from-latest-nightly) FROM_NIGHTLY_RELEASE=latest ;;
+      --use-ko-docker-repo) USE_KO_DOCKER_REPO=1 ;;

Do you mean change the flag name from use-ko-docker-repo to docker-repo and pass in the repo name there? If so, I guess, but then there are two different ways to specify the location, and that seems even more confusing than a flag that means "I really mean it".

duglin

comment created time in 6 days

push event duglin/test-infra-1

Doug Davis

commit sha d5d5f11ddf0627140963c158ac61104780f7cd9d

minor edits

Signed-off-by: Doug Davis <dug@us.ibm.com>

push time in 6 days

Pull request review comment knative/test-infra

Allow for me to set KO_DOCKER_REPO

 function parse_flags() {
   [[ -n "${RELEASE_VERSION}" ]] && TAG="v${RELEASE_VERSION}"
   [[ -n "${RELEASE_VERSION}" && -n "${RELEASE_BRANCH}" ]] && (( PUBLISH_RELEASE )) && PUBLISH_TO_GITHUB=1
 
+  # If not already set then go ahead and default to gcr
+  export KO_DOCKER_REPO="${KO_DOCKER_REPO:-gcr.io/knative-nightly}"
+

I think this needs to be before the check for PUBLISH_RELEASE, so I moved it - but check me on that...

duglin

comment created time in 6 days

pull request comment knative/test-infra

Allow for me to set KO_DOCKER_REPO

"Hey, what about an option --use-env-ko-docker-repo?"

I think it's weird to require the user to add a "yes I know what I'm doing" flag, but at this point I just need a way to do a build, so OK.

duglin

comment created time in 7 days

push event duglin/test-infra-1

Doug Davis

commit sha 00892020202e117856f5463496316c216f77b2a2

add a flag

Signed-off-by: Doug Davis <dug@us.ibm.com>

push time in 7 days

pull request comment knative/test-infra

Allow for me to set KO_DOCKER_REPO

"I think it is intentional so the release process doesn't become messed up by accident if someone leaves an ENV variable set wrong."

Thinking more about this... isn't the opposite more likely? Meaning, someone is pointing to a personal repo when they run the release script. Why would someone be pointing to the official repo "by accident" as part of a non-release-build action?

duglin

comment created time in 7 days

pull request comment knative/test-infra

Allow for me to set KO_DOCKER_REPO

"I think it is intentional so the release process doesn't become messed up by accident if someone leaves an ENV variable set wrong. I think a better way, if --release-gcr doesn't work, is add something like --ko-docker-repo-override and have it just change KO_DOCKER_REPO."

If a "normal" dev has write access to the release repo then it seems like there are other security issues since any run of ko could cause harm then.

The other scripts that I've had to use to do builds appear to use KO_DOCKER_REPO as the way to tell them where to put images (e.g. serving does that). I haven't followed the code enough to fully understand why eventing's process is different, but KO_DOCKER_REPO being how someone conveys this information is very nice from a consistency point of view. Adding more flags to reinforce what someone is asking for just feels redundant.
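For reference, this is the usual way KO_DOCKER_REPO is consumed by ko itself (the repo name is illustrative):

    export KO_DOCKER_REPO=registry.example.com/myteam   # or ko.local for the local daemon
    ko publish ./cmd/controller   # build the image and push it to $KO_DOCKER_REPO
    ko apply -f config/           # same, then apply the resolved yaml to the cluster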

duglin

comment created time in 7 days

pull request comment knative/test-infra

Allow for me to set KO_DOCKER_REPO

/hold cancel

duglin

comment created time in 8 days

Pull request review comment knative/test-infra

Allow for me to set KO_DOCKER_REPO

 function parse_flags() {
   if (( ! PUBLISH_RELEASE )); then
     (( has_gcr_flag )) && echo "Not publishing the release, GCR flag is ignored"
     (( has_gcs_flag )) && echo "Not publishing the release, GCS flag is ignored"
-    KO_DOCKER_REPO="ko.local"
-    KO_FLAGS="-L ${KO_FLAGS}"
+    KO_DOCKER_REPO="${KO_DOCKER_REPO:-ko.local}"
+    KO_FLAGS="${KO_FLAGS}"

Removed the -L because it's redundant with the ko.local value - and, if set, it would override the KO_DOCKER_REPO env var.
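A quick demo of the ${VAR:-default} expansion this change relies on (values illustrative):

    unset KO_DOCKER_REPO
    echo "${KO_DOCKER_REPO:-ko.local}"    # -> ko.local
    export KO_DOCKER_REPO=registry.example.com/me
    echo "${KO_DOCKER_REPO:-ko.local}"    # -> registry.example.com/me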

duglin

comment created time in 8 days

push event duglin/test-infra-1

Doug Davis

commit sha 8d1f0983311d43e8721dcdfb8bde08e1895b9905

Allow for me to set KO_DOCKER_REPO

If I set it to a value, I expect the script to do what I asked for. But if there is no value then let it default to whatever it does today.

Signed-off-by: Doug Davis <dug@us.ibm.com>

push time in 8 days

pull request comment knative/test-infra

Allow for me to set KO_DOCKER_REPO

/hold

might be another fix needed...

duglin

comment created time in 8 days

pull request comment knative/test-infra

Allow for me to set KO_DOCKER_REPO

I was afraid of setting that flag :-)

Seems to me that if the user sets KO_DOCKER_REPO to a value, the script should use it.

duglin

comment created time in 8 days

pull request comment knative/test-infra

Allow for me to set KO_DOCKER_REPO

/hold cancel

duglin

comment created time in 8 days

pull request comment knative/test-infra

Allow for me to set KO_DOCKER_REPO

/hold

duglin

comment created time in 8 days

pull request comment knative/test-infra

Allow for me to set KO_DOCKER_REPO

/hold cancel

duglin

comment created time in 8 days

pull request comment knative/test-infra

Allow for me to set KO_DOCKER_REPO

/nohold

duglin

comment created time in 8 days

pull request comment knative/test-infra

Allow for me to set KO_DOCKER_REPO

/hold remove

duglin

comment created time in 8 days

pull request comment knative/test-infra

Allow for me to set KO_DOCKER_REPO

/hold

duglin

comment created time in 8 days

PR opened knative/test-infra

Allow for me to set KO_DOCKER_REPO

If I set it to a value, I expect the script to do what I asked for. But if there is no value then let it default to whatever it does today.

Signed-off-by: Doug Davis dug@us.ibm.com

Request Prow to automatically lint any go code in this PR:

/lint

+4 -2

0 comments

1 changed file

pr created time in 8 days

create branch duglin/test-infra-1

branch : setKO

created branch time in 8 days

issue comment knative/serving

Requests are routed sub-optimally

Yup, as I said above:

"I do believe that the most significant issue appears to be fixed for my scenario"

I'm just trying to make sure a 2nd issue isn't lurking.

duglin

comment created time in 10 days

issue comment knative/serving

Requests are routed sub-optimally

Is the 15 vs 8 due to image download/caching? Can you run it a couple of times to try to avoid the image download times? W/o the image download, those 2 should be almost the same, right? 7 seconds seems like a big diff to me.

duglin

comment created time in 13 days

issue comment knative/serving

Requests are routed sub-optimally

@markusthoemmes By "instance" I mean an instance of the service - IOW a "pod". Yes, the latency will vary between systems, but ideally the time when curl'ing the 1st and 2nd instance should be basically the same - meaning "cold start time" + the time of the GET. Or to put it in terms of the order of the curls... the 1st and 3rd curl should be the same, while the 2nd curl should be about 30 seconds, since I told it to sleep for 30 seconds to force the creation of the 2nd instance/pod when the 3rd curl is received.
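A sketch of the test sequence being described (the URL and sleep parameter are hypothetical):

    time curl -s "$SVC_URL"              # 1st curl: cold start of pod #1
    time curl -s "$SVC_URL?sleep=30" &   # 2nd curl: holds pod #1 busy for ~30s
    sleep 1
    time curl -s "$SVC_URL"              # 3rd curl: with concurrency=1 this should
                                         # cold-start pod #2, not queue behind pod #1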

Based on the limited testing I've done, I do believe that the most significant issue appears to be fixed for my scenario - that being that a 2nd request was queued behind the 1st request instead of a new pod being created to service it (when concurrency=1).

I suspect you're correct that the diff in timing between the 1st and 3rd curl might be the delay in the system detecting that a new instance is needed. If I'm looking at the right config, it appears the stable window defaults to 60s; not sure if "1s" is legal, but using that I still see quite a delay - almost double the first one. Is there a more appropriate value I should try?

re: mesh... nope, no mesh being used

duglin

comment created time in 13 days

issue comment knative/serving

Requests are routed sub-optimally

Yup, using HEAD. Using tbc=-1 and the default (200), I seem to get similar results... that being, the first instance takes about 3 seconds and the 2nd instance takes at least twice as long (usually around 8 seconds). I've updated the script in the first comment - do you see similar results?

duglin

comment created time in 13 days

delete branch duglin/cloudevents-web

delete branch : fixToday

delete time in 13 days

delete branch duglin/cloudevents-web

delete branch : v1.0

delete time in 13 days

push eventcloudevents/cloudevents-web

Doug Davis

commit sha 64a029114d2d2d6b71378e758136b74415170caf

v1.0 announce

Signed-off-by: Doug Davis <dug@us.ibm.com>

Doug Davis

commit sha b6b69a7072024d4d908bc39292695bde886c5a4f

Merge pull request #12 from duglin/v1.0

v1.0 announce

push time in 13 days

PR merged cloudevents/cloudevents-web

v1.0 announce

Signed-off-by: Doug Davis dug@us.ibm.com

+10 -24

0 comments

1 changed file

duglin

pr closed time in 13 days

push event duglin/cloudevents-web

Doug Davis

commit sha 64a029114d2d2d6b71378e758136b74415170caf

v1.0 announce

Signed-off-by: Doug Davis <dug@us.ibm.com>

push time in 13 days

PR opened cloudevents/cloudevents-web

v1.0 announce

Signed-off-by: Doug Davis dug@us.ibm.com

+10 -24

0 comments

1 changed file

pr created time in 13 days

create branch duglin/cloudevents-web

branch : v1.0

created branch time in 13 days

Pull request review comment cncf/wg-serverless

adding roadmap and changelog docs

+# Serverless Workflow - Specification Roadmap
+
+This document describes the tentative plan for the Serverless Workflow specification in the short and long-term.
+
+Serverless Workflow specification is a community effort so any feedback is more than welcome!
+The best ways to provide feedback is contact us (see the "Communication" section in the [readme doc](readme.md))
+or open an Issue in this GitHub project. You can also open a Pull Request to directly
+add your feedback to this document.
+
+
+## Short Term
+
+### Continue updating [specification document](spec.md)
+* Add more examples
+* Add more use cases
+* Update sections to make their contents clearer to understand
+* Update images
+
+### Keep defining/improving serverless workflow Notation
+* Update icons
+* Update shapes / transitions
+* Add more notation examples
+* Add comparisons and align with other existing frameworks/notations
+
+### Add JSON examples
+* Currently JSON Schema definition is provided for each of the workflow elements. Add JSON example for each aswell.
+
+### Create content for community
+* Create blogs/videos showcasing serverless workflow features
+
+## Longer Term

sounds great to me - especially if we could get other languages supported too... like golang

tsurdilo

comment created time in 13 days

Pull request review comment cncf/wg-serverless

adding roadmap and changelog docs

+# Serverless Workflow - Specification Roadmap
+
+This document describes the tentative plan for the Serverless Workflow specification in the short and long-term.
+
+Serverless Workflow specification is a community effort so any feedback is more than welcome!
+The best ways to provide feedback is contact us (see the "Communication" section in the [readme doc](readme.md))
+or open an Issue in this GitHub project. You can also open a Pull Request to directly
+add your feedback to this document.
+
+
+## Short Term
+
+### Continue updating [specification document](spec.md)

This list and the next one, while all good things, seem relatively minor in scope and kind of like obvious clean-up items. I think it might be good to include a list of architectural design points, or features, that you guys are considering, to give people an idea of where the spec might change in the future. Assuming you have some in mind, of course.

tsurdilo

comment created time in 14 days

Pull request review comment cncf/wg-serverless

adding roadmap and changelog docs

(quotes the same roadmap diff context as the review comment above)

I wonder if the word "Options" should be added here. I say this because of the last one... the "workflow API". Does everyone agree with that as something that will be done? If not, it might be good to make it clear that this list is just ideas people have and not necessarily a statement of commitment. Unless you've agreed... then never mind.

tsurdilo

comment created time in 14 days

Pull request review comment cncf/wg-serverless

adding roadmap and changelog docs

+# Serverless Workflow - Specification Roadmap
+
+This document describes the tentative plan for the Serverless Workflow specification in the short and long-term.
+
+Serverless Workflow specification is a community effort so any feedback is more than welcome!
+The best ways to provide feedback is contact us (see the "Communication" section in the [readme doc](readme.md))
+or open an Issue in this GitHub project. You can also open a Pull Request to directly
+add your feedback to this document.
+
+
+## Short Term
+
+### Continue updating [specification document](spec.md)
+* Add more examples
+* Add more use cases
+* Update sections to make their contents clearer to understand
+* Update images
+
+### Keep defining/improving serverless workflow Notation
+* Update icons
+* Update shapes / transitions
+* Add more notation examples
+* Add comparisons and align with other existing frameworks/notations
+
+### Add JSON examples
+* Currently JSON Schema definition is provided for each of the workflow elements. Add JSON example for each aswell.

s/aswell/as well/ - unless you meant "a swell"? :-)

tsurdilo

comment created time in 14 days

Pull request review comment cncf/wg-serverless

adding roadmap and changelog docs

+# Serverless Workflow - Specification Roadmap
+
+This document describes the tentative plan for the Serverless Workflow specification in the short and long-term.
+
+Serverless Workflow specification is a community effort so any feedback is more than welcome!
+The best ways to provide feedback is contact us (see the "Communication" section in the [readme doc](readme.md))
+or open an Issue in this GitHub project. You can also open a Pull Request to directly
+add your feedback to this document.

One thing we've done for CloudEvents is create a primer: https://github.com/cloudevents/spec/blob/master/primer.md that provides insight and in-depth explanations behind the reasons for the spec and some of the design decisions we made. Stuff that isn't really appropriate for a formal specification, but useful for people who like to understand the rationale behind the decisions made. Asking people to read GitHub issues/PRs to get the background isn't as easy to do. Then, in a doc like that, you can position this spec against others to explain why this is different and needed.

tsurdilo

comment created time in 14 days

Pull request review comment cncf/wg-serverless

adding roadmap and changelog docs

+# Serverless Workflow - Changelog

You might find it easier to just run a git cmd to auto-generate this list on a periodic basis (and remove trivial ones). Just because I know how I tend to work... I always forget to update things like this :-)
But aside from that it seems fine.
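For example, something as simple as this gets most of the way there (the tag name is hypothetical):

    git log --no-merges --oneline v0.1..HEAD    # raw list of changes since v0.1
    git shortlog --no-merges v0.1..HEAD         # same list, grouped by author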

tsurdilo

comment created time in 14 days

issue comment knative/client

kn won't start w/o a config

What's kind of funny is that when I think about the two error messages:

1: no Knative serving API found on the backend
2: Get https://api.crc.te... dial tcp: lookup api.crc.testing on 127.0.0.1:53: read udp 127.0.0.1:55884->127.0.0.1:53: i/o timeout

I actually find the 2nd one more helpful (user-friendly) despite it using more geeky terms - which is the opposite of what I might expect. ;-)

I think it's because the 2nd one gives the exact reason behind the error condition and gives the user the exact info they need to try to fix it... their server at that address isn't there. The 1st one is too vague for people to know exactly what happened and doesn't give a hint at how to fix it. For example, if the first one said what @daisy-ycguo mentioned, "a 404 was returned from the server URL" (and showed me the URL), I think there would be no confusion at all and I'd have a hint at how to fix it - despite it being more geeky.

So, what about something like: "The serving endpoint (<insert URL here>) does not appear to be a valid Knative server, please check your installation." - or something like that. That gives me the location of the server, so I can check whether that URL is correct or not - and if not, it's a client config issue. And if the URL is correct, then I know to go debug why the server isn't talking Knative.

duglin

comment created time in 14 days

issue comment knative/serving

Requests are routed sub-optimally

Howdy - what's the status of this one? I just tested with the latest, and while the times I'm seeing are a bit higher than I would expect (~6 seconds instead of ~3 seconds for a new instance), it's nowhere near the 30 seconds I saw before. But I'm wondering whether I'm just getting lucky or a "fix" went in for this?

duglin

comment created time in 14 days

delete branch duglin/client

delete branch : betterError

delete time in 14 days

issue comment knative/docs

Missing docs for cluster local setup

/remove-lifecycle rotten

duglin

comment created time in 15 days

issue opened knative/docs

Pseudo schema for our resources

Describe the change you'd like to see

On today's eventing call I mentioned that I'd be interested in seeing a pseudo schema for our resources. So, I created this: https://docs.google.com/document/d/1MTuQ_v7TH-JgKWr4Ie7bYvvi4uR6PEs4c4DHAfW4Y60/edit# to show the type of thing I was looking for.

While yes this is a very geeky thing and would not replace our more user-friendly docs, I think there are people (like myself) who just want a cliff-notes version of what the yaml should look like and just want to be able to see all of the various options, where they go, and their syntax.

Then, if there are links to get more info about each one (like I show) and in cases where there are ranges on the values that can be used, we can include those as well.

This then almost becomes like a Table of Contents for the resource and people can more easily find what they're looking for, and dive deep on just one topic if they want.
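To make the idea concrete, a hypothetical fragment of such a pseudo schema for a ksvc (not the contents of the linked doc):

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: <string>              # more info: <link to naming docs>
    spec:
      template:
        spec:
          containers:             # exactly one container is allowed
          - image: <string>       # required
            env:                  # optional list of name/value pairs
            - name: <string>
              value: <string>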

Just a thought.

created time in 15 days

PR opened knative/client

Improve error message

While testing I specified this on the kn command line:

    ./kn service create echo --concurrency-limit=1 --concurrency-target=1 --env foo=bar --limits-cpu=1000m --limits-memory=1024m --max-scale=1 --min-scale=1 --port=9999 --requests-cpu=250m --requests-memory=64mi --image duglin/echo

and it returned:

    unable to parse quantity's suffix

I had no idea which value it didn't like. So this PR makes it so the output is now this:

    Error parsing "64mi": unable to parse quantity's suffix

Still not perfect since I still had to think way too hard to realize that it didn't like the lower-case m, but that's a different issue.
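For reference, in Kubernetes quantities "Mi" means mebibytes, "M" means megabytes, and a lower-case "m" means milli-units - which is why "64mi" fails to parse and "1024m" of memory is almost certainly not what was intended. The corrected invocation would presumably look like:

    ./kn service create echo --requests-memory=64Mi --limits-memory=1024Mi --image duglin/echo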

Signed-off-by: Doug Davis dug@us.ibm.com

/lint

+4 -2

0 comments

1 changed file

pr created time in 15 days

create branch duglin/client

branch : betterError

created branch time in 15 days

issue comment knative/client

kn won't start w/o a config

@daisy-ycguo in this case:

    $ KUBECONFIG= kn service create create asd --image asd
    no Knative serving API found on the backend. Please verify the installation.

What is the error condition being detected? No URL specified to connect to? If so, perhaps we could simply reword this to say that - something like:

    No serving API specified, please verify the installation

It's minor, but saying "specified" makes it clearer that config data is missing. Saying something on the "backend" is missing isn't as clear about what's going on, because someone could interpret it as a server-side issue, not a client one.

duglin

comment created time in 15 days

issue comment knative/client

Enable/Disable secure https access to Services

Some platforms, such as IKS, enable TLS by default for all services - so if we do something in this space we need to take into account what the platform might already be doing.

zhanggbj

comment created time in 16 days

issue opened knative/docs

Add support for cmd and args when creating a new service

Knative, via yaml, supports specifying the cmd and args in the podSpec - kn should support this too, since there are times when people may need to specify which exe in the image to run or what args to pass to it. Sometimes the default values in the image are not sufficient.
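For contrast, this is the yaml path that already works (the image and values are illustrative):

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: echo
    spec:
      template:
        spec:
          containers:
          - image: duglin/echo
            command: ["/myexe"]      # overrides the image's entrypoint
            args: ["--verbose"]      # args passed to the exe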

created time in 17 days

push event duglin/knregistry

Doug Davis

commit sha 485a8b99252c863cddab0af082ac8b988721bc5a

update

Signed-off-by: Doug Davis <dug@us.ibm.com>

push time in 21 days

pull request comment cncf/wg-serverless

Removing end state

All 3 of you should have merge rights now - so whoever is first can try to merge this :-) Just PLEASE only merge PRs for the workflow dir - nothing from the generic serverless wg stuff.

tsurdilo

comment created time in 21 days

pull request comment cncf/wg-serverless

Adding reviewers and governance docs

@tsurdilo too

tsurdilo

comment created time in 21 days

pull request comment cncf/wg-serverless

Adding reviewers and governance docs

@ruromero can you join the CNCF GitHub org?

tsurdilo

comment created time in 21 days

Pull request review comment cncf/wg-serverless

Adding reviewers and governance docs

+# Serverless Workflow - Project Governance
+
+As a CNCF member project, we abide by the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).
+
+## Reviewers
+
+A reviewer is a core role within the project.
+They share in reviewing issues and pull requests and their LGTM counts towards the
+required LGTM count to merge a code change into the project.
+You can find list of current reviewers [here](reviewers.md).
+
+## Review Process
+
+Given that there are only 3 reviewers at this time the review process is as follows:
+
+* A Pull Request is submitted.
+* Reviewers review the changes and can add comments and suggestions for the changes.
+* After a review, reviewers can approve the changes by commenting on the PR with "LGTM" ("Looks good to me"),
+request changes to the PR with "CR" ("Changes requested"), or reject the PR with "NA" ("Not acceptable").
+* Reviewers must provide an explanation for both "CR" and "NA" comments to the contributor.
+* 2 "LGTM" comments by different reviewers are needed in order for a change to be accepted.
+* If the contributor is a reviewer herself, then only 1 "LGTM" vote is needed.

s/herself/themselves/ or just leave it off

tsurdilo

comment created time in 21 days

Pull request review comment cncf/wg-serverless

Adding reviewers and governance docs

+# Serverless Workflow - Project Governance
+
+As a CNCF member project, we abide by the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).
+
+## Reviewers

See comment below

tsurdilo

comment created time in 21 days

Pull request review comment cncf/wg-serverless

Adding reviewers and governance docs

+## Serverless Workflow - Reviewers

To be clear... for now I think just one tier (owners) is fine

tsurdilo

comment created time in 21 days

Pull request review comment cncf/wg-serverless

Adding reviewers and governance docs

+## Serverless Workflow - Reviewers

My original comment didn't have Prow in mind; I just wanted a file that listed the people who could merge stuff - whether it's eventually for a tool like Prow, or just so that people know who to nag to get something merged. So anything that conveys that is fine with me. However, I think many projects have another tier of people called "reviewers" who are like "trusted folks": they don't have merge rights, but their LGTM means more than a random person's.

tsurdilo

comment created time in 21 days

push event cloudevents/spec

Aaron Kunz

commit sha 615458ce116d515830d5d4b0042c800f127ad415

Fix typo

Signed-off-by: Aaron Kunz <me@daxaholic.com>

Doug Davis

commit sha c5f0b4f74c1188079b96b1861376e29e2adddc27

Merge pull request #544 from DAXaholic/patch-1

Fix typo

push time in 23 days

PR merged cloudevents/spec

Fix typo
+1 -1

1 comment

1 changed file

DAXaholic

pr closed time in 23 days

pull request comment cloudevents/spec

Fix typo

@DAXaholic thanks!!

DAXaholic

comment created time in 23 days

issue comment knative/client

Proposal: kn API version support

Since 0.10 is almost here, I think that plan seems reasonable.

rhuss

comment created time in 23 days

issue comment knative/client

Change "Address" field name on service describe

ok with me

duglin

comment created time in a month

delete branch duglin/spec

delete branch : fixREADME

delete time in a month

created tag cloudevents/spec

tag v1.0

CloudEvents Specification

created time in a month

release cloudevents/spec

v1.0

released time in a month

delete tag cloudevents/spec

delete tag : v1.0

delete time in a month

push event cloudevents/spec

Doug Davis

commit sha 51da0ff413aa8135f412cc7b9bf3c3ae4783389c

fix links in README

Signed-off-by: Doug Davis <dug@us.ibm.com>

Doug Davis

commit sha 09ff60ce89151f771fa31e3129a5a2b83cecc32f

Merge pull request #543 from duglin/fixREADME

fix links in README

push time in a month

PR merged cloudevents/spec

fix links in README

Signed-off-by: Doug Davis dug@us.ibm.com

+16 -16

1 comment

1 changed file

duglin

pr closed time in a month

PR opened cloudevents/spec

fix links in README

Signed-off-by: Doug Davis dug@us.ibm.com

+16 -16

0 comments

1 changed file

pr created time in a month

create branch duglin/spec

branch : fixREADME

created branch time in a month

created tag cloudevents/spec

tag v1.0

CloudEvents Specification

created time in a month

release cloudevents/spec

v1.0

released time in a month

delete branch duglin/spec

delete branch : makev1

delete time in a month

push event cloudevents/spec

Doug Davis

commit sha 4284ba4e91eb351ecab8d97909774a9a74d29498

v1.0

Signed-off-by: Doug Davis <dug@us.ibm.com>

Doug Davis

commit sha 85e5df9f35faeb2913766343ec878bdfb07b39aa

Merge pull request #541 from duglin/makev1

v1.0 release and vote

push time in a month

PR merged cloudevents/spec

v1.0 release and vote

Please add your LGTM to this PR if you approve of v1.0

Eligible voters are: Enterprise Holdings, Google, IBM, Microsoft, NATS/Synadia, PayPal, Platform9, Red Hat, RX-M, SAP, VMWare, Vlad Ionescu, Erik Erikson, John Mitchell

Signed-off-by: Doug Davis dug@us.ibm.com

+112 -112

14 comments

17 changed files

duglin

pr closed time in a month

pull request comment cloudevents/spec

v1.0 release and vote

Yes: Google, IBM, Microsoft, NATS, PayPal, RedHat, SAP, VMware, Vlad Ionescu, Erik Erikson, John Mitchell. Passed: 11 Yes, 0 No.

duglin

comment created time in a month

push event cloudevents/spec

Lionel Villard

commit sha f64d796e499c941260c5b1e23f41a7c70065dae9

Add CouchDB adapter specification (#542)

* Add CouchDB adapter spec
Signed-off-by: Lionel Villard <villard@us.ibm.com>

* 1.0-rc1 -> 1.0. Use database instead of db.
Signed-off-by: Lionel Villard <villard@us.ibm.com>

* run prettier
Signed-off-by: Lionel Villard <villard@us.ibm.com>

push time in a month

PR merged cloudevents/spec

Add CouchDB adapter specification
+74 -0

1 comment

1 changed file

lionelvillard

pr closed time in a month
