
bnewport/Samples 19

Source code from Billy Newport's blog

bkmartin/cfvendo 2

A Docker container vending machine for Cloud Foundry and IBM Bluemix

fraenkel/app-autoscaler 0

Auto Scaling for CF Applications

fraenkel/bank-vaults 0

A Vault swiss-army knife: a K8s operator, Go client with automatic token renewal, automatic configuration, multiple unseal options and more. A CLI tool to init, unseal and configure Vault (auth methods, secret engines). Direct secret injection into Pods.

fraenkel/binary-buildpack 0

Deploy binaries to Cloud Foundry

fraenkel/cf-test-helpers 0

Helpers for running tests against Cloud Foundry

fraenkel/cli 0

A CLI for Cloud Foundry written in Go

fraenkel/client_golang 0

Prometheus instrumentation library for Go applications

PR closed tektoncd/pipeline

Reviewers
test/builder: namespace is optional (labels: cla: yes, ok-to-test, size/XL)

Changes

Allow the namespace of most of the test resources to be optional. They can be provided as a XXXOp argument to the resource, e.g., tb.Task("my-task", tb.Namespace("my-namespace"))

Fixes #1824

Submitter Checklist

These are the criteria that every PR should meet, please check them off as you review them:

See the contribution guide for more details.

Double check this list of stuff that's easy to miss:

Reviewer Notes

If API changes are included, additive changes must be approved by at least two OWNERS and backwards incompatible changes must be approved by more than 50% of the OWNERS, and they must first be added in a backwards compatible way.

Release Notes

Describe any user facing changes here, or delete this block.

Examples of user facing changes:
- API changes
- Bug fixes
- Any changes in behavior

+512 -479

9 comments

52 changed files

fraenkel

pr closed time in 5 hours

pull request comment tektoncd/pipeline

test/builder: namespace is optional

@vdemeester The only other option is to have a default namespace name, but I am not convinced it buys us much.

fraenkel

comment created time in 5 hours

issue comment golang/go

x/net/http2/h2c: http BaseContext/ConnContext methods are not used

But you do have the remote address via the Request.

jared2501

comment created time in 7 days

issue comment golang/go

x/net/http2/h2c: http BaseContext/ConnContext methods are not used

@jared2501 If you have set the MaxConcurrentStreams, the client side should be getting a stream error. I know that there is a lack of desire to expose the net.Conn underneath the h2 or h2c implementations. I would have to defer to @bradfitz for an alternative way to track streams per connection. You might be able to get away with retrieving the LocalAddrContextKey to manage the streams for a given connection in your own map.

jared2501

comment created time in 8 days

fork fraenkel/vault

A tool for secrets management, encryption as a service, and privileged access management

https://www.vaultproject.io/

fork in 11 days

issue comment golang/go

x/net/http2/h2c: http BaseContext/ConnContext methods are not used

@jared2501 One question, now that you have created the context, what is going to use it?

jared2501

comment created time in 11 days

issue comment golang/go

net/http: HTTP/2 SETTINGS frame being read incorrectly

@x04 Is there a reproducer? Can you run with http2debug so we can see what is being processed?

x04

comment created time in 11 days

issue comment golang/go

x/net/http2: SetKeepAlivesEnabled(false) closes all HTTP/2 connections older than 5 seconds

@nwidger It's all related to how connection management works and when idle connections are closed. There is an outstanding issue, #26303, on which I need more details, since it's unclear to me why http2 is being managed this way.

nwidger

comment created time in 13 days

issue comment golang/go

x/net/http2: SetKeepAlivesEnabled(false) closes all HTTP/2 connections older than 5 seconds

@bradfitz It seems the connection should be transitioned to StateHijacked. I haven't checked but I believe we are just collecting http/2 connections.

nwidger

comment created time in 15 days

issue comment golang/go

net/http: http.Client.Do() sometimes throws INTERNAL_ERROR when using high number of parallel HTTP/2 requests (regression between 1.12 and 1.13.4)

We are going to need a test case or at the very least some http/2 debug to determine why the INTERNAL_ERROR occurred.

Does this problem also occur with tip?

rayvbr

comment created time in 15 days

issue comment stevvooe/protobuild

Proposal to allow multiple service interface implementations for a single .proto

Had to go track this down since it's been a while. It started from https://github.com/containerd/containerd/pull/2475, and I have a branch: https://github.com/fraenkel/protobuild/tree/relative_outputs. TBH, I don't know how far I got since it all went silent.

fraenkel

comment created time in 17 days

issue comment golang/go

net/http: permanently broken connection with error "read: connection reset by peer"

@josharian Did you enable http2 on the Server?

josharian

comment created time in a month

issue comment golang/go

net/http: permanently broken connection with error "read: connection reset by peer"

You were probably affected by https://github.com/golang/go/issues/24138

josharian

comment created time in a month

issue comment golang/go

net/http: HTTP/2 with MaxConnsPerHost hangs or crashes

@michaeldorner This has not been backported to 1.13 yet.

rhysh

comment created time in a month

pull request comment tektoncd/pipeline

test/builder: namespace is optional

The first issue is with the lister-gen generation. It doesn't generate code to handle an empty namespace. https://github.com/kubernetes/code-generator/issues/63

fraenkel

comment created time in a month

pull request comment tektoncd/pipeline

test/builder: namespace is optional

Thanks! It’d actually be great to go through these and remove any namespace declarations that aren't necessary for the test. I suspect that’s basically all cases.

Unfortunately it's not that simple. Some of them can be removed, but others cause test failures.

=== RUN   TestReconcileWithTimeout
2020-01-19T08:54:38.721-0500	DEBUG	TestReconcileWithTimeout.pipeline-controller	reconciler/reconciler.go:99	Creating event broadcaster	{"knative.dev/controller": "pipeline-controller"}
2020-01-19T08:54:38.722-0500	INFO	TestReconcileWithTimeout.pipeline-controller	pipelinerun/controller.go:89	Setting up event handlers	{"knative.dev/controller": "pipeline-controller"}
2020-01-19T08:54:38.722-0500	INFO	TestReconcileWithTimeout.pipeline-controller	pipelinerun/controller.go:101	Setting up ConfigMap receivers	{"knative.dev/controller": "pipeline-controller"}
2020-01-19T08:54:38.722-0500	INFO	TestReconcileWithTimeout.pipeline-controller	pipelinerun/pipelinerun.go:115	Reconciling 2020-01-19 08:54:38.722104295 -0500 EST m=+0.010597083	{"knative.dev/controller": "pipeline-controller"}
2020-01-19T08:54:38.722-0500	ERROR	TestReconcileWithTimeout.pipeline-controller	pipelinerun/pipelinerun.go:130	pipeline run "test-pipeline-run-with-timeout" in work queue no longer exists	{"knative.dev/controller": "pipeline-controller"}
github.com/tektoncd/pipeline/pkg/reconciler/pipelinerun.(*Reconciler).Reconcile
	/home/fraenkel/workspace/pipeline/pkg/reconciler/pipelinerun/pipelinerun.go:130
github.com/tektoncd/pipeline/pkg/reconciler/pipelinerun.TestReconcileWithTimeout
	/home/fraenkel/workspace/pipeline/pkg/reconciler/pipelinerun/pipelinerun_test.go:923
testing.tRunner
	/snap/go/4901/src/testing/testing.go:909
--- FAIL: TestReconcileWithTimeout (0.00s)
    pipelinerun_test.go:935: Expected a CompletionTime on invalid PipelineRun but was nil
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x50 pc=0x152dccf]

goroutine 9 [running]:
testing.tRunner.func1(0xc0004fad00)
	/snap/go/4901/src/testing/testing.go:874 +0x3a3
panic(0x16e3be0, 0x28ba780)
	/snap/go/4901/src/runtime/panic.go:679 +0x1b2
github.com/tektoncd/pipeline/pkg/reconciler/pipelinerun.TestReconcileWithTimeout(0xc0004fad00)
	/home/fraenkel/workspace/pipeline/pkg/reconciler/pipelinerun/pipelinerun_test.go:939 +0x75f
testing.tRunner(0xc0004fad00, 0x19c2c58)
	/snap/go/4901/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
	/snap/go/4901/src/testing/testing.go:960 +0x350
FAIL	github.com/tektoncd/pipeline/pkg/reconciler/pipelinerun	0.016s
FAIL

I am trying to determine how to make the reconciler package work without a namespace.

fraenkel

comment created time in a month

push event fraenkel/pipeline

Michael Fraenkel

commit sha 4d78aeb8c18a03b7dde68ad5b57fd0d2ab0bbf45

test/builder: namespace is optional

Allow the namespace of most of the test resources to be optional. They can be provided as a XXXOp argument to the resource, e.g., tb.Task("my-task", tb.Namespace("my-namespace"))

Fixes #1824

view details

push time in a month

push event fraenkel/pipeline

Michael Fraenkel

commit sha 7ad12f784e6f22955253d0c97eb530b69dfde96a

test/builder.Step name is optional

Allow the name of a tb.Step to be optional. A name can be specified by using the StepOp, tb.StepName(), e.g., tb.Step("image", tb.StepName("step-name"))

Fixes #1823

view details

push time in a month

issue comment golang/go

x/net/http: PROTOCOL_ERROR with HTTP2

Using http.NewRequest("get", url, nil) instead of http.NewRequest("GET", url, nil) leads to the exact same error message:

There is http.MethodGet, so you don't have to think about it.

siscia

comment created time in a month

push event fraenkel/pipeline

Michael Fraenkel

commit sha b3260779b38e15fb81a3506d22ad7e28084e296c

test/builder.Step name is optional

Allow the name of a tb.Step to be optional. A name can be specified by using the StepOp, tb.StepName(), e.g., tb.Step("image", tb.StepName("step-name"))

Fixes #1823

view details

push time in a month

push event fraenkel/pipeline

Michael Fraenkel

commit sha 7367d142b6ebee3963753b71f8842ae5530bf0e4

test/builder: namespace is optional

Allow the namespace of most of the test resources to be optional. They can be provided as a XXXOp argument to the resource, e.g., tb.Task("my-task", tb.Namespace("my-namespace"))

Fixes #1824

view details

push time in a month

push event fraenkel/pipeline

Michael Fraenkel

commit sha b2452fd4964f0890fb625117034791f8dda69dfd

test/builder.Step name is optional

Allow the name of a tb.Step to be optional. A name can be specified by using the StepOp, tb.StepName(), e.g., tb.Step("image", tb.StepName("step-name"))

Fixes #1823

view details

push time in a month

PR opened tektoncd/pipeline

test/builder: namespace is optional

Changes

Allow the namespace of most of the test resources to be optional. They can be provided as a XXXOp argument to the resource, e.g., tb.Task("my-task", tb.Namespace("my-namespace"))

Fixes #1824


+431 -396

0 comments

35 changed files

pr created time in a month

create branch fraenkel/pipeline

branch: namespace_optional

created branch time in a month

PR opened tektoncd/pipeline

test/builder.Step name is optional

Changes

Allow the name of a tb.Step to be optional. A name can be specified by using the StepOp, tb.StepName(), e.g., tb.Step("image", tb.StepName("step-name"))

Fixes #1823


+30 -28

0 comments

8 changed files

pr created time in a month

create branch fraenkel/pipeline

branch: optional_step_name

created branch time in a month

issue closed terraform-providers/terraform-provider-aws

aws_route created during apply and then again during the next plan

Our CI does the same plan/apply but at times we detect a drift after the apply.

When it "drifts", we get the following:

Apply complete! Resources: 42 added, 0 changed, 0 destroyed.

The subsequent plan shows:

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_route.internet-public will be created
  + resource "aws_route" "internet-public" {
      + destination_cidr_block     = "0.0.0.0/0"
      + destination_prefix_list_id = (known after apply)
      + egress_only_gateway_id     = (known after apply)
      + gateway_id                 = "igw-08ad8515ab712d3fd"
      + id                         = (known after apply)
      + instance_id                = (known after apply)
      + instance_owner_id          = (known after apply)
      + nat_gateway_id             = (known after apply)
      + network_interface_id       = (known after apply)
      + origin                     = (known after apply)
      + route_table_id             = "rtb-0112ca0275aa25466"
      + state                      = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.

The odd bit is the apply shows

aws_route.internet-public: Creating...
aws_subnet.public["10.1.16.0/20"]: Creation complete after 1s [id=subnet-08e2a917851035877]
aws_vpc_endpoint.s3: Creating...
aws_route.internet-public: Creation complete after 0s [id=r-rtb-0a96da2203f5122301080289494]

closed time in a month

fraenkel

issue comment terraform-providers/terraform-provider-aws

aws_route created during apply and then again during the next plan

This might be caused by having an additional aws_default_route_table resource to add some tags. The above route uses aws_vpc.default_route_table_id rather than the id from the resource that added the tag. I have adjusted our TF code and will reopen if we continue to see failures.

fraenkel

comment created time in a month

issue closed terraform-providers/terraform-provider-aws

aws_nat_gateway does not always properly destroy itself

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform Version

Terraform v0.12.17

Affected Resource(s)

  • aws_nat_gateway

Terraform Configuration Files


resource "aws_subnet" "public" {
  for_each = var.subnets_public

  vpc_id               = aws_vpc.network.id
  availability_zone_id = each.value
  cidr_block           = each.key
}

resource "aws_eip" "natgw" {
  for_each = var.subnets_public
  vpc      = true

  tags = merge(var.tags_extra, {
    Name = "${var.name}-natgw-${each.value}"
  })
}

resource "aws_nat_gateway" "network" {
  for_each = var.subnets_public

  allocation_id = aws_eip.natgw[each.key].id
  subnet_id     = aws_subnet.public[each.key].id

  tags = merge(var.tags_extra, { Name = "${var.name}-${each.value}" })
}

Expected Behavior

Successful destruction.

Actual Behavior

When I perform a terraform plan -destroy and then terraform apply, we occasionally receive

Plan: 0 to add, 0 to change, 16 to destroy.
Releasing state lock. This may take a few moments...

aws_route_table_association.private["10.1.48.0/20"]: Destroying... [id=rtbassoc-08a9430b49fef6814]
aws_route_table_association.private["10.1.80.0/20"]: Destroying... [id=rtbassoc-09223d644a61fbfff]
aws_internet_gateway.network: Destroying... [id=igw-0ccc6a20eced9fa04]
aws_default_route_table.public: Destroying... [id=rtb-02489ab6ff6232a82]
aws_eip.public["10.1.0.0/20"]: Destroying... [id=eipalloc-08af63431a0a305e4]
aws_subnet.public["10.1.0.0/20"]: Destroying... [id=subnet-0ff957d3249ed8523]
aws_eip.public["10.1.16.0/20"]: Destroying... [id=eipalloc-01269b3ef1467b950]
aws_eip.natgw["10.1.16.0/20"]: Destroying... [id=eipalloc-0d2192f48c15cc81e]
aws_default_route_table.public: Destruction complete after 0s
aws_default_security_group.default: Destroying... [id=sg-0d46bbb1e82e827a2]
aws_eip.public["10.1.32.0/20"]: Destroying... [id=eipalloc-07aa6af7d40bed2b9]
aws_default_security_group.default: Destruction complete after 0s
aws_subnet.public["10.1.16.0/20"]: Destroying... [id=subnet-0d73efa43497e0a7d]
aws_route_table_association.private["10.1.64.0/20"]: Destroying... [id=rtbassoc-0b2c722a6c1548b47]
aws_route_table_association.private["10.1.48.0/20"]: Destruction complete after 0s
aws_eip.natgw["10.1.0.0/20"]: Destroying... [id=eipalloc-04d81d902b38f3d15]
aws_route_table_association.private["10.1.80.0/20"]: Destruction complete after 0s
aws_eip.natgw["10.1.32.0/20"]: Destroying... [id=eipalloc-0f5ff0f5383927ddb]
aws_route_table_association.private["10.1.64.0/20"]: Destruction complete after 0s
aws_subnet.public["10.1.32.0/20"]: Destroying... [id=subnet-057b64d49444c5b05]
aws_eip.natgw["10.1.16.0/20"]: Destruction complete after 0s
aws_eip.public["10.1.16.0/20"]: Destruction complete after 0s
aws_subnet.public["10.1.16.0/20"]: Destruction complete after 0s
aws_eip.natgw["10.1.0.0/20"]: Destruction complete after 0s
aws_eip.natgw["10.1.32.0/20"]: Destruction complete after 0s
aws_subnet.public["10.1.0.0/20"]: Destruction complete after 9s
aws_internet_gateway.network: Still destroying... [id=igw-0ccc6a20eced9fa04, 10s elapsed]
aws_subnet.public["10.1.32.0/20"]: Still destroying... [id=subnet-057b64d49444c5b05, 10s elapsed]
aws_internet_gateway.network: Destruction complete after 11s
aws_subnet.public["10.1.32.0/20"]: Destruction complete after 17s
aws_vpc.network: Destroying... [id=vpc-01e4bdea897c25d97]
aws_vpc.network: Destruction complete after 0s

Error: AuthFailure: You do not have permission to access the specified resource.
    status code: 400, request id: fb112f95-24b1-4bd3-94a6-c557b91ef7ee

Error: AuthFailure: You do not have permission to access the specified resource.
    status code: 400, request id: aa491005-ce5d-455c-a01d-0de2ba1f9743

I have filed a separate issue, https://github.com/hashicorp/terraform/issues/23734, since I cannot just re-plan/apply.

closed time in 2 months

fraenkel

issue comment terraform-providers/terraform-provider-aws

aws_nat_gateway does not always properly destroy itself

I will reopen this if the added dependency does not resolve the issue.

fraenkel

comment created time in 2 months

issue opened terraform-providers/terraform-provider-aws

aws_route created during apply and then again during the next plan


created time in 2 months

issue comment terraform-providers/terraform-provider-aws

aws_nat_gateway does not always properly destroy itself

I believe adding the dependency might clear things up. The problem is that this fails only 10% of the time, so it's hard to debug. I will add the dependency on the aws_internet_gateway and see if the problem no longer occurs.

fraenkel

comment created time in 2 months

issue opened terraform-providers/terraform-provider-aws

aws_nat_gateway does not always properly destroy itself


created time in 2 months

issue opened hashicorp/terraform

terraform plan -destroy fails after a partial destroy

Terraform Version

terraform 0.12.17+

Debug Output

While doing my initial terraform apply against a terraform plan -destroy, I hit the following error which is common.

Plan: 0 to add, 0 to change, 16 to destroy.
Releasing state lock. This may take a few moments...

aws_route_table_association.private["10.1.48.0/20"]: Destroying... [id=rtbassoc-08a9430b49fef6814]
aws_route_table_association.private["10.1.80.0/20"]: Destroying... [id=rtbassoc-09223d644a61fbfff]
aws_internet_gateway.network: Destroying... [id=igw-0ccc6a20eced9fa04]
aws_default_route_table.public: Destroying... [id=rtb-02489ab6ff6232a82]
aws_eip.public["10.1.0.0/20"]: Destroying... [id=eipalloc-08af63431a0a305e4]
aws_subnet.public["10.1.0.0/20"]: Destroying... [id=subnet-0ff957d3249ed8523]
aws_eip.public["10.1.16.0/20"]: Destroying... [id=eipalloc-01269b3ef1467b950]
aws_eip.natgw["10.1.16.0/20"]: Destroying... [id=eipalloc-0d2192f48c15cc81e]
aws_default_route_table.public: Destruction complete after 0s
aws_default_security_group.default: Destroying... [id=sg-0d46bbb1e82e827a2]
aws_eip.public["10.1.32.0/20"]: Destroying... [id=eipalloc-07aa6af7d40bed2b9]
aws_default_security_group.default: Destruction complete after 0s
aws_subnet.public["10.1.16.0/20"]: Destroying... [id=subnet-0d73efa43497e0a7d]
aws_route_table_association.private["10.1.64.0/20"]: Destroying... [id=rtbassoc-0b2c722a6c1548b47]
aws_route_table_association.private["10.1.48.0/20"]: Destruction complete after 0s
aws_eip.natgw["10.1.0.0/20"]: Destroying... [id=eipalloc-04d81d902b38f3d15]
aws_route_table_association.private["10.1.80.0/20"]: Destruction complete after 0s
aws_eip.natgw["10.1.32.0/20"]: Destroying... [id=eipalloc-0f5ff0f5383927ddb]
aws_route_table_association.private["10.1.64.0/20"]: Destruction complete after 0s
aws_subnet.public["10.1.32.0/20"]: Destroying... [id=subnet-057b64d49444c5b05]
aws_eip.natgw["10.1.16.0/20"]: Destruction complete after 0s
aws_eip.public["10.1.16.0/20"]: Destruction complete after 0s
aws_subnet.public["10.1.16.0/20"]: Destruction complete after 0s
aws_eip.natgw["10.1.0.0/20"]: Destruction complete after 0s
aws_eip.natgw["10.1.32.0/20"]: Destruction complete after 0s
aws_subnet.public["10.1.0.0/20"]: Destruction complete after 9s
aws_internet_gateway.network: Still destroying... [id=igw-0ccc6a20eced9fa04, 10s elapsed]
aws_subnet.public["10.1.32.0/20"]: Still destroying... [id=subnet-057b64d49444c5b05, 10s elapsed]
aws_internet_gateway.network: Destruction complete after 11s
aws_subnet.public["10.1.32.0/20"]: Destruction complete after 17s
aws_vpc.network: Destroying... [id=vpc-01e4bdea897c25d97]
aws_vpc.network: Destruction complete after 0s

Error: AuthFailure: You do not have permission to access the specified resource.
    status code: 400, request id: fb112f95-24b1-4bd3-94a6-c557b91ef7ee

Error: AuthFailure: You do not have permission to access the specified resource.
    status code: 400, request id: aa491005-ce5d-455c-a01d-0de2ba1f9743

When I attempt to re-plan to clean up, I get this failure:

Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

aws_vpc.network: Refreshing state... [id=vpc-01e4bdea897c25d97]
data.aws_region.current: Refreshing state...
aws_eip.natgw["10.1.0.0/20"]: Refreshing state... [id=eipalloc-04d81d902b38f3d15]
aws_eip.natgw["10.1.32.0/20"]: Refreshing state... [id=eipalloc-0f5ff0f5383927ddb]
aws_internet_gateway.network: Refreshing state... [id=igw-0ccc6a20eced9fa04]

Error: Invalid index

  on network.tf line 77, in resource "aws_nat_gateway" "network":
  77:   allocation_id = aws_eip.natgw[each.key].id
    |----------------
    | aws_eip.natgw is object with 1 attribute "10.1.16.0/20"
    | each.key is "10.1.32.0/20"

The given key does not identify an element in this collection value.

Error: Invalid index

  on network.tf line 77, in resource "aws_nat_gateway" "network":
  77:   allocation_id = aws_eip.natgw[each.key].id
    |----------------
    | aws_eip.natgw is object with 1 attribute "10.1.16.0/20"
    | each.key is "10.1.0.0/20"

The given key does not identify an element in this collection value.

Releasing state lock. This may take a few moments...

The resource in question looks like

resource "aws_nat_gateway" "network" {
  for_each = length(var.subnets_private) == 0 ? {} : var.subnets_public

  allocation_id = aws_eip.natgw[each.key].id
  subnet_id     = aws_subnet.public[each.key].id

  tags = merge(var.tags_extra, { Name = "${var.name}-${each.value}" })
}

created time in 2 months

issue comment golang/go

net/http: HTTP/2 with MaxConnsPerHost hangs or crashes

The fix only went into 1.14. The powers that be would have to decide if it gets backported.

rhysh

comment created time in 3 months

issue comment golang/go

net/http: HTTP/2 with MaxConnsPerHost hangs or crashes

Can you also provide the stack trace? And the potential scenario if different than reported. A new issue would be best.

rhysh

comment created time in 3 months

issue comment golang/go

net/http: ReadTimeout is not honored when ReadHeaderTimeout > ReadTimeout

Sorry, I should have been clearer. I don't think the current behavior is valuable given what I would have expected the proper behavior to be.

If there is no ReadHeaderTimeout, the request timeout is exactly the read timeout. If a ReadHeaderTimeout is set, the headers must be read within that timeout window, but the request timeout becomes the remaining amount given the sum of ReadHeaderTimeout + ReadTimeout. Personally, I would have preferred that ReadHeaderTimeout, when set, only govern the header part and let ReadTimeout cover the rest. But as it stands, the remainder is carried over.

What you see above seems correct to me. The header timeout expired and so does the request. For a GET request, having this split timeout doesn't make much sense given there is usually no body. It would have been useful if the timeouts were disconnected so one could guarantee headers are read within time X and bodies within time Y.

I guess if one looks at the current ReadHeaderTimeout usage, we could determine if changing the behavior affects people and whether it would be for the better.

james-johnston-thumbtack

comment created time in 3 months

issue commentgolang/go

net/http: ReadTimeout is not honored when ReadHeaderTimeout > ReadTimeout

Changing this behavior could break applications. The current behavior is requestTimeout = readHeaderTimeout + readTimeout. It would be safer to just adjust the documentation to match the behavior.

james-johnston-thumbtack

comment created time in 3 months

delete branch fraenkel/pipeline

delete branch : sidecar_status

delete time in 3 months

push eventfraenkel/pipeline

Michael Fraenkel

commit sha 55debdb891f9e98c3e586480ee7a6594592042c3

Sidecar container names prefixed with sidecar Prefix all sidecar container names with 'sidecar-' Counting, stopping and status collection of sidecar status will all look at the container name to determine if the operation applies.

view details

push time in 3 months

push eventfraenkel/pipeline

Michael Fraenkel

commit sha 21b7535f6909cb404f4225cf73dd04941c43f909

Sidecar container names prefixed with sidecar Prefix all sidecar container names with 'sidecar-' Counting, stopping and status collection of sidecar status will all look at the container name to determine if the operation applies.

view details

push time in 3 months

push eventfraenkel/pipeline

Michael Fraenkel

commit sha bb7d0b71c4a077ba16c25826531aa057c9778385

TaskRunStatus includes sidecar status Record the sidecar name and image id for posterity. Fixes #1511

view details

push time in 3 months

Pull request review commenttektoncd/pipeline

TaskRunStatus includes sidecar status

 func UpdateStatusFromPod(taskRun *v1alpha1.TaskRun, pod *corev1.Pod, resourceLis
 				ContainerName:  s.Name,
 				ImageID:        s.ImageID,
 			})
+		} else {
+			sidecars = append(taskRun.Status.Sidecars, v1alpha1.SidecarState{
+				Name:    s.Name,
+				ImageID: s.ImageID,
+			})
 		}
 	}
+	if len(sidecars) > 0 {

I did that first and didn't want to update all the tests, but I switched to be consistent.

fraenkel

comment created time in 3 months

push eventfraenkel/pipeline

Michael Fraenkel

commit sha 15387de050271943f6201898b6b9a8a1d291c457

TaskRunStatus includes sidecar status Record the sidecar name and image id for posterity. Fixes #1511

view details

push time in 3 months

Pull request review commenttektoncd/pipeline

TaskRunStatus includes sidecar status

 func UpdateStatusFromPod(taskRun *v1alpha1.TaskRun, pod *corev1.Pod, resourceLis 				ContainerName:  s.Name, 				ImageID:        s.ImageID, 			})+		} else {

I will do it in a separate commit. If it's on the larger side, I will just split it off into a separate PR.

fraenkel

comment created time in 3 months

PR opened tektoncd/pipeline

TaskRunStatus includes sidecar status

Changes

Record the sidecar name and image id for posterity.

Fixes #1511

I didn't include the container name because it is the same as the name.

Submitter Checklist

These are the criteria that every PR should meet, please check them off as you review them:

See the contribution guide for more details.

Reviewer Notes

If API changes are included, additive changes must be approved by at least two OWNERS and backwards incompatible changes must be approved by more than 50% of the OWNERS, and they must first be added in a backwards compatible way.

Release Notes

The status of a task run includes the image ids of all sidecars.
+25 -1

0 comments

3 changed files

pr created time in 3 months

create branchfraenkel/pipeline

branch : sidecar_status

created branch time in 3 months

issue commentgolang/go

proposal: net: add BufferedPipe (buffered Pipe)

Additional ones in http2: x/net/http2: #33425 #32388

iangudger

comment created time in 4 months

issue commentgolang/go

x/net/http2: Blocked Write on single connection causes all future calls to block indefinitely

The only solution I can think of right now is to build your own client pool that tracks outstanding requests per client. Don't share the transport.

prashantv

comment created time in 4 months

issue commentgolang/go

x/net/http2: Blocked Write on single connection causes all future calls to block indefinitely

And eventually the channel can be blocked. You can't just short-circuit idleStateLocked; it actually computes the answer, and returning an incorrect value will break other guarantees. As I already stated, my patch attempted to break up the read/write mutex and did so up until I hit the one place where control needs to transfer sequentially from the read lock to the write lock. I haven't determined a good way to accomplish this once the write side is blocked. Any solution must fix the write-blocking issue, which then allows us to disentangle the read/write mutex overlaps.

prashantv

comment created time in 4 months

issue commentgolang/go

x/net/http2: Blocked Write on single connection causes all future calls to block indefinitely

Here is a simple race that you have: if two routines both perform a Lock(), locked = 1 and one blocks on the mutex. When Unlock() is invoked, the second routine will execute, but locked = 0.

prashantv

comment created time in 4 months

issue commentgolang/go

x/net/http2: Blocked Write on single connection causes all future calls to block indefinitely

@prashantv While your solution fixes idleState, it breaks everything else that relies on the read mutex. The solution is dictated by the problem we are trying to solve. Almost all the reports concern the Write blocking, which is difficult to solve. I can easily add one more mutex to my code to disconnect the read/write mutex during new connections, and that might solve a majority of the cases, but I know it won't solve the write issue everyone keeps reporting.

prashantv

comment created time in 4 months

issue commentgolang/go

x/net/http2: Blocked Write on single connection causes all future calls to block indefinitely

The mentioned change doesn't fix anything because it's incomplete. All of these problems are the same, with no good solution, because eventually the write lock becomes the issue that backs up into mu, given the relationship between them.

prashantv

comment created time in 4 months

issue openedjackc/pgx

pgxpool: rows.CommandTag() goes into an infinite loop

Looks like there was a simple typo.

github.com/jackc/pgx/v4/pgxpool.(*poolRows).CommandTag(0xc0004849d8, 0x0, 0x0, 0x0)
	go/pkg/mod/github.com/jackc/pgx/v4@v4.0.1/pgxpool/rows.go:50 +0x2b
github.com/jackc/pgx/v4/pgxpool.(*poolRows).CommandTag(0xc0004849d8, 0x0, 0x0, 0x0)
	go/pkg/mod/github.com/jackc/pgx/v4@v4.0.1/pgxpool/rows.go:50 +0x2b

The code should be rows.r.CommandTag()

created time in 4 months

issue commentgolang/go

net/http: HTTP/2 with MaxConnsPerHost hangs or crashes

The http2 server imposes a default limit of 250 concurrent streams. Once we reach that number, which does happen when we stampede with 300 requests, there is a point where a new connection is created.

rhysh

comment created time in 4 months

issue commentgolang/go

net/http: HTTP/2 with MaxConnsPerHost hangs or crashes

Part of the issue is the connection coordination between http and http2. When using 300 clients, the first 50+ compete to create the connection on the http side before the http2 side is aware of it. There is some glitch (still investigating) where, once the http2 side becomes aware, a second connection is actually created. By changing the test case to first do a single client request, the failure rate is greatly reduced, but the problem still occurs. However, it shows that we will always reuse the first connection but may dial/TLS-handshake a second. I don't think this will ever be perfect, but I would at least like to better understand what is causing the second connection. We may need to relax the test case slightly.

rhysh

comment created time in 4 months

issue commentgolang/go

net/http: HTTP/2 with MaxConnsPerHost hangs or crashes

All I have determined at this point is that the http2 side starts sending back http.http2noCachedConnError after some time. The test passes when that doesn't occur, which is not very often.

rhysh

comment created time in 4 months

issue commentgolang/go

net/http: HTTP/2 with MaxConnsPerHost hangs or crashes

Bah. A simple tweak to our existing test for MaxConnsPerHost has uncovered yet another issue. Just bump the loop to 300 and tip will hit the same issue reported here. My fix resolves it, but now I have:

--- FAIL: TestTransportMaxConnsPerHost (0.06s)
    transport_test.go:658: round 1: too many dials (http2): 2 != 1
    transport_test.go:661: round 1: too many get connections (http2): 2 != 1
    transport_test.go:664: round 1: too many tls handshakes (http2): 2 != 1
rhysh

comment created time in 4 months

issue commentgolang/go

net/http: HTTP/2 with MaxConnsPerHost hangs or crashes

A fix is coming shortly. We cannot blindly decrement the conn count; we must only decrement it if we have actually removed the idle connection.

rhysh

comment created time in 4 months

issue commentgolang/go

net/http: HTTP/2 with MaxConnsPerHost hangs or crashes

As written, the test case has a data race. I changed the test case slightly:

  1. fixing the data race (transport.MaxConnsPerHost)
  2. using ForceAttemptHTTP2 to simplify the setup
  3. counting successful finishes

The failure still occurs, but I can get a few successful runs. There is obviously some bookkeeping issue.

It is the same for 1.13.3 and tip (46aa8354fa)

package issue34941

import (
	"context"
	"crypto/tls"
	"net/http"
	"net/http/httptest"
	"sync"
	"sync/atomic"
	"testing"
	"time"
)

func TestMaxConns(t *testing.T) {
	totalRequests := 300

	allow := make(chan struct{})
	var (
		starts   int64
		finishes int64
	)
	h := func(w http.ResponseWriter, r *http.Request) {
		if !r.ProtoAtLeast(2, 0) {
			t.Errorf("Request is not http/2: %q", r.Proto)
			return
		}
		atomic.AddInt64(&starts, 1)
		<-allow
	}

	s := httptest.NewUnstartedServer(http.HandlerFunc(h))
	s.TLS = &tls.Config{
		NextProtos: []string{"h2"},
	}
	s.StartTLS()
	defer s.Close()

	transport := s.Client().Transport.(*http.Transport)
	// clientConfig := transport.TLSClientConfig
	// transport.TLSClientConfig = nil
	transport.MaxConnsPerHost = 1
	transport.ForceAttemptHTTP2 = true

	// make a request to trigger HTTP/2 autoconfiguration
	// resp, err := s.Client().Get(s.URL)
	// if err == nil {
	// 	resp.Body.Close()
	// }
	// now allow the client to connect to the ad-hoc test server
	// transport.TLSClientConfig.RootCAs = clientConfig.RootCAs

	ctx := context.Background()
	ctx, cancel := context.WithCancel(ctx)
	defer cancel()

	var wg sync.WaitGroup
	for i := 0; i < totalRequests; i++ {
		req, err := http.NewRequest("GET", s.URL, nil)
		if err != nil {
			t.Fatalf("NewRequest: %s", err)
		}
		wg.Add(1)
		go func() {
			defer wg.Done()
			ctx, cancel := context.WithCancel(ctx)
			defer cancel()
			req = req.WithContext(ctx)
			resp, err := s.Client().Do(req)
			if err != nil {
				return
			}
			resp.Body.Close()
			atomic.AddInt64(&finishes, 1)
		}()
	}

	for i := 0; i < 10; i++ {
		if i == 5 {
			close(allow)
		}
		time.Sleep(100 * time.Millisecond)
		t.Logf("starts=%d finishes=%d", atomic.LoadInt64(&starts), atomic.LoadInt64(&finishes))
	}

	if have, want := atomic.LoadInt64(&starts), int64(totalRequests); have != want {
		t.Errorf("HTTP/2 requests started: %d != %d", have, want)
	}
	if have, want := atomic.LoadInt64(&finishes), int64(totalRequests); have != want {
		t.Errorf("HTTP/2 requests completed: %d != %d", have, want)
	}
}
rhysh

comment created time in 4 months

fork fraenkel/examples

Apache Kafka and Confluent Platform examples and demos

fork in 4 months

push eventfraenkel/pipeline

Will Plusnick

commit sha 037f6b7c6790c19a8338a1a21096ebec8ae696fa

Add docker for desktop and minikube instructions This change will add instructions for creating a local development environment using both minikube and Kubernetes on Docker for Desktop. This change will make it easier for those without a cloud account to contribute tekton pipeline.

view details

Jason Hall

commit sha 3407dc6efbbfe854e2a8e06bc904a6771c5c64ca

Include vendored source in release-built images This adds logic to the nightly release Task that targz's up everything in vendor/ and includes it in ko-built container images. Some of our dependencies' licenses require their source to be included in distributed artifacts (like container images). Once we've determined this works fine for nightly releases, I'll copy this to publish.yaml so it's also done for official releases.

view details

Jason Hall

commit sha c5dfdd873831d123d8c6c2552b4a87101fbb48e1

Fix line breaks in PR template

view details

Andrea Frittoli

commit sha 8515c87838e6d22ee3f0e9a264732719d1360eb3

Tekton 0.3.1 does not support $() syntax Tekton 0.3.1 is used for release. It does not support the $() syntax so ${} should be used everywhere.

view details

cappyzawa

commit sha cd5b973826c353b3c917cf8300a771a55e616dae

fix export comment Signed-off-by: cappyzawa <cappyzawa@yahoo.ne.jp>

view details

pengli

commit sha 04dfa5bc3232fcca3a7a4d12b502aa329544c4b7

Correct pod watching in Taskrun controller Should use `cache.FilteringResourceEventHandler` rather than `cache.ResourceEventHandlerFuncs`. The `pod` with incorrect owner will not go into the queue.

view details

Jason Hall

commit sha c3db3487127967207dbc7b7b403a33a999297990

Actually fix PR template line breaks

view details

Jason Hall

commit sha 3873c3ce5223f54cdd097215ebf71d10aebd7c37

Clean up YAML tests - Use generateName where appropriate in TaskRuns; this makes it easier to re-run them multiple times, and we should probably recommend this more widely. - Create YAML resources instead of applying them (this is required to support generateName) - Rename files to remove unnecessary "taskrun-" prefix. - Rename TaskRuns to remove unnecessary "test-" prefix, and in general to match the name of the file -- this should help identifying the file that contains a failed TaskRun.

view details

Dan Lorenc

commit sha e83fb4c0349023e8681c30411b6876ca2f178364

Allow PipelineResource implementations to modify the entire Pod spec. This change simplifies the interface by removing the GetUpload/Download container and volume methods and replaces it with a more generic "modifier" system. This can be cleaned up a bit more still, and is intended for an early review at this point.

view details

Dan Lorenc

commit sha 0b29a307581872d3c92cfd80367d6c56adc4979b

This commit fixes some style issues noticed after #1345 was merged. This adds missing docstrings to the new interface methods and changes the way TaskSpec is passed so it can't be mutated.

view details

Priti Desai

commit sha bcbba978e8ccc0f52b673a7501bbf6e48bf38ca9

Adding support to enable resourceSpec Its now possible to embed the resource specifications into Pipeline Run using resourceSpec, for example: apiVersion: tekton.dev/v1alpha1 kind: PipelineRun metadata: name: pipelinerun-echo-greetings spec: resources: - name: git-repo resourceRef: name: my-git-repo Can be specified as: apiVersion: tekton.dev/v1alpha1 kind: PipelineRun name: pipelinerun-echo-greetings spec: resources: - name: git-repo resourceSpec: type: git params: - name: url value: https://github.com/myrepo/myrepo.git

view details

Jason Hall

commit sha 40e340f2d0a6f5476869b33c62ca032ed61de06a

Use Tekton's nightly-built build-base image Apparently the knative-nightly build-base image hasn't been built since February?!

view details

Dan Lorenc

commit sha 4fc62318d5370f8e675ae48ec9682e4d53335354

Enable the "gosec" linter for CI, and fix the one issue in our code. The "issue" is actually a false positive, so it is fixed by adding an annotation.

view details

Chmouel Boudjnah

commit sha 2bf801ec924389cf6ee06a97a38ea93cda812a05

Avoid cases when comparing in TestGitPipelineRun Since https://github.com/tektoncd/pipeline/commit/40e340f2d0a6f5476869b33c62ca032ed61de06a sometime git have `could` and some git version has `Could` so let's not worry about this. Signed-off-by: Chmouel Boudjnah <chmouel@redhat.com>

view details

Dan Lorenc

commit sha 4b759111b41df7e9cbfd0e23c50998cd6bebcaf0

Enable "gocritic" in CI, and fix associated errors. "gocritic" is "the most opinionated go linter". I was expecting to be terrified by the number of errors it would report when run, but it was surprisingly reasonable.

view details

Dan Lorenc

commit sha 52bc0037d31974cae452518cb3ba3d18e83a18fc

Add support for specifying "0" as no-timeout for PipelineRuns. This was already done in #1040 for TaskRuns, but PipelineRuns seem to have been missed. This fixes #1303.

view details

16yuki0702

commit sha 73bf89bb6bb335d8b720d2c764bba21d8165301e

Add checking insecure flag when creating pipeline resources

view details

Christie Wilson

commit sha 0e9066bacb6fb087a3a3b43492877c94420d03c0

Resolve all PipelineResources first before continuing As part of #1184 I need to call `GetSetup` on all PipelineResources early on in PipelineRun execution. Since PipelineRuns declare all their resource up front, I wanted to be able to resolve all of them at once, then call `GetSetup` on all of them. Also, as Pipelines got more complex (we added Conditions) it turned out we were retrieving the resources in a few different places. Also in #1324 @pritidesai is making it so that these Resources can be provided by spec. By resolving all of this up front at once, we can simplify the logic later on. And you can see in this commit that we are able to reduce the responsibilities of ResolvePipelineRun a bit too!

view details

Christie Wilson

commit sha 73ba02be1b3f3c4eada8ae515de579ef09db2e6e

Use the same logic to resolve spec vs ref in Pipelines + Tasks In #1324 we updated PipelineRuns to allow for embedding ResourceSpecs in PipelineRuns. This commit makes it so that the logic for resolving (i.e. deciding if PipelineResources are specified by Spec or Ref) is shared by PipelineRuns + TaskRuns. This is done by making it so that the "binding" uses the same type underneath. The only reason they can't be the exact same type is that TaskRuns additionally need the "path" attribute, which is actually only used for PVC copying, which will be removed in #1284, and then we should be able to remove paths entirely and the type can be the same. Also added some additional comments around the use of `SelfLink`, and made sure it was well covered in the reconciler test.

view details

Vincent Demeester

commit sha a510d489a6a8ff7639d3634dbb9b81115d912385

github.com/Azure/azure-sdk-for-go: v21.4.0 -> v33.2.0 Additionnal updates… github.com/Azure/go-autorest: v11.1.2 -> v13.0.1 Signed-off-by: Vincent Demeester <vdemeest@redhat.com>

view details

push time in 4 months

delete branch fraenkel/pipeline

delete branch : serviceAcctName

delete time in 4 months

pull request commenttektoncd/pipeline

ServiceAccountName replaces ServiceAccount

Looks like a network issue:

I1008 15:06:29.351]         {"level":"error","ts":1570547180.7409866,"logger":"fallback-logger","caller":"git/git.go:35","msg":"Error running git [fetch --depth=1 --recurse-submodules=yes origin c15aced]: exit status 128\nfatal: unable to access 'https://github.com/tektoncd/pipeline/': Could not resolve host: github.com\n","stacktrace":"github.com/tektoncd/pipeline/pkg/git.run\n\t/go/src/github.com/tektoncd/pipeline/pkg/git/git.go:35\ngithub.com/tektoncd/pipeline/pkg/git.Fetch\n\t/go/src/github.com/tektoncd/pipeline/pkg/git/git.go:86\nmain.main\n\t/go/src/github.com/tektoncd/pipeline/cmd/git-init/main.go:36\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:201"}
fraenkel

comment created time in 4 months

pull request commentVSCodeVim/Vim

<C-r> <C-w> (insert word under cursor) in search/command line

@J-Fields Thanks for the hint. I think I have replicated the vim behavior as best I could.

fraenkel

comment created time in 4 months

push eventfraenkel/Vim

Michael Fraenkel

commit sha 09a919cdad9ca4fdd00fb7311fd51f2bf9c91641

<C-r> <C-w> (insert word under cursor) in search/command line fixes #4102

view details

push time in 4 months

push eventfraenkel/Vim

hetmankp

commit sha 58945cd6f59ab1d6279738c2d5fca22a29a4bade

Update CONTRIBUTING.md after webpack bundling addition (#4128) With the addition of Webpack bundling (issue #3127, commit 1f80b2d4c) the instructions for how to run tests were no longer working.

view details

Mateusz Paprocki

commit sha 0350dad97a7b3dba061afe798d9389a50adabb19

Improve support for :tabm[ove] (#3960) - Adds support for :tabm +[N] and :tabm -[N] - Allows for N in :tabm [N] have multiple digits - Adds :tabmove alias Fixes #3959

view details

Michael Fraenkel

commit sha abf46c2b92e1f4d9bdce3f60ef5a488ec315f327

Merge branch 'master' into insert_word

view details

push time in 4 months

push eventfraenkel/pipeline

akihikokuroda

commit sha a5220c1204e798e0d4d924239a40f107bdf99aed

Some of the task and pipeline names had capital letters that were invalid

view details

akihikokuroda

commit sha 477ae8e7ac194f6cf041aa88dd2e9f3bd9e35e0c

kubectl apply not work for examples with the genereateName

view details

Mark Nuttall

commit sha 151f5da71e49da385643b9e6f8b46efd69e02a96

Remove Docker Edge requirement from tutorial

view details

Vincent Demeester

commit sha 57bb618404032e5cbf1013307071b699b96a6c31

Move Images struct to pkg/api/pipeline 🏃 This allows to use that struct from other package more safely, aka without depending on reconciler package(s). Signed-off-by: Vincent Demeester <vdemeest@redhat.com>

view details

Vincent Demeester

commit sha 1f0e642d3e01de6c7882ccd739d7c239eefee4a8

Move gitImage from pkg/…/git-resource package to cmd/controller This is part of a set of changes to make sure we don't define CLI flags in our `pkg/…` packages… so that importing packages from there do not pollute the cli flags. Signed-off-by: Vincent Demeester <vdemeest@redhat.com>

view details

Vincent Demeester

commit sha 5c789e080459ba49608cf3afb18f51d2818389c2

Move credsImage from pkg/…/resources package to cmd/controller This is part of a set of changes to make sure we don't define CLI flags in our `pkg/…` packages… so that importing packages from there do not pollute the cli flags. Signed-off-by: Vincent Demeester <vdemeest@redhat.com>

view details

Vincent Demeester

commit sha bb9f59e81218305df42a92fe2f86a3edd482baff

Move kubeconfigWriterImage from pkg/…/v1alpha1 package to cmd/controller This is part of a set of changes to make sure we don't define CLI flags in our `pkg/…` packages… so that importing packages from there do not pollute the cli flags. Signed-off-by: Vincent Demeester <vdemeest@redhat.com>

view details

Vincent Demeester

commit sha 891b5dea2b6d2a8867f0ece431549c8ede66b539

Move bash-noop-image from pkg/…/v1alpha1 package to cmd/controller This is part of a set of changes to make sure we don't define CLI flags in our `pkg/…` packages… so that importing packages from there do not pollute the cli flags. Signed-off-by: Vincent Demeester <vdemeest@redhat.com>

view details

Vincent Demeester

commit sha 6b69a9891235b0691eec845396e554aeca06744b

Move gsutil-image from pkg/…/v1alpha1 package to cmd/controller This is part of a set of changes to make sure we don't define CLI flags in our `pkg/…` packages… so that importing packages from there do not pollute the cli flags. Signed-off-by: Vincent Demeester <vdemeest@redhat.com>

view details

Vincent Demeester

commit sha 094b3301382255748708aaddbb30a0c0b2d6b4a6

Move build-gcs-fetcher-image from pkg/…/v1alpha1 package to cmd/controller This is part of a set of changes to make sure we don't define CLI flags in our `pkg/…` packages… so that importing packages from there do not pollute the cli flags. Signed-off-by: Vincent Demeester <vdemeest@redhat.com>

view details

Vincent Demeester

commit sha 791a6436a7db66ace0f1a1f247f51feec6bac079

Move pr-image from pkg/…/v1alpha1 package to cmd/controller This is part of a set of changes to make sure we don't define CLI flags in our `pkg/…` packages… so that importing packages from there do not pollute the cli flags. Signed-off-by: Vincent Demeester <vdemeest@redhat.com>

view details

Vincent Demeester

commit sha 7962731ab7c50ebc6ca465190db9e1191ffd7fc2

Move imagedigest-exporter-image from pkg/…/resources package to cmd/controller This is part of a set of changes to make sure we don't define CLI flags in our `pkg/…` packages… so that importing packages from there do not pollute the cli flags. Signed-off-by: Vincent Demeester <vdemeest@redhat.com>

view details

Michael Fraenkel

commit sha 463c7cfa64b1ac4cc844fbfd8e4f9a0653b97a44

ServiceAccountName(s) replaces ServiceAccount(s) Following in the k8s footsteps, deprecate ServiceAccount and ServiceAccounts, replace them with ServiceAccountName and ServiceAccountNames respectively. ServiceAccountName and ServiceAccountNames will always take precedence over ServiceAccount and ServiceAccounts respectively. If ServiceAccountName is not set, the value provided by ServiceAccount will be used instead. ServiceAccountNames will always take precedence over ServiceAccounts.

view details

push time in 4 months

push eventfraenkel/Vim

Michael Fraenkel

commit sha 3760e11986f4a70894ed8fbb591c3be937bc89da

<C-r> <C-w> (insert word under cursor) in search/command line fixes #4102

view details

push time in 4 months

PR opened VSCodeVim/Vim

<C-r> <C-w> (insert word under cursor) in search/command line

What this PR does / why we need it: <C-r> <C-w> inserts the word under the cursor in the search or command line.

Which issue(s) this PR fixes: fixes #4102

Special notes for your reviewer:

+37 -0

0 comments

2 changed files

pr created time in 4 months

push eventfraenkel/Vim

Michael Fraenkel

commit sha 5f819bd4322411b37cc8f0b0dbc8c43d0553c2b6

<C-r> <C-w> (insert word under cursor) in search/command line fixes #4102

view details

push time in 4 months

push eventfraenkel/Vim

Michael Fraenkel

commit sha 4b4bae3beb3bf1c7ea6a1612c661d6a87d6fb0d1

<C-r> <C-w> (insert word under cursor) in search/command line Fixes #4102

view details

push time in 4 months

create branchfraenkel/Vim

branch : insert_word

created branch time in 4 months

fork fraenkel/Vim

:star: Vim for Visual Studio Code

http://aka.ms/vscodevim

fork in 4 months


pull request commenttektoncd/pipeline

ServiceAccountName replaces ServiceAccount

I don't see why the coverage report for pipelinerun_types.go has dropped so much but it looks like GetServiceAccountName could use some unit tests. Apart from that I think this PR is ready to go.

The coverage accounting is a bit misleading. These methods are fully tested via test cases in other packages, but the default coverage report only counts coverage within the package. There are tools to accumulate coverage across packages for a more accurate picture.

fraenkel

comment created time in 4 months

Pull request review commenttektoncd/pipeline

ServiceAccountName replaces ServiceAccount

 type PipelineRunSpec struct {
 	// Params is a list of parameter names and values.
 	Params []Param `json:"params,omitempty"`
 	// +optional
-	ServiceAccount string `json:"serviceAccount"`
+	ServiceAccountName string `json:"serviceAccountName,omitempty"`
+	// DeprecatedServiceAccount is a depreciated alias for ServiceAccountName.
+	// Deprecated: Use serviceAccountName instead.
 	// +optional
-	ServiceAccounts []PipelineRunSpecServiceAccount `json:"serviceAccounts,omitempty"`
+	DeprecatedServiceAccount string `json:"serviceAccount,omitempty"`
+	// +optional
+	DeprecatedServiceAccounts []DeprecatedPipelineRunSpecServiceAccount `json:"serviceAccounts,omitempty"`
+	ServiceAccountNames       []PipelineRunSpecServiceAccountName       `json:"serviceAccountNames,omitempty"`

yup.

fraenkel

comment created time in 4 months

Pull request review commenttektoncd/pipeline

ServiceAccountName replaces ServiceAccount

 following fields:

   - [`resources`](#resources) - Specifies which
     [`PipelineResources`](resources.md) to use for this `PipelineRun`.
-  - [`serviceAccount`](#service-account) - Specifies a `ServiceAccount` resource
+  - [`serviceAccountName`](#service-account) - Specifies a `ServiceAccount` resource
     object that enables your build to run with the defined authentication
+<<<<<<< HEAD
     information. When a `ServiceAccount` isn't specified, the `default-service-account`
     specified in the configmap - config-defaults will be applied.
   - [`serviceAccounts`](#service-accounts) - Specifies a list of `ServiceAccount`
+=======
+    information.
+  - [`serviceAccountNames`](#service-accounts) - Specifies a list of `ServiceAccountName`
+>>>>>>>  ServiceAccountNames replaces ServiceAccounts

fixed

fraenkel

comment created time in 4 months

push eventfraenkel/pipeline

Michael Fraenkel

commit sha 086de6ba2e1ef3123e8e51f9ac1e1a2e4b38fe6b

ServiceAccountName(s) replaces ServiceAccount(s) Following in the k8s footsteps, deprecate ServiceAccount and ServiceAccounts, replace them with ServiceAccountName and ServiceAccountNames respectively. ServiceAccountName and ServiceAccountNames will always take precedence over ServiceAccount and ServiceAccounts respectively. If ServiceAccountName is not set, the value provided by ServiceAccount will be used instead. ServiceAccountNames will always take precedence over ServiceAccounts.

view details

push time in 4 months

push eventfraenkel/pipeline

Jason Hall

commit sha 9ff6aca4b7aa1f573fb9ff2e0d7f868fa9467417

Update Deployments to use the apps/v1 API version The previous version, apps/v1beta1, is deprecated and will be removed in Kubernetes 1.16

view details

Andrea Frittoli

commit sha 8eed0e7f5b2b00216d0b862a029684716dd54ea7

Add versioned links to docs and examples for v0.7.0

view details

Andrea Frittoli

commit sha 44a6887bdc1bc7936ab80f086d6e2668cd806f69

Small fixes to the release guide I've been running through the release guide, and fixed a few minor issues.

view details

Andrea Frittoli

commit sha 8696190a57fa355b4a14eb72c114ffdb2039898e

Fix release pipeline to handle #1122 Since #1122, we do not treat outputs that were inputs as well in any special way, meaning that the output folder will be empty unless we copy the input to the output. Fix the release pipeline to handle that.

view details

Will Plusnick

commit sha 037f6b7c6790c19a8338a1a21096ebec8ae696fa

Add docker for desktop and minikube instructions This change will add instructions for creating a local development environment using both minikube and Kubernetes on Docker for Desktop. This change will make it easier for those without a cloud account to contribute tekton pipeline.

view details

Jason Hall

commit sha 3407dc6efbbfe854e2a8e06bc904a6771c5c64ca

Include vendored source in release-built images This adds logic to the nightly release Task that targz's up everything in vendor/ and includes it in ko-built container images. Some of our dependencies' licenses require their source to be included in distributed artifacts (like container images). Once we've determined this works fine for nightly releases, I'll copy this to publish.yaml so it's also done for official releases.

view details

Jason Hall

commit sha c5dfdd873831d123d8c6c2552b4a87101fbb48e1

Fix line breaks in PR template

view details

Andrea Frittoli

commit sha 8515c87838e6d22ee3f0e9a264732719d1360eb3

Tekton 0.3.1 does not support $() syntax Tekton 0.3.1 is used for release. It does not support the $() syntax so ${} should be used everywhere.

view details

cappyzawa

commit sha cd5b973826c353b3c917cf8300a771a55e616dae

fix export comment Signed-off-by: cappyzawa <cappyzawa@yahoo.ne.jp>

view details

pengli

commit sha 04dfa5bc3232fcca3a7a4d12b502aa329544c4b7

Correct pod watching in Taskrun controller Should use `cache.FilteringResourceEventHandler` rather than `cache.ResourceEventHandlerFuncs`. The `pod` with incorrect owner will not go into the queue.

view details

Jason Hall

commit sha c3db3487127967207dbc7b7b403a33a999297990

Actually fix PR template line breaks

view details

Jason Hall

commit sha 3873c3ce5223f54cdd097215ebf71d10aebd7c37

Clean up YAML tests - Use generateName where appropriate in TaskRuns; this makes it easier to re-run them multiple times, and we should probably recommend this more widely. - Create YAML resources instead of applying them (this is required to support generateName) - Rename files to remove unnecessary "taskrun-" prefix. - Rename TaskRuns to remove unnecessary "test-" prefix, and in general to match the name of the file -- this should help identifying the file that contains a failed TaskRun.

view details

Dan Lorenc

commit sha e83fb4c0349023e8681c30411b6876ca2f178364

Allow PipelineResource implementations to modify the entire Pod spec. This change simplifies the interface by removing the GetUpload/Download container and volume methods and replaces it with a more generic "modifier" system. This can be cleaned up a bit more still, and is intended for an early review at this point.

view details

Dan Lorenc

commit sha 0b29a307581872d3c92cfd80367d6c56adc4979b

This commit fixes some style issues noticed after #1345 was merged. This adds missing docstrings to the new interface methods and changes the way TaskSpec is passed so it can't be mutated.

view details

Priti Desai

commit sha bcbba978e8ccc0f52b673a7501bbf6e48bf38ca9

Adding support to enable resourceSpec It's now possible to embed the resource specifications into a PipelineRun using resourceSpec. For example:

    apiVersion: tekton.dev/v1alpha1
    kind: PipelineRun
    metadata:
      name: pipelinerun-echo-greetings
    spec:
      resources:
      - name: git-repo
        resourceRef:
          name: my-git-repo

Can be specified as:

    apiVersion: tekton.dev/v1alpha1
    kind: PipelineRun
    metadata:
      name: pipelinerun-echo-greetings
    spec:
      resources:
      - name: git-repo
        resourceSpec:
          type: git
          params:
          - name: url
            value: https://github.com/myrepo/myrepo.git

view details

Jason Hall

commit sha 40e340f2d0a6f5476869b33c62ca032ed61de06a

Use Tekton's nightly-built build-base image Apparently the knative-nightly build-base image hasn't been built since February?!

view details

Dan Lorenc

commit sha 4fc62318d5370f8e675ae48ec9682e4d53335354

Enable the "gosec" linter for CI, and fix the one issue in our code. The "issue" is actually a false positive, so it is fixed by adding an annotation.

view details

Chmouel Boudjnah

commit sha 2bf801ec924389cf6ee06a97a38ea93cda812a05

Avoid cases when comparing in TestGitPipelineRun Since https://github.com/tektoncd/pipeline/commit/40e340f2d0a6f5476869b33c62ca032ed61de06a, some git versions emit `could` and some emit `Could`, so let's not worry about the case. Signed-off-by: Chmouel Boudjnah <chmouel@redhat.com>

view details

Dan Lorenc

commit sha 4b759111b41df7e9cbfd0e23c50998cd6bebcaf0

Enable "gocritic" in CI, and fix associated errors. "gocritic" is "the most opinionated go linter". I was expecting to be terrified by the number of errors it would report when run, but it was surprisingly reasonable.

view details

Dan Lorenc

commit sha 52bc0037d31974cae452518cb3ba3d18e83a18fc

Add support for specifying "0" as no-timeout for PipelineRuns. This was already done in #1040 for TaskRuns, but PipelineRuns seem to have been missed. This fixes #1303.

view details

push time in 4 months

issue closed hashicorp/terraform

JSON Output Format has multi-type values

https://www.terraform.io/docs/internals/json-format.html documents the json format, index is shown to be an integer.

It turns out that index is sometimes an integer and sometimes a string depending on when you use count vs for_each. This creates a real issue when parsing unless you are using a forgiving parser.

closed time in 5 months

fraenkel

issue comment hashicorp/terraform

JSON Output Format has multi-type values

I see the actual definition is now interface{} to support both int and string. The documentation doesn't reflect that but the repo does.

fraenkel

comment created time in 5 months

issue opened hashicorp/terraform

JSON Output Format has multi-type values

https://www.terraform.io/docs/internals/json-format.html documents the json format, index is shown to be an integer.

It turns out that index is sometimes an integer and sometimes a string depending on when you use count vs for_each. This creates a real issue when parsing unless you are using a forgiving parser.

created time in 5 months

issue comment golang/go

net/http: Connection to HTTP/2 site with IdleConnTimeout hangs

@bradfitz It's a regression. It is a bit difficult to come up with a good set of tests to cover the connection management between the http and http2 side.

flexfrank

comment created time in 5 months

issue comment tektoncd/pipeline

params should support valueFrom

I started to look at this more closely. The question to answer is how the data should be reflected back. If one uses a Value, it's just as it's done today.

  1. If we support a valueFrom, should it be an envVar or volume?
  2. We can't verify the type unless we pull the value which seems odd.
  3. The current Secrets can be deprecated and replaced with Secrets via valueFrom, hence the question above regarding env vars vs volumes.
  4. I am also assuming we wouldn't carry over the optional behavior and just fail if set.
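For concreteness, a purely hypothetical sketch (this is the open design question, not an accepted API) of what a k8s-style `valueFrom` param might look like, borrowing `secretKeyRef` from core/v1 `EnvVarSource`:

```yaml
params:
- name: password
  valueFrom:            # hypothetical field; mirrors core/v1 EnvVarSource
    secretKeyRef:       # would this surface as an env var or a volume?
      name: my-secret   # illustrative Secret name
      key: password
```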
skaegi

comment created time in 5 months

issue comment golang/go

net/http: Connection to HTTP/2 site with IdleConnTimeout hangs

The second request is handed the idle connection which has been marked dead. When the http2.Transport returns with an http2.noCachedConnError, only the idle connection is removed. However, the http.Transport believes there is still a connection in the connsPerHost that can be used which isn't really true.

flexfrank

comment created time in 5 months

PR opened tektoncd/pipeline

Add logging to TimeoutHandler

Fixes #1307

Changes

The reconcile tests were causing panics or tripping race detection because the timeout handler was using the *testing.T methods after the test was already marked done. The solution is to use loggers that are not tied to the testing framework. The downside is that you may see logs from prior tests intermixed but that is usually just the one associated with stopping the timeout.

Submitter Checklist

These are the criteria that every PR should meet, please check them off as you review them:

See the contribution guide for more details.

Double check this list of stuff that's easy to miss:

Reviewer Notes

If API changes are included, additive changes must be approved by at least two OWNERS and backwards incompatible changes must be approved by more than 50% of the OWNERS, and they must first be added in a backwards compatible way.

+55 -17

0 comments

9 changed files

pr created time in 5 months

create branch fraenkel/pipeline

branch : logging

created branch time in 5 months

push event fraenkel/pipeline

hriships

commit sha f12922fdd38e0aa6f0451f7ec18cd3993999d232

Enhancements for PullRequest Resource docs It seems like users may not be able to understand how to use the `PullRequestResource` with secret configuration. This patch updates the docs with an example Task with a Secret to demonstrate how to use the PR Resource. Fixes https://github.com/tektoncd/pipeline/issues/1275

view details

Christie Wilson

commit sha 5d72079e9a1f2bd58c794fa079d459e17e717e4b

Add nightly release pipeline 🌙 This Pipeline will be triggered via prow over in the tektoncd/plumbing repo every night. It will create releases of all images normally released when doing official releases, plus also the image used for building with ko, and tag them with the date and commit they were built at, and will create the release.yaml as well. This Pipeline is missing a few things that are in the manual release Pipeline - due to #1124 unit tests have a race condition, due to #1205 the linting is flakey and it would be frustrating to lose a whole nightly release, and finally due to using v0.3.1 it's not possible to use workingDir, which is required by the golang build Task. The Pipelines and Tasks have been updated to work with Tekton Pipelines v0.3.1 because that's what we're using in our official cluster (since currently Prow requires it). Made release instructions more oriented toward someone actually making a release vs. a random person trying to run the same pipeline against their own infrastructure. Removed example Runs b/c it's much simpler to invoke via `tkn`, or Prow (these were falling out of date with how we were actually using the Pipelines/Tasks as well). Removed the `gcs-uploader-image` PipelineResource which is no longer being used. Fixes #860

view details

Tejal Desai

commit sha 94e4f63ef79481eb8be36384835eb2724ab42fce

Fix Pull Request resource example url. The Pull Request URL example in the docs points to an invalid url.

view details

Dibyo Mukherjee

commit sha f87abe0cf3c356f735ed85ad3a5dc41c634d17f1

Fix typos in release doc. Signed-off-by: Dibyo Mukherjee <dibyo@google.com>

view details

Dibyo Mukherjee

commit sha e70ca3405aaabfab39ac5dc076f429717ee147f2

Increase linter timeout to 3 minutes We have had many instances of the build tests failing due to the linter exceeding the default timeout of 1m. Signed-off-by: Dibyo Mukherjee <dibyo@google.com>

view details

Vincent-DeSousa-Tereso

commit sha 6747f355e23a3e796774daa085b4f880d5437db3

Update release README numbering

view details

Andrea Frittoli

commit sha 05c41647a7b3fd856a679d241a611ccc9d3f9bc4

Fix release pipeline to handle #1122 Since #1122, we do not treat outputs that were inputs as well in any special way, meaning that the output folder will be empty unless we copy the input to the output. Fix the release pipeline to handle that. Fixes: #1325

view details

letty

commit sha 11c95a06ef8d6da11ecc09c730f8fded0bd5467f

Emit pipelinerun event when it is canceled Fixes #1229 Signed-off-by: letty <letty.ll@alibaba-inc.com>

view details

Jason Hall

commit sha eb6edee4d54ba1ea62fcb26eceda1dbf6d1dadf5

Add managed-by label to Pods created from TaskRuns Also annotate the controllers' Deployments and created Pods with labels denoting their purpose. This is a Kubernetes best practice (https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/) and will allow operators to understand at a high level where these Pods come from.

view details

Michael Fraenkel

commit sha 0d3e834e6ef956e07ab81a56727d6a6b9a49431e

ServiceAccountName replaces ServiceAccount Following in the k8s footsteps, deprecate ServiceAccount and replace it with ServiceAccountName. ServiceAccountName will always take precedence over ServiceAccount. If ServiceAccountName is not set, the value provided by ServiceAccount will be used instead.

view details

Michael Fraenkel

commit sha 056ffefd4c4ab334a2f6cebdf43101eece3e7d42

ServiceAccountNames replaces ServiceAccounts Following in the k8s footsteps, deprecate ServiceAccounts and replace it with ServiceAccountNames. ServiceAccountNames will always take precedence over ServiceAccounts.

view details

push time in 5 months

pull request comment tektoncd/pipeline

ServiceAccountName replaces ServiceAccount

/test tekton-pipeline-unit-tests

fraenkel

comment created time in 5 months

pull request comment tektoncd/pipeline

ServiceAccountName replaces ServiceAccount

/retest

fraenkel

comment created time in 5 months

push event fraenkel/pipeline

Michael Fraenkel

commit sha f6509e170aad6aed7de554ddd2ae81c4143ef611

ServiceAccountNames replaces ServiceAccounts Following in the k8s footsteps, deprecate ServiceAccounts and replace it with ServiceAccountNames. ServiceAccountNames will always take precedence over ServiceAccounts.

view details

push time in 5 months

issue comment hashicorp/terraform

Destroyed module leaves config behind in state

I hit the same issue although I am not using modules. I believe it's related to when each is set.

mattlqx

comment created time in 5 months

Pull request review comment tektoncd/pipeline

ServiceAccountName replaces ServiceAccount

     type PipelineRunSpec struct {
     	// Params is a list of parameter names and values.
     	Params []Param `json:"params,omitempty"`
     	// +optional
    -	ServiceAccount string `json:"serviceAccount"`
    +	ServiceAccountName string `json:"serviceAccountName,omitempty"`
    +	// DeprecatedServiceAccount is a depreciated alias for ServiceAccountName.
    +	// Deprecated: Use serviceAccountName instead.
    +	// +optional
    +	DeprecatedServiceAccount string `json:"serviceAccount,omitempty"`
     	// +optional
     	ServiceAccounts []PipelineRunSpecServiceAccount `json:"serviceAccounts,omitempty"`

I couldn't decide if it was similar to ServiceAccountName -> ServiceAccount in v1.core or not. Let me know if you want it switched.

fraenkel

comment created time in 5 months

pull request comment tektoncd/pipeline

ServiceAccountName replaces ServiceAccount

The failed integration test confuses me because the pipeline that it cannot find is listed right above it.

fraenkel

comment created time in 5 months

push event fraenkel/pipeline

Eric Sorenson

commit sha 037a01bfb36d7b93abde7e0cd902a93a220f398e

Typos and correctness fixes for creds-init CLI doc Quick patch to address documentation errors. Thanks @MemorySpring for the initial change, I'm just carrying this forward. This closes and supersedes #1238

view details

Vincent Demeester

commit sha 5bb204e789b5120cc91a9b165be5591e112022fb

Remove deprecated podSpec field in favor of podTemplate 🛄 We introduced `podTemplate` in the last release to specify podSpec-specific fields, and deprecated `nodeSelector`, `tolerations` and `affinity`. This removes those deprecated fields. Signed-off-by: Vincent Demeester <vdemeest@redhat.com>

view details

Dan Lorenc

commit sha f5ff8a6a24551289801519e2887aefd0bd80ee8d

Change the behavior of outputs that are also used as inputs. This change makes the handling of Resources within a Task consistent, regardless of whether the same Resource is used as both an input and an output. Previously these were special cased, which made it hard to write Tasks consistently. This commit also makes a few minor changes to the way our bash output gets logged. I discovered this was missing during debugging, and made it consistent with the gsutil wrapper. This is a followup to https://github.com/tektoncd/pipeline/pull/1119 and should be submitted once the next release is cut.

view details

Christie Wilson

commit sha 5e3276d3df90c4d9b7c8d42c16c058a8d2348809

Update fan in / fan out test (no automatic copy) 📋 Now that we don't automatically copy the content of an input to an output (when the same resource is used as both an input and an output), this means that: - Our fan-in / fan-out test will need to explicitly write to the output path, instead of writing to the input path and assuming it would get copied over (the very behaviour we're changing in #1188) - Data previously written to an output that is used as an input, and then an output later on, will be lost unless explicitly copied. In the update to the examples this was handled by symlinking the input to the output; I decided to instead update the test to no longer expect to see a file that was written by the first task in the graph and to not copy it explicitly. Note that there is actually a race between the two tasks fanning out - if they were writing the same file we would not be able to reliably predict which would win. Part of fixing #1188

view details

Christie Wilson

commit sha 17c45eea313fc78b3d1f8f3d977bb91b25f62eaa

Remove unused function 👻 The function `GetLogMessages` isn't used anywhere. I had tried to remove it to see if it was causing the data race in #1124 - it _isn't_ - but still it's not being used anywhere so why not remove :)

view details

Christie Wilson

commit sha ee5f2d0ee9e9e400134cc3072808ea5acdfb493b

Remove logging from timeout handler ✏️ Logging in the timeout handler was added as part of #731 cuz it helped us debug when the timeout handler didn't work as expected. Unfortunately it looks like the logger we're using can't be used in multiple go routines (https://github.com/uber-go/zap/issues/99 may be related). Removing this logging to fix #1124, hopefully can find a safe way to add logging back in #1307.

view details

Christie Wilson

commit sha 10b64272fd2fabbf022acd634d4ddb45866528d2

Make GetRunKey threadsafe 🔒 GetRunKey is accessed in goroutines via the timeout_handler; accessing attributes of an object in a goroutine is not threadsafe. This is used as a key in a map, so for now replacing this with a value that should be unique but also threadsafe to fix #1124

view details

Christie Wilson

commit sha 1c4d4b21414c0ee72e439b299bda612e6aad5d5d

Remove support for ${} syntax 🗑️ In #850 we decided that to make our syntax more similar to k8s style syntax, we will use $() for variable substitution instead of ${}. In a later iteration we may also want to make it so that anything that can be accessed as a variable is also available as an env var to the running container, but that is TBD. In #1172 we added support for the new syntax, $(), and continued support for ${}, which was released in 0.6. This commit removes support for ${}. Fixes #1170

view details

Michael Fraenkel

commit sha f4b4e2482ccdd25fccbcd1c8f71b3dbb5175addc

ServiceAccountName replaces ServiceAccount Following in the k8s footsteps, deprecate ServiceAccount and replace it with ServiceAccountName. ServiceAccountName will always take precedence over ServiceAccount. If ServiceAccountName is not set, the value provided by ServiceAccount will be used instead.

view details

push time in 5 months

push event fraenkel/pipeline

Michael Fraenkel

commit sha 5190761a1accb320c2040106f6fb90bfc3b5e59e

ServiceAccountName replaces ServiceAccount Following in the k8s footsteps, deprecate ServiceAccount and replace it with ServiceAccountName. ServiceAccountName will always take precedence over ServiceAccount. If ServiceAccountName is not set, the value provided by ServiceAccount will be used instead.

view details

push time in 5 months

PR opened terraform-providers/terraform-provider-vault

Allow identity group alias name to be updated

The identity group alias name is allowed to be updated. Update the testcase to update the identity group alias fields.

+26 -14

0 comments

2 changed files

pr created time in 5 months

push event fraenkel/terraform-provider-vault

Michael Fraenkel

commit sha 81e8631c81859f5f4acdd4c8c5f574a7a5c84a1a

Allow identity group alias name to be updated - test updating identity group alias fields

view details

push time in 5 months

more