GoogleCloudPlatform/golang-docker 204

Docker images for Go on Google App Engine

shantuo/The-Art-Of-Programming-By-July 3

This is the initial draft of The Art of Programming: Insights on Interviews and Algorithms (《程序员编程艺术:面试和算法心得》). It was moved to Word for polishing in June 2014, and the print edition went on sale in 2015.

shantuo/appengine 0

Go App Engine packages

shantuo/appengine-sidecars-docker 0

A set of services that run alongside your Google App Engine Flexible VM application containers. Each service runs inside its own Docker container along with your application's source code.

shantuo/cracking-the-coding-interview 0

Solutions for the book: Cracking the coding interview V4. Written in C++.

shantuo/gddo 0

Go Doc Dot Org

shantuo/gddoexp 0

Library to determine if a Godoc package should be archived

shantuo/go-cloud 0

A library and tools for open cloud development in Go.

shantuo/golang-docker 0

Docker images for Go on Google App Engine

shantuo/node-tutorial-2-restful-app 0

Learn the basics of REST and use them to build an easy, fast, single-page web app.

push event golang/text

Patrick Gundlach

commit sha 967b8f6126b019daebc17c221889cb59560fa8d1

text/unicode/bidi: implement API, remove panics

The bidi API splits strings with mixed left-to-right (ltr) and right-to-left (rtl) parts into substrings (segments). Each segment contains a substring, a direction of text flow (either ltr or rtl), and the start and end positions in the input. The paragraph validators do not panic; instead, the newParagraph function returns an error message in case the input is invalid.

Fixes golang/go#42356

Change-Id: I90cafc8fadb0cf6936dfb1ab373586017147d709
Reviewed-on: https://go-review.googlesource.com/c/text/+/267857
Trust: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Marcel van Lohuizen <mpvl@golang.org>

push time in 4 hours

issue opened google/wire

Can't use wire.Bind inside wire.NewSet

Describe the bug

Can't use wire.Bind inside wire.NewSet; wire fails to generate with the following error:

wire.Bind of concrete type "..." to interface "...", but mySet does not include a provider for "..."

To Reproduce

// Build tag, package clause, and import assumed for a self-contained repro;
// the package name is arbitrary.
//+build wireinject

package example

import "github.com/google/wire"

type AType string
type AInterface interface{}
type Result string

func NewResult(a AInterface) Result {
	return "abc"
}

var mySet = wire.NewSet(
	NewResult,
	wire.Bind(new(AInterface), new(AType)),
)

func BuildFromSet(a AType) (Result, error) {
	panic(wire.Build(
		mySet,
	))
}

Expected behavior

It should generate the same code as the following:

func BuildDirectly(a AType) (Result, error) {
	panic(wire.Build(
		NewResult,
		wire.Bind(new(AInterface), new(AType)),
	))
}

Version

v0.4.0

created time in 8 hours

issue comment google/go-cloud

pubsub/azuresb: Receive blocks even though it has received at least one message

Will try it tomorrow. I think the current solution for all drivers can be improved. It would be great if we came up with a way that always delivers messages as soon as one is available, instead of unnecessarily waiting for some goroutines to time out.

Not only is it faster, it also means less network traffic and probably lower cost for the end user.
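
To make that suggestion concrete, here is a minimal, self-contained sketch of one possible approach (my own assumption, not code from go-cloud): fan out the fetchers, but hand back results as soon as the first non-empty batch arrives instead of waiting for every goroutine with g.Wait(). The message type and fetchFunc signature are made up for illustration.

package sketch

import "context"

// message and fetchFunc are hypothetical stand-ins for the driver types.
type message struct{ Body []byte }

type fetchFunc func(ctx context.Context, max int) ([]*message, error)

// receiveFirst starts one fetcher per batch size and returns as soon as any
// fetcher delivers at least one message. Note: in a real driver the batches
// from the remaining fetchers would still have to be kept (queued) rather
// than dropped, or messages would be lost until redelivery.
func receiveFirst(ctx context.Context, batches []int, fetch fetchFunc) ([]*message, error) {
	results := make(chan []*message, len(batches))
	errc := make(chan error, len(batches))
	for _, n := range batches {
		n := n // capture loop variable for the goroutine
		go func() {
			msgs, err := fetch(ctx, n)
			if err != nil {
				errc <- err
				return
			}
			results <- msgs
		}()
	}
	var firstErr error
	for range batches {
		select {
		case msgs := <-results:
			if len(msgs) > 0 {
				return msgs, nil // deliver as soon as something is available
			}
		case err := <-errc:
			if firstErr == nil {
				firstErr = err
			}
		case <-ctx.Done():
			return nil, ctx.Err()
		}
	}
	// All fetchers finished without delivering anything.
	return nil, firstErr
}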

Segflow

comment created time in 20 hours

issue comment google/go-cloud

pubsub/azuresb: Receive blocks even though it has received at least one message

Hrm. Remind me to be very careful about changes to ReceiveBatch :-|.

I have restored the rCtx in that PR and clarified the driver docstring; it is important that ReceiveBatch not block for too long waiting for messages. I checked, and the other drivers already do this correctly (as did azuresb before we started messing with it).

Can you try again with the latest on the PR?

Segflow

comment created time in 21 hours

push event google/go-cloud

Alex Snast

commit sha e0472ce30c667fcbd379d88ce9beb8958ce05539

pubsub: fix data race accessing subscription error after shutdown (#2901)

push time in a day

PR merged google/go-cloud

pubsub: fix data race accessing subscription error (cla: yes)

When running go test with -race I hit the following splat:

WARNING: DATA RACE
Write at 0x00c00048d1e8 by goroutine 68:
  gocloud.dev/pubsub.(*Subscription).Receive.func2()
      /home/runner/go/pkg/mod/gocloud.dev@v0.20.1-0.20200914152856-6be5a462804a/pubsub/pubsub.go:560 +0x4a4

Previous read at 0x00c00048d1e8 by goroutine 119:
  gocloud.dev/pubsub.(*Subscription).Shutdown()
      /home/runner/go/pkg/mod/gocloud.dev@v0.20.1-0.20200914152856-6be5a462804a/pubsub/pubsub.go:680 +0x61d
  github.com/facebookincubator/symphony/pkg/ev.(*TopicReceiver).Shutdown()

The issue occurs because .Shutdown reads s.err after releasing the lock.
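
For illustration, here is a minimal sketch of the race pattern and the style of fix (field and method names are simplified stand-ins, not the actual pubsub.go code):

package sketch

import "sync"

type subscription struct {
	mu  sync.Mutex
	err error
}

// shutdownRacy demonstrates the bug: the lock is released before err is read,
// so a concurrent writer (e.g. the Receive background goroutine) can race.
func (s *subscription) shutdownRacy() error {
	s.mu.Lock()
	// ... clean up state under the lock ...
	s.mu.Unlock()
	return s.err // BUG: read happens after the lock is released
}

// shutdownFixed reads err while still holding the lock.
func (s *subscription) shutdownFixed() error {
	s.mu.Lock()
	defer s.mu.Unlock()
	// ... clean up state under the lock ...
	return s.err
}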

+1 -1

1 comment

1 changed file

alexsn

pr closed time in a day

push event golang/gofrontend

Ian Lance Taylor

commit sha 78c9a657fdbc9e812d39910fb93fbae4affe4360

log/syslog: correct asm name for C function

Patch from Rainer Orth.

Change-Id: Ie93e0a4f003fa14e7236fd3d952afb37af3caeaa
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/272259
Trust: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Than McIntosh <thanm@google.com>
Reviewed-by: Cherry Zhang <cherryyz@google.com>

push time in a day

pull request comment google/go-cloud

pubsub: fix data race accessing subscription error

Codecov Report

Merging #2901 (5657c61) into master (613a96d) will increase coverage by 0.03%. The diff coverage is 100.00%.

@@            Coverage Diff             @@
##           master    #2901      +/-   ##
==========================================
+ Coverage   66.54%   66.58%   +0.03%     
==========================================
  Files         116      116              
  Lines       11996    11989       -7     
==========================================
  Hits         7983     7983              
+ Misses       3354     3347       -7     
  Partials      659      659              
Impacted Files Coverage Δ
pubsub/pubsub.go 91.95% <100.00%> (ø)
docstore/driver/actionkind_string.go 0.00% <0.00%> (ø)

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data. Powered by Codecov. Last update 613a96d...5657c61.

alexsn

comment created time in a day

PR opened google/go-cloud

pubsub: fix data race accessing subscription error

When running go test with -race I hit the following splat:

WARNING: DATA RACE
Write at 0x00c00048d1e8 by goroutine 68:
  gocloud.dev/pubsub.(*Subscription).Receive.func2()
      /home/runner/go/pkg/mod/gocloud.dev@v0.20.1-0.20200914152856-6be5a462804a/pubsub/pubsub.go:560 +0x4a4

Previous read at 0x00c00048d1e8 by goroutine 119:
  gocloud.dev/pubsub.(*Subscription).Shutdown()
      /home/runner/go/pkg/mod/gocloud.dev@v0.20.1-0.20200914152856-6be5a462804a/pubsub/pubsub.go:680 +0x61d
  github.com/facebookincubator/symphony/pkg/ev.(*TopicReceiver).Shutdown()

The issue occurs because .Shutdown reads s.err after releasing the lock.

+1 -1

0 comment

1 changed file

pr created time in a day

issue opened google/go-cloud

pubsub/azuresb: Receive blocks even though it has received at least one message

Describe the bug

My code uses https://github.com/google/go-cloud/pull/2898, and it's just an infinite loop calling sub.Receive:

ctx := context.Background()
for {
	m, err := s.sub.Receive(ctx)
	if err != nil {
		select {
		case <-ctx.Done():
			return
		default:
			log.With(zap.Error(err)).Fatal("non retryable sub.Receive() error")
		}
	}
	fmt.Printf("Got Message: %s", string(m.Body))

	m.Ack()
}
  1. I published 1 message m1 -> .Receive returned it (as expected)
  2. I published a second message m2 -> Receive still blocked forever
  3. I published a third message m3 -> Receive returned m2, and then m3

I added some debug statements to pubsub.go and noticed that batchSize gets doubled every time, and Receive only returns when there are batchSize messages available.

I added more debug statements to azuresb.go and tried the same thing again. I noted that at step [2], ReceiveOne actually received message m2, but that message was not returned by sub.Receive because it was waiting for 2 goroutines to finish before returning.

for _, maxMessagesInBatch := range batches {
	// Make a copy of the loop variable since it will be used by a goroutine.
	curMaxMessagesInBatch := maxMessagesInBatch
	g.Go(func() error {
		var msgs []*driver.Message
		err := retry.Call(ctx, gax.Backoff{}, s.driver.IsRetryable, func() error {
			var err error
			ctx2 := s.tracer.Start(ctx, "driver.Subscription.ReceiveBatch")
			defer func() { s.tracer.End(ctx2, err) }()
			msgs, err = s.driver.ReceiveBatch(ctx2, curMaxMessagesInBatch)
			return err
		})
		if err != nil {
			return wrapError(s.driver, err)
		}
		mu.Lock()
		defer mu.Unlock()
		q = append(q, msgs...)
		return nil
	})
}

In step 2, the batches array is [1, 1], so the code spawned 2 goroutines in an errgroup and kept waiting for both of them to finish using g.Wait() before returning.

Expected behavior

When there is a message available, .Receive should return it.

Version

v0.20 - branch https://github.com/google/go-cloud/pull/2898

created time in a day

issue comment google/go-cloud

pubsub/azuresb: sub.Receive returned on transient error

Works as expected!

Segflow

comment created time in 3 days

issue comment google/go-cloud

pubsub/azuresb: sub.Receive returned on transient error

Can you try again from the branch? It should make some known errors like PermissionDenied or NotFound permanent (IsRetryable returns false), but most other errors retryable.

Segflow

comment created time in 3 days

issue comment google/go-cloud

pubsub/azuresb: sub.Receive returned on transient error

Ah, OK. Before #2891, we would basically treat everything as retryable, since we threw away any errors; after it, nothing is retryable. #2898 fixes the "rctx timeout" errors, but not anything else.

It's safest to treat everything as retryable unless you know it's not. I'll update the PR to base IsRetryable on the errorCode.
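
For reference, a hedged sketch of what basing IsRetryable on the error code could look like (errorCodeOf is a hypothetical helper standing in for the driver's error-to-code mapping; the actual PR may differ):

package sketch

import "gocloud.dev/gcerrors"

// errorCodeOf is a hypothetical stand-in for the driver's mapping from
// Service Bus errors to portable gcerrors codes.
func errorCodeOf(err error) gcerrors.ErrorCode {
	// ... inspect err; return gcerrors.Unknown when unsure ...
	return gcerrors.Unknown
}

// isRetryable treats known-permanent errors as non-retryable and everything
// else as transient, as described in the comment above.
func isRetryable(err error) bool {
	switch errorCodeOf(err) {
	case gcerrors.NotFound, gcerrors.PermissionDenied:
		return false
	default:
		return true
	}
}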

Segflow

comment created time in 3 days

issue comment google/go-cloud

pubsub/azuresb: sub.Receive returned on transient error

The concrete type decides whether to retry only if the driver explicitly makes IsRetryable(err) return true. The concrete type does not decide anything itself; it just asks the driver whether the error is retryable and retries if the answer is yes.

// IsRetryable implements driver.Subscription.IsRetryable.
func (s *subscription) IsRetryable(error) bool {
	// Let the Service Bus SDK recover from any transient connectivity issue.
	return false
}

But the Service Bus SDK is not retrying anything.

Segflow

comment created time in 3 days

issue comment google/go-cloud

pubsub/azuresb: sub.Receive returned on transient error

FYI, the returned error was castable to https://golang.org/pkg/net/#Error and both Temporary() and Timeout() returned true.
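
For context, a small sketch of that kind of check (my own illustration, not code from the driver):

package sketch

import (
	"errors"
	"net"
)

// isTransientNetError reports whether err unwraps to a net.Error whose
// Timeout() is true; Temporary() could be consulted the same way, though it
// is deprecated in recent Go releases.
func isTransientNetError(err error) bool {
	var nerr net.Error
	return errors.As(err, &nerr) && nerr.Timeout()
}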

Segflow

comment created time in 3 days

issue comment google/go-cloud

pubsub/azuresb: sub.Receive returned on transient error

I based my work on that branch, as I mentioned in the PR.

Segflow

comment created time in 3 days

issue comment google/go-cloud

pubsub/azuresb: sub.Receive returned on transient error

Yes, I think this was introduced by #2891. It should be fixed by #2898.

Segflow

comment created time in 3 days

issue opened google/go-cloud

pubsub/azuresb: driver returned on transient error.

Describe the bug

I basically have this loop:

for {
	m, err := s.sub.Receive(ctx)
	if err != nil {
		select {
		case <-ctx.Done():
			return
		default:
			log.Fatalf("non retryable sub.Receive(): %v", err)
		}
	}

	fmt.Println(string(m.Body))

	m.Ack()
}

I left it running for a while and kept using my laptop. A few moments later I got this error in my terminal:

pubsub (code=Unknown):
    gocloud.dev/pubsub.(*Subscription).getNextBatch.func1
        /Users/xxxx/go-cloud/pubsub/pubsub.go:655
  - read tcp 192.168.1.5:49930->168.62.54.52:5671: i/o timeout

To Reproduce

  • Run Receive in a loop
  • Try to trigger an i/o timeout error

Expected behavior

The concrete type should automatically handle this type of transient error.

Version

0.20

Additional context

Using the changes introduced here https://github.com/google/go-cloud/pull/2898

I also added some more debug statements; here is the error returned by the ReceiveOne call:

(*net.OpError)(0xc000098640)(read tcp 192.168.1.5:49930->168.62.54.52:5671: i/o timeout)

created time in 3 days

issue comment google/go-cloud

pubsub/azuresb: retry on internal context "rctx" expiration

@vangent before the PR, the retry still happened inside the azuresb driver. I believe the new code is much better; it also better reflects how azuresb really works (single receive). Thanks for taking care of this, much appreciated.

Segflow

comment created time in 3 days

issue comment google/go-cloud

pubsub/azuresb: retry on internal context "rctx" expiration

I sent a PR to simplify this code a lot. I'm not sure why we're trying to collect multiple messages here when the underlying call to Azure just returns a single one; I suspect that the previous call (Receive) returned multiple messages and the code wasn't refactored correctly when that changed.

The new code tells the concrete type that ReceiveBatch for azuresb only fetches a single message at a time, and simplifies the driver code significantly (no rctx, no special error handling). The concrete type reacts with multiple concurrent ReceiveBatch calls to the driver under load, so the performance is equivalent.
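
A minimal sketch of the simplified shape described above (an assumption about the design, not the merged azuresb code; receiveOne is a hypothetical wrapper around the Service Bus SDK call):

package sketch

import (
	"context"

	"gocloud.dev/pubsub/driver"
)

// receiveOne is a hypothetical wrapper around the Service Bus SDK's
// single-message receive; it returns nil, nil when no message arrived.
func receiveOne(ctx context.Context) (*driver.Message, error) {
	// ... call into the SDK here ...
	return nil, nil
}

// receiveBatch ignores maxMessages and returns at most one message per call;
// under load, the portable type compensates by issuing several concurrent
// ReceiveBatch calls.
func receiveBatch(ctx context.Context, maxMessages int) ([]*driver.Message, error) {
	m, err := receiveOne(ctx)
	if err != nil {
		return nil, err
	}
	if m == nil {
		return nil, nil
	}
	return []*driver.Message{m}, nil
}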

Segflow

comment created time in 3 days

pull request comment google/go-cloud

pubsub/azuresb: Simplify ReceiveBatch to only return 1 message; fixes error handling

Codecov Report

Merging #2898 (4ea5202) into master (613a96d) will increase coverage by 0.06%. The diff coverage is 5.00%.

@@            Coverage Diff             @@
##           master    #2898      +/-   ##
==========================================
+ Coverage   66.54%   66.61%   +0.06%     
==========================================
  Files         116      116              
  Lines       11996    11988       -8     
==========================================
+ Hits         7983     7986       +3     
+ Misses       3354     3342      -12     
- Partials      659      660       +1     
Impacted Files Coverage Δ
pubsub/azuresb/azuresb.go 22.04% <5.00%> (+0.08%) :arrow_up:
pubsub/rabbitpubsub/rabbit.go 78.78% <0.00%> (-1.14%) :arrow_down:
docstore/driver/actionkind_string.go 0.00% <0.00%> (ø)
blob/s3blob/s3blob.go 89.47% <0.00%> (+0.52%) :arrow_up:
pubsub/pubsub.go 93.00% <0.00%> (+1.04%) :arrow_up:
internal/retry/retry.go 100.00% <0.00%> (+7.69%) :arrow_up:

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data. Powered by Codecov. Last update 613a96d...b4b3a35.

vangent

comment created time in 3 days

PR opened google/go-cloud

pubsub/azuresb: Simplify ReceiveBatch to only return 1 message; fixes error handling

Fixes #2897.

This code is simpler than before; it relies on the concrete type behavior of retrying driver.ReceiveBatch calls as needed, instead of trying to batch up some messages artificially inside ReceiveBatch.

It also fixes a bug introduced in #2891, where a timeout in the call to ReceiveOne would cause failures.

+41 -35

0 comment

1 changed file

pr created time in 3 days

issue comment google/go-cloud

pubsub/azuresb: retry on internal context "rctx" expiration

Didn't you just change this in https://github.com/google/go-cloud/pull/2891 ? Before that PR, we would retry here.

Segflow

comment created time in 3 days

issue opened google/go-cloud

pubsub/azuresb: retry on internal context "rctx" expiration

Describe the bug

Unlike the GCP and AWS implementations, Azure's ReceiveBatch reads messages one by one.

For each single-message read, we use a new context with a 1-second timeout:

rctx, cancel := context.WithTimeout(ctx, listenerTimeout)
defer cancel()

If any request takes more than 1 second to respond, ReceiveOne will return a DeadlineExceeded error to the concrete type.

The concrete type calls ReceiveBatch as follows:

err := retry.Call(ctx, gax.Backoff{}, s.driver.IsRetryable, func() error {
	var err error
	ctx2 := s.tracer.Start(ctx, "driver.Subscription.ReceiveBatch")
	defer func() { s.tracer.End(ctx2, err) }()
	msgs, err = s.driver.ReceiveBatch(ctx2, curMaxMessagesInBatch)
	return err
})
if err != nil {
	return wrapError(s.driver, err)
}

And Azure's pubsub IsRetryable always returns false.

Thus a single slow request makes .Receive return immediately with no retry, and this is caused by an internal context (rctx), not even the user's context.

To Reproduce

  • Use azure's pubsub
  • Reduce the timeout at https://github.com/google/go-cloud/blob/master/pubsub/azuresb/azuresb.go#L85:L85 enough to make it fail sometimes
  • Call .Receive with context.Background()

Expected behavior

I think the expected behavior is that Receive should block until one message is available, rather than returning with DeadlineExceeded.

In Azure's ReceiveBatch we have:

rctx, cancel := context.WithTimeout(ctx, listenerTimeout)
defer cancel()
var messages []*driver.Message

// Loop until rctx is Done, or until we've received maxMessages.
for len(messages) < maxMessages && rctx.Err() == nil {
	// ...
}

I think we should not return when rctx is done, and should retry here instead, because the concrete type is supposed to handle ReceiveBatch errors and the external context, not how ReceiveBatch works internally.
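
A self-contained sketch of what that could look like (my reading of the suggestion, not the fix that was eventually adopted; the message type and receiveOne signature are stand-ins):

package sketch

import (
	"context"
	"time"
)

type message struct{ Body []byte }

// receiveOneFunc is a hypothetical stand-in for the Service Bus ReceiveOne call.
type receiveOneFunc func(ctx context.Context) (*message, error)

// receiveBatch keeps a short per-attempt timeout (rctx) but retries on the
// caller's ctx instead of surfacing a single rctx expiry as an error. It
// returns once at least one message has been collected, the batch is full,
// or the caller's ctx is done.
func receiveBatch(ctx context.Context, maxMessages int, attemptTimeout time.Duration, receiveOne receiveOneFunc) ([]*message, error) {
	var messages []*message
	for len(messages) < maxMessages && ctx.Err() == nil {
		rctx, cancel := context.WithTimeout(ctx, attemptTimeout)
		m, err := receiveOne(rctx)
		cancel()
		if err != nil {
			// A per-attempt timeout is not an error for the caller; stop only
			// if the caller's ctx is done or we already have something.
			if ctx.Err() != nil || len(messages) > 0 {
				break
			}
			continue
		}
		if m != nil {
			messages = append(messages, m)
		}
	}
	if len(messages) == 0 && ctx.Err() != nil {
		return nil, ctx.Err()
	}
	return messages, nil
}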

Version

0.20

created time in 3 days

push event golang/gofrontend

Ian Lance Taylor

commit sha 36a7b789130b415c2fe7f8e3fc62ffbca265e3aa

libgo: update to Go 1.15.5 release

Change-Id: Id7f3bbbbdb7c97944579b0ef0f3c254b4346a041
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/272146
Trust: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Than McIntosh <thanm@google.com>

push time in 4 days

pull request comment google/go-cloud

blob/azureblob: Interpret more errors into error codes

Codecov Report

Merging #2896 (2df16ae) into master (613a96d) will increase coverage by 0.03%. The diff coverage is 0.00%.

@@            Coverage Diff             @@
##           master    #2896      +/-   ##
==========================================
+ Coverage   66.54%   66.58%   +0.03%     
==========================================
  Files         116      116              
  Lines       11996    11993       -3     
==========================================
+ Hits         7983     7985       +2     
+ Misses       3354     3349       -5     
  Partials      659      659              
Impacted Files Coverage Δ
blob/azureblob/azureblob.go 79.31% <0.00%> (-0.69%) :arrow_down:
docstore/driver/actionkind_string.go 0.00% <0.00%> (ø)
blob/s3blob/s3blob.go 89.47% <0.00%> (+0.52%) :arrow_up:

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data. Powered by Codecov. Last update 613a96d...2df16ae.

vangent

comment created time in 4 days

pull request comment google/go-cloud

blob/azureblob: Interpret more errors into error codes

@bensussman FYI

vangent

comment created time in 4 days

issue comment google/go-cloud

blob/azureblob: Non-existent storage containers don't error until timeout after 60s, and error code is "Unknown"

If you create your Pipeline with PipelineOptions like this: azblob.PipelineOptions{Retry: azblob.RetryOptions{MaxTries: 1}}, then it fails right away. It looks like Pipeline retries for up to a minute by default.

Can you try patching in the changes in https://github.com/google/go-cloud/pull/2896 and see if that helps? I think it should make the error codes better (and in particular, bucket.IsAccessible might work for an invalid storage account name).

It doesn't solve the "it takes 60 seconds" problem; I think you need to change the PipelineOptions as described above for that.
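
For reference, a small sketch of constructing a pipeline with those options using the Azure/azure-storage-blob-go azblob package (the anonymous credential is just for illustration; how the pipeline is then passed to azureblob.OpenBucket is omitted here):

package main

import (
	"fmt"

	"github.com/Azure/azure-storage-blob-go/azblob"
)

func main() {
	// Fail fast instead of letting the pipeline retry for up to a minute.
	opts := azblob.PipelineOptions{
		Retry: azblob.RetryOptions{MaxTries: 1},
	}
	p := azblob.NewPipeline(azblob.NewAnonymousCredential(), opts)
	fmt.Printf("pipeline configured: %T\n", p)
}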

bensussman

comment created time in 4 days

PR opened google/go-cloud

blob/azureblob: Interpret more errors into error codes

Updates #2893.

+5 -0

0 comment

1 changed file

pr created time in 4 days
