Sotirios Mantziaris (mantzas)
Beat · Athens, Greece
http://blog.mantziaris.eu
https://gr.linkedin.com/in/sotirismantziaris

beatlabs/harvester 59

Harvest configuration, watch and notify subscriber

beatlabs/patron 56

Microservice framework following best cloud practices with a focus on productivity.

mantzas/incata 18

Event Sourcing Data Access Library

dbaltas/ergo 9

moved to beatlabs/ergo

mantzas/adaptlog 9

Logging adapter

mantzas/parwork 7

Parallel work processing library

mantzas/DotNetBenchmarks 1

Various benchmarks

mantzas/afero 0

A FileSystem Abstraction System for Go

mantzas/agnostic_backend 0

A ruby gem to index and query ruby objects to/from remote backends

mantzas/andreimihu.com 0

Source for http://andreimihu.com

PR opened beatlabs/patron

Patron Team Refactoring
+1 -1

0 comments

1 changed file

pr created time in an hour

PR closed mantzas/patron

Patron Team Refactoring

<!-- Thanks for taking the time to make a PR.

Before creating a pull request, please make sure:

  • Your PR solves one problem for which an issue exists and a solution has been discussed
  • You have read the guide for contributing
    • See https://github.com/beatlabs/patron/blob/master/CONTRIBUTING.md
  • You signed all your commits (otherwise we won't be able to merge the PR)
    • See https://github.com/beatlabs/patron/blob/master/CONTRIBUTING.md#sign-your-work
  • You added unit tests for the new functionality
  • You mention in the PR description which issue it is addressing, e.g. "Resolves #123" -->

Which problem is this PR solving?

<!-- REQUIRED -->

Short description of the changes

<!-- REQUIRED -->

+609008 -20800

0 comments

1976 changed files

mantzas

pr closed time in an hour

PR opened mantzas/patron

Patron Team Refactoring

<!-- Thanks for taking the time to make a PR.

Before creating a pull request, please make sure:

  • Your PR solves one problem for which an issue exists and a solution has been discussed
  • You have read the guide for contributing
    • See https://github.com/beatlabs/patron/blob/master/CONTRIBUTING.md
  • You signed all your commits (otherwise we won't be able to merge the PR)
    • See https://github.com/beatlabs/patron/blob/master/CONTRIBUTING.md#sign-your-work
  • You added unit tests for the new functionality
  • You mention in the PR description which issue it is addressing, e.g. "Resolves #123" -->

Which problem is this PR solving?

<!-- REQUIRED -->

Short description of the changes

<!-- REQUIRED -->

+609008 -20800

0 comments

1976 changed files

pr created time in an hour

create branch beatlabs/patron

branch : add-new-member

created branch time in an hour

created tag beatlabs/patron

tag v0.42.0

Microservice framework following best cloud practices with a focus on productivity.

created time in 6 days

release beatlabs/patron

v0.42.0

released time in 6 days

issue closed beatlabs/patron

Consume Kafka messages since a given duration

Is your feature request related to a problem? Please describe

We would like to extend the simple Kafka consumer so that it starts consuming messages since a given duration.

Describe the solution

Create an additional option WithDurationOffset that would have two parameters:

  • A time.Duration
  • A func(sarama.ConsumerMessage) (error, time.Time) that would be used to extract the time from a given Kafka message. We want this solution to be generic, not necessarily based on the message timestamp.

Then, for each partition, we would create a function that would retrieve the starting offset (the first one being in the time interval we are looking for).

Last but not least, we would enhance the simple consumer to consume messages from the offsets retrieved (to be done on each partition).

We also have to find a way to inform the service using Patron that all the consumers have reached the latest offset, so that it can potentially update the readiness.

closed time in 6 days

teivah

push event beatlabs/patron

Teiva Harsanyi

commit sha 11f9bb2dad740b04b7b34446dc45cd8fd5ee028c

Consume Kafka messages since a given duration (#227)


push time in 6 days

PR merged beatlabs/patron

Consume Kafka messages since a given duration

Which problem is this PR solving?

https://github.com/beatlabs/patron/issues/226

Short description of the changes

  • New option WithDurationOffset
  • New feature to retrieve the offsets per partition given a duration
  • Change in simple Consume() to consume from the offsets previously retrieved
+803 -11

6 comments

9 changed files

teivah

pr closed time in 6 days

issue closed beatlabs/patron

Ordering is Not Guaranteed with a Simple Kafka Consumer

Is this a bug? Please provide steps to reproduce, a failing test etc.

While iterating over the messages of a partition consumer, we spin up some work into another goroutine: https://github.com/beatlabs/patron/blob/7c8e35c9547d99025d53a97b25526496849c8068/component/async/kafka/simple/simple.go#L132. There is no guarantee that a goroutine created before another will be scheduled and completed first. Hence, we cannot guarantee the ordered delivery of messages to the Patron clients.

For information, I was able to trigger this issue while testing TestSimpleConsume for example.

Describe the solution

The solution would be to remove the extra goroutine, just like what was done in the group consumer: https://github.com/beatlabs/patron/blob/726c0299dcf9f99b14357429e54b7eca89317c7c/component/async/kafka/group/group.go#L171:L177
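The fix amounts to processing each message inline in the consuming loop instead of spawning a goroutine per message; a minimal stand-alone sketch, where a channel stands in for the sarama partition consumer:

```go
package main

import "fmt"

// consumeInOrder drains messages and processes them inline. Because no
// per-message goroutine is spawned, delivery order is preserved.
func consumeInOrder(messages <-chan string, process func(string)) {
	for msg := range messages {
		process(msg) // inline: ordering guaranteed
	}
}

func main() {
	ch := make(chan string, 3)
	for _, m := range []string{"first", "second", "third"} {
		ch <- m
	}
	close(ch)

	var got []string
	consumeInOrder(ch, func(m string) { got = append(got, m) })
	fmt.Println(got) // → [first second third]
}
```

With a `go process(msg)` inside the loop, the scheduler is free to run the goroutines in any order, which is exactly the failure the issue describes.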

closed time in 6 days

teivah

push event beatlabs/patron

Teiva Harsanyi

commit sha 2001975f5c411f3168fd92224769e1f234e1e007

Ordering is Not Guaranteed with a Simple Kafka Consumer (#229)


push time in 6 days

pull request comment beatlabs/patron

Consume Kafka messages since a given duration

@teivah CI is failing

@mantzas A good example of #228

Not equal: 
  expected: []string{"2020-06-29T10:10:14Z", "2020-06-29T11:10:14Z", "2020-06-29T12:10:14Z"}
  actual  : []string{"2020-06-29T12:10:14Z", "2020-06-29T10:10:14Z", "2020-06-29T11:10:14Z"}

@teivah so we should probably merge #228 first, correct?

teivah

comment created time in 7 days

pull request comment beatlabs/patron

Consume Kafka messages since a given duration

@teivah CI is failing

teivah

comment created time in 7 days

delete branch mantzas/sdk-go

delete branch : 546-bump-sarama-dependency

delete time in 7 days

PR opened cloudevents/sdk-go

Bumped sarama to v1.25.0 and updated samples

Closes #546.

  • Bumped go mod sarama dependency to v1.25.0
  • Updated the Kafka sample dependencies

Signed-off-by: Sotirios Mantziaris smantziaris@gmail.com

+66 -10

0 comments

4 changed files

pr created time in 7 days

create branch mantzas/sdk-go

branch : 546-bump-sarama-dependency

created branch time in 7 days

issue comment cloudevents/sdk-go

Old Sarama Dependency

Maybe not exactly the latest, but the one before that. 1.26 seemed to have some troubles. I could also do this for you if you like!

mantzas

comment created time in 7 days

issue opened cloudevents/sdk-go

Kafka Partition Key set up

Hi, it would be nice if we could set the partition key on an event. This way we can use Kafka's ability to always send the same key to the same partition and keep the ordering of events intact. Thanks
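What keeps the ordering intact is the hash partitioner: messages with the same key always land on the same partition. A minimal sketch of that behaviour using FNV-1a hashing, as sarama's default partitioner does (the helper name is illustrative, not part of any SDK):

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// partitionFor maps a message key to a partition the way a hash
// partitioner would: same key, same partition, hence stable ordering
// per key. Hypothetical helper for illustration only.
func partitionFor(key string, numPartitions int) int {
	h := fnv.New32a()
	h.Write([]byte(key))
	return int(h.Sum32()) % numPartitions
}

func main() {
	// The same key always resolves to the same partition.
	fmt.Println(partitionFor("user-42", 8) == partitionFor("user-42", 8)) // → true
}
```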

created time in 7 days

issue opened cloudevents/sdk-go

Old Sarama Dependency

Hi,

I was wondering if there is any reason for sarama to be in version 1.19.0 (2018-09-27)?

created time in 8 days

Pull request review comment beatlabs/patron

Consume Kafka messages since a given duration

+package simple
+
+import (
+	"context"
+	"errors"
+	"fmt"
+	"time"
+
+	"github.com/beatlabs/patron/component/async/kafka"
+	"github.com/beatlabs/patron/log"
+)
+
+type durationClient struct {
+	client durationKafkaClientAPI
+}
+
+func newDurationClient(client durationKafkaClientAPI) (durationClient, error) {
+	if client == nil {
+		return durationClient{}, errors.New("empty client api")
+	}
+	return durationClient{client: client}, nil
+}
+
+func (d durationClient) getTimeBasedOffsetsPerPartition(ctx context.Context, topic string, since time.Time, timeExtractor kafka.TimeExtractor) (map[int32]int64, error) {
+	partitionIDs, err := d.client.getPartitionIDs(topic)
+	if err != nil {
+		return nil, err
+	}
+
+	responseCh := make(chan partitionOffsetResponse, len(partitionIDs))
+	d.triggerWorkers(ctx, topic, since, timeExtractor, partitionIDs, responseCh)
+	return d.aggregateResponses(ctx, partitionIDs, responseCh)
+}
+
+type partitionOffsetResponse struct {
+	partitionID int32
+	offset      int64
+	err         error
+}
+
+func (d durationClient) triggerWorkers(ctx context.Context, topic string, since time.Time, timeExtractor kafka.TimeExtractor, partitionIDs []int32, responseCh chan<- partitionOffsetResponse) {
+	for _, partitionID := range partitionIDs {
+		partitionID := partitionID
+		go func() {
+			offset, err := d.getTimeBasedOffset(ctx, topic, since, partitionID, timeExtractor)
+			select {
+			case <-ctx.Done():
+				return
+			case responseCh <- partitionOffsetResponse{
+				partitionID: partitionID,
+				offset:      offset,
+				err:         err,
+			}:

Yeah, correct. I didn't see it.

teivah

comment created time in 8 days

Pull request review comment beatlabs/patron

Consume Kafka messages since a given duration

+package simple
+
+import (
+	"context"
+	"errors"
+	"fmt"
+	"time"
+
+	"github.com/beatlabs/patron/component/async/kafka"
+	"github.com/beatlabs/patron/log"
+)
+
+type durationClient struct {
+	client durationKafkaClientAPI
+}
+
+func newDurationClient(client durationKafkaClientAPI) (durationClient, error) {
+	if client == nil {
+		return durationClient{}, errors.New("empty client api")
+	}
+	return durationClient{client: client}, nil
+}
+
+func (d durationClient) getTimeBasedOffsetsPerPartition(ctx context.Context, topic string, since time.Time, timeExtractor kafka.TimeExtractor) (map[int32]int64, error) {
+	partitionIDs, err := d.client.getPartitionIDs(topic)
+	if err != nil {
+		return nil, err
+	}
+
+	responseCh := make(chan partitionOffsetResponse, len(partitionIDs))
+	d.triggerWorkers(ctx, topic, since, timeExtractor, partitionIDs, responseCh)
+	return d.aggregateResponses(ctx, partitionIDs, responseCh)
+}
+
+type partitionOffsetResponse struct {
+	partitionID int32
+	offset      int64
+	err         error
+}
+
+func (d durationClient) triggerWorkers(ctx context.Context, topic string, since time.Time, timeExtractor kafka.TimeExtractor, partitionIDs []int32, responseCh chan<- partitionOffsetResponse) {
+	for _, partitionID := range partitionIDs {
+		partitionID := partitionID
+		go func() {
+			offset, err := d.getTimeBasedOffset(ctx, topic, since, partitionID, timeExtractor)
+			select {
+			case <-ctx.Done():
+				return
+			case responseCh <- partitionOffsetResponse{
+				partitionID: partitionID,
+				offset:      offset,
+				err:         err,
+			}:

Am I missing something?

			}
teivah

comment created time in 10 days

Pull request review comment beatlabs/patron

Consume Kafka messages since a given duration

 func TestDecoderJSON(t *testing.T) {
 		reflect.ValueOf(c.DecoderFunc).Pointer(),
 	)
 }
+
+func TestWithDurationOffset(t *testing.T) {

The test should now be table/map driven and also handle the failure scenarios.

teivah

comment created time in 10 days

Pull request review comment beatlabs/patron

Consume Kafka messages since a given duration

+package simple
+
+import (
+	"context"
+	"errors"
+	"fmt"
+	"time"
+
+	"github.com/beatlabs/patron/component/async/kafka"
+	"github.com/beatlabs/patron/log"
+)
+
+type durationClient struct {
+	client durationKafkaClientAPI
+}
+
+func newDurationClient(client durationKafkaClientAPI) (durationClient, error) {
+	if client == nil {
+		return durationClient{}, errors.New("empty client api")
+	}
+	return durationClient{client: client}, nil
+}
+
+func (d durationClient) getTimeBasedOffsetsPerPartition(ctx context.Context, topic string, since time.Time, timeExtractor kafka.TimeExtractor) (map[int32]int64, error) {
+	partitionIDs, err := d.client.getPartitionIDs(topic)
+	if err != nil {
+		return nil, err
+	}
+
+	responseCh := make(chan partitionOffsetResponse, 1)

in order to not block at all

	responseCh := make(chan partitionOffsetResponse, len(partitionIDs))
teivah

comment created time in 11 days
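The pattern under review — one goroutine per partition writing into a response channel buffered to len(partitionIDs), so that no worker ever blocks on send — can be sketched stand-alone (the lookup function fabricates offsets; the real code asks the Kafka client per partition):

```go
package main

import (
	"fmt"
	"sort"
)

type partitionOffsetResponse struct {
	partitionID int32
	offset      int64
	err         error
}

// offsetsPerPartition fans out one worker per partition and aggregates
// the results. The channel is buffered to len(partitionIDs) so workers
// never block on send, even if the reader bails out early on an error.
func offsetsPerPartition(partitionIDs []int32, lookup func(int32) (int64, error)) (map[int32]int64, error) {
	responseCh := make(chan partitionOffsetResponse, len(partitionIDs))
	for _, id := range partitionIDs {
		id := id // capture loop variable for the goroutine
		go func() {
			offset, err := lookup(id)
			responseCh <- partitionOffsetResponse{partitionID: id, offset: offset, err: err}
		}()
	}

	offsets := make(map[int32]int64, len(partitionIDs))
	for range partitionIDs {
		resp := <-responseCh
		if resp.err != nil {
			return nil, resp.err
		}
		offsets[resp.partitionID] = resp.offset
	}
	return offsets, nil
}

func main() {
	offsets, err := offsetsPerPartition([]int32{0, 1, 2}, func(id int32) (int64, error) {
		return int64(id) * 100, nil // fabricated offsets
	})
	if err != nil {
		panic(err)
	}
	keys := make([]int, 0, len(offsets))
	for k := range offsets {
		keys = append(keys, int(k))
	}
	sort.Ints(keys)
	for _, k := range keys {
		fmt.Println(k, offsets[int32(k)])
	}
}
```

With a buffer of 1 instead, all but one worker would block until read, and any worker still blocked when the reader returns early would leak.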

Pull request review comment beatlabs/patron

Consume Kafka messages since a given duration

 func DecoderJSON() OptionFunc {
 		return nil
 	}
 }
+
+// WithDurationOffset allows creating a consumer from a given duration.
+// It accepts a function indicating how to extract the time from a Kafka message.
+func WithDurationOffset(since time.Duration, timeExtractor TimeExtractor) OptionFunc {

We should check that the values provided are valid.

teivah

comment created time in 11 days

issue comment beatlabs/patron

HTTP custom encoding headers and encoders/decoders

@tpaschalis The idea here was to be able to provide a content-type/accept header along with specific encoders/decoders, e.g. application/xml for the content-type and accept headers together with XML encoders and decoders. With this approach somebody could support any format, and we don't need to fall back to the raw HTTP handlers and do everything by hand. Currently we support JSON and Protobuf. Makes sense?

mantzas

comment created time in 12 days

started golangci/golangci-lint

started time in 13 days

issue comment cloudevents/sdk-go

v2 Release dependency issue

@n3wscott Hey, sorry for reopening this but it seems to be still broken. Any ETA on a release?

mantzas

comment created time in 13 days

push event beatlabs/patron

Paschalis Tsilias

commit sha d51c80b45417c5aa83439f5bc8a32f0e6f36ef28

Improve URL parameter handling for RawRoutes (#225)

Signed-off-by: Paschalis Tsilias <p.tsilias@thebeat.co>


Sotirios Mantziaris

commit sha ea17494581ea0674d8822940d34523fa438c020d

Merge branch 'master' into sns-sqs-integration-tests


push time in 14 days

push event beatlabs/patron

Paschalis Tsilias

commit sha d51c80b45417c5aa83439f5bc8a32f0e6f36ef28

Improve URL parameter handling for RawRoutes (#225)

Signed-off-by: Paschalis Tsilias <p.tsilias@thebeat.co>


push time in 14 days

PR merged beatlabs/patron

Improve URL parameter handling for RawRoutes

Fixes #224 Improve URL parameter handling for RawRoutes

Which problem is this PR solving?

This PR provides an easy and transparent way for users to retrieve dynamic URL parameters when using RawRoutes.

Short description of the changes

This might seem the laziest/easiest/quickest solution, but I feel it might be the correct one.

  • Provides a standard way for users to retrieve dynamic URL parameters.
  • In case we might use another router in the future (e.g. gorilla/mux or go-chi/chi), we would only have to change the unexported implementation and use the framework-specific way to retrieve these parameters (e.g. mux.Vars(r) or chi.URLParam(r, "*")).
    I expect that all routers will store their URL parameters in the matched request object.
  • It leaves us room to perform extra checks in the future if we wish (e.g. disable exposing parameters of requests beginning with /internal, or fields like /user/:password).
+25 -2

1 comment

2 changed files

tpaschalis

pr closed time in 14 days

issue closed beatlabs/patron

Improve URL parameter handling for RawRoutes

Is your feature request related to a problem? Please describe

I think that there isn't a straightforward way for RawRoutes to capture dynamic URL parameters, such as /user/:id/profile. The URL is matched correctly via our router, as both normal and raw routes are handled similarly.

The easiest way to do this now involves adding an explicit dependency on julienschmidt/httprouter, and using params := httprouter.ParamsFromContext(r.Context()).

This kind of defeats the purpose of 'hiding' our router, and making it pluggable.

<!-- REQUIRED A clear and concise description of what the problem is. -->

Is this a bug? Please provide steps to reproduce, a failing test etc.

Not a bug, more of a feature request.

Describe the solution

An extracted, wrapper function that works like extractParams could be added, allowing people that use RawRoutes to access the dynamic URL parameters without resorting to regular expressions or strings.Split(r.URL.Path, "/")

Additional context

I could take a look over the weekend and see if we can come up with a non-intrusive solution.

closed time in 14 days

tpaschalis

Pull request review comment beatlabs/patron

Improve URL parameter handling for RawRoutes

 func extractParams(r *http.Request) map[string]string {
 	}
 	return p
 }
+
+// ExtractParams extracts dynamic URL parameters using httprouter's functionality
+func ExtractParams(r *http.Request) map[string]string {

As far as I can see, the signatures are the same so there is no real value in this.

tpaschalis

comment created time in 14 days

Pull request review comment beatlabs/patron

Improve URL parameter handling for RawRoutes

 func extractParams(r *http.Request) map[string]string {
 	}
 	return p
 }
+
+// ExtractParams extracts dynamic URL parameters using httprouter's functionality
+func ExtractParams(r *http.Request) map[string]string {

Why not directly export the unexported one?

tpaschalis

comment created time in 15 days

issue comment beatlabs/patron

Improve URL parameter handling for RawRoutes

Hey, nice catch... The problem here is that every router handles the dynamic URL parts a little bit differently. Would exporting the extract function work? Since we are using the specific http router we need to use the specific extractor... Just a thought.

tpaschalis

comment created time in 17 days

created tag beatlabs/harvester

tag v0.6.2

Harvest configuration, watch and notify subscriber

created time in 18 days

release beatlabs/harvester

v0.6.2

released time in 18 days

push event beatlabs/harvester

Giuseppe

commit sha 684d4c5f24da2f1419e8b1cad369521111855394

Add support for Debugf (#51)

view details

push time in 18 days

PR merged beatlabs/harvester

Add support for Debugf

Which problem is this PR solving?

Some logging could perhaps be reported at DEBUG level; this PR adds such a method and uses it in one instance.

There might be more places where this could be used.

Short description of the changes

Added a logging method Debugf

+25 -8

3 comments

3 changed files

gmazzotta

pr closed time in 18 days
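Adding a level like Debugf to a pluggable logger typically just extends the logging interface; a minimal sketch (the interface and format below are illustrative, not harvester's exact Logger):

```go
package main

import "fmt"

// Logger is an illustrative pluggable logging interface; the PR adds a
// Debugf method alongside the existing levels.
type Logger interface {
	Infof(format string, args ...interface{})
	Debugf(format string, args ...interface{})
}

// formatLine prefixes a formatted message with its level.
func formatLine(level, format string, args ...interface{}) string {
	return fmt.Sprintf(level+": "+format, args...)
}

type stdLogger struct{}

func (stdLogger) Infof(format string, args ...interface{}) {
	fmt.Println(formatLine("INFO", format, args...))
}

func (stdLogger) Debugf(format string, args ...interface{}) {
	fmt.Println(formatLine("DEBUG", format, args...))
}

func main() {
	var l Logger = stdLogger{}
	l.Debugf("value for key %s changed", "redis/host")
}
```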

pull request comment beatlabs/harvester

Add support for Debugf

@gmazzotta we also remove fmtcheck when running unit tests locally... why not run it?

gmazzotta

comment created time in 19 days

issue comment virt-manager/virt-manager

ModuleNotFoundError: No module named 'libxml2' on Ubuntu 20.04

@Antique already done... https://bugs.launchpad.net/ubuntu/+source/virt-manager/+bug/1883565

theeomm

comment created time in 21 days

issue comment virt-manager/virt-manager

ModuleNotFoundError: No module named 'libxml2' on Ubuntu 20.04

Hey, unfortunately it did not work for me... I am pretty much blocked and have to install VirtualBox for now until this is resolved... :(

theeomm

comment created time in 22 days

issue closed beatlabs/patron

Go 1.14.1 breaks profiling tests

Is your feature request related to a problem? Please describe

After upgrading to Go 1.14.1 the test patron/component/http/profiling_test.go:47 breaks with the following panic.

`2020/03/22 10:35:54 http: panic serving 127.0.0.1:46438: runtime error: index out of range [-1] goroutine 145 [running]: net/http.(*conn).serve.func1(0xc0002763c0) /usr/local/go/src/net/http/server.go:1772 +0x147 panic(0xc07c80, 0xc000536440) /usr/local/go/src/runtime/panic.go:973 +0x3e3 runtime/pprof.runtime_expandFinalInlineFrame(0xc00036e000, 0x0, 0x20, 0x0, 0x0, 0x0) /usr/local/go/src/runtime/symtab.go:156 +0x3aa runtime/pprof.(*profileBuilder).appendLocsForStack(0xc0000ff080, 0x0, 0x0, 0x0, 0xc00036e000, 0x0, 0x20, 0x2, 0xc00, 0x7f86ecf2da00) /usr/local/go/src/runtime/pprof/proto.go:389 +0x10c runtime/pprof.printCountProfile(0xd29240, 0xc0002421c0, 0x0, 0xc4eb40, 0xc, 0xd2e8c0, 0xc0000d8320, 0x181800c00029e410, 0x30) /usr/local/go/src/runtime/pprof/pprof.go:449 +0xbfc runtime/pprof.writeRuntimeProfile(0xd29240, 0xc0002421c0, 0x0, 0xc4eb40, 0xc, 0xc72758, 0x43498f, 0xc0003f40ff) /usr/local/go/src/runtime/pprof/pprof.go:702 +0x12d runtime/pprof.writeThreadCreate(0xd29240, 0xc0002421c0, 0x0, 0xc0003f40ff, 0xc0003f4000) /usr/local/go/src/runtime/pprof/pprof.go:643 +0x72 runtime/pprof.(*Profile).WriteTo(0x1155fc0, 0xd29240, 0xc0002421c0, 0x0, 0xc0002421c0, 0xc0000c01b0) /usr/local/go/src/runtime/pprof/pprof.go:329 +0x4fc net/http/pprof.handler.ServeHTTP(0xc4eb40, 0xc, 0xd32520, 0xc0002421c0, 0xc00029e400) /usr/local/go/src/net/http/pprof/pprof.go:248 +0x3b4 github.com/beatlabs/patron/component/http.profThreadcreate(0xd32520, 0xc0002421c0, 0xc00029e400) /home/mantzas/src/go/beat/patron/component/http/profiling.go:61 +0x99 net/http.HandlerFunc.ServeHTTP(0xc716b8, 0xd32520, 0xc0002421c0, 0xc00029e400) /usr/local/go/src/net/http/server.go:2012 +0x52 net/http.(*ServeMux).ServeHTTP(0xc000020780, 0xd32520, 0xc0002421c0, 0xc00029e400) /usr/local/go/src/net/http/server.go:2387 +0x289 net/http.serverHandler.ServeHTTP(0xc0001bc0e0, 0xd32520, 0xc0002421c0, 0xc00029e400) /usr/local/go/src/net/http/server.go:2807 +0xcf net/http.(*conn).serve(0xc0002763c0, 0xd33b60, 
0xc0000201c0) /usr/local/go/src/net/http/server.go:1895 +0x838 created by net/http.(*Server).Serve /usr/local/go/src/net/http/server.go:2933 +0x5b7 2020/03/22 10:35:54 http: panic serving 127.0.0.1:46440: runtime error: index out of range [-1] goroutine 258 [running]: net/http.(*conn).serve.func1(0xc000194000) /usr/local/go/src/net/http/server.go:1772 +0x147 panic(0xc07c80, 0xc0002a4360) /usr/local/go/src/runtime/panic.go:973 +0x3e3 runtime/pprof.runtime_expandFinalInlineFrame(0xc0002e2000, 0x0, 0x20, 0x0, 0x0, 0x0) /usr/local/go/src/runtime/symtab.go:156 +0x3aa runtime/pprof.(*profileBuilder).appendLocsForStack(0xc0003809a0, 0x0, 0x0, 0x0, 0xc0002e2000, 0x0, 0x20, 0x2, 0xc00, 0x7f86ecf29400) /usr/local/go/src/runtime/pprof/proto.go:389 +0x10c runtime/pprof.printCountProfile(0xd29240, 0xc0001bc1c0, 0x0, 0xc4eb40, 0xc, 0xd2e8c0, 0xc00000e0c0, 0x181800c000192710, 0x30) /usr/local/go/src/runtime/pprof/pprof.go:449 +0xbfc runtime/pprof.writeRuntimeProfile(0xd29240, 0xc0001bc1c0, 0x0, 0xc4eb40, 0xc, 0xc72758, 0x43498f, 0xc0003f41ff) /usr/local/go/src/runtime/pprof/pprof.go:702 +0x12d runtime/pprof.writeThreadCreate(0xd29240, 0xc0001bc1c0, 0x0, 0xc0003f41ff, 0xc0003f4000) /usr/local/go/src/runtime/pprof/pprof.go:643 +0x72 runtime/pprof.(*Profile).WriteTo(0x1155fc0, 0xd29240, 0xc0001bc1c0, 0x0, 0xc0001bc1c0, 0xc000026420) /usr/local/go/src/runtime/pprof/pprof.go:329 +0x4fc net/http/pprof.handler.ServeHTTP(0xc4eb40, 0xc, 0xd32520, 0xc0001bc1c0, 0xc000192700) /usr/local/go/src/net/http/pprof/pprof.go:248 +0x3b4 github.com/beatlabs/patron/component/http.profThreadcreate(0xd32520, 0xc0001bc1c0, 0xc000192700) /home/mantzas/src/go/beat/patron/component/http/profiling.go:61 +0x99 net/http.HandlerFunc.ServeHTTP(0xc716b8, 0xd32520, 0xc0001bc1c0, 0xc000192700) /usr/local/go/src/net/http/server.go:2012 +0x52 net/http.(*ServeMux).ServeHTTP(0xc000020780, 0xd32520, 0xc0001bc1c0, 0xc000192700) /usr/local/go/src/net/http/server.go:2387 +0x289 
net/http.serverHandler.ServeHTTP(0xc0001bc0e0, 0xd32520, 0xc0001bc1c0, 0xc000192700) /usr/local/go/src/net/http/server.go:2807 +0xcf net/http.(*conn).serve(0xc000194000, 0xd33b60, 0xc000020440) /usr/local/go/src/net/http/server.go:1895 +0x838 created by net/http.(*Server).Serve /usr/local/go/src/net/http/server.go:2933 +0x5b7 --- FAIL: Test_PprofHandlers (1.18s) --- FAIL: Test_PprofHandlers/threadcreate (0.01s) profiling_test.go:46: Error Trace: profiling_test.go:46 Error: Received unexpected error: Get "http://127.0.0.1:35381/debug/pprof/threadcreate/": EOF Test: Test_PprofHandlers/threadcreate panic: runtime error: invalid memory address or nil pointer dereference [recovered] panic: runtime error: invalid memory address or nil pointer dereference [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0xb32f90]

goroutine 226 [running]: testing.tRunner.func1.1(0xbb06e0, 0x1153030) /usr/local/go/src/testing/testing.go:941 +0x5d0 testing.tRunner.func1(0xc0000f2360) /usr/local/go/src/testing/testing.go:944 +0x600 panic(0xbb06e0, 0x1153030) /usr/local/go/src/runtime/panic.go:973 +0x3e3 github.com/beatlabs/patron/component/http.Test_PprofHandlers.func1(0xc0000f2360) /home/mantzas/src/go/beat/patron/component/http/profiling_test.go:47 +0x200 testing.tRunner(0xc0000f2360, 0xc0000d8260) /usr/local/go/src/testing/testing.go:992 +0x1ec created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1043 +0x661`

Is this a bug? Please provide steps to reproduce, a failing test etc.

It seems to be a bug because the only thing that patron does is to route the request to the underlying pprof handler.

closed time in 23 days

mantzas

issue comment beatlabs/patron

Go 1.14.1 breaks profiling tests

Seems to be fixed now

mantzas

comment created time in 23 days

started sger/RustBooks

started time in 25 days

issue comment beatlabs/patron

Add optional gzip compression for HTTP responses

Sounds like a nice idea for a middleware maybe?

superstas

comment created time in 25 days

issue comment beatlabs/patron

Add support for kafka transactions

I agree. Even if we cannot do transactions due to a sarama limitation, we can always send the same message multiple times, assuming that the consumers are idempotent (and they should be, if you ask me). If the topic is log compacted it will also work, since the last message is (eventually) the surviving one. There was also the notion of de-duplication on the producer side, but I am not sure about this topic...

Yexo

comment created time in a month

issue comment beatlabs/patron

Add support for kafka transactions

@Yexo we probably have to take a deeper look into sarama to determine whether it is supported, but AFAIK it is not... There is a function in the producer that allows sending multiple messages, but I do not believe it is transactional...

	// SendMessages produces a given set of messages, and returns only when all
	// messages in the set have either succeeded or failed. Note that messages
	// can succeed and fail individually; if some succeed and some fail,
	// SendMessages will return an error.
	SendMessages(msgs []*ProducerMessage) error
Yexo

comment created time in a month

issue comment beatlabs/patron

Add support for kafka transactions

@Yexo Hey, AFAIK sarama does not support Transactions. Has this changed?

Yexo

comment created time in a month

issue opened cloudevents/sdk-go

v2 Release dependency issue

After upgrading to the new v2.0.0 release the following error occurs.

go: github.com/cloudevents/sdk-go/v2/protocol/kafka_sarama@v1.0.0-RC5 requires github.com/cloudevents/sdk-go/v2@v2.0.0-00010101000000-000000000000: invalid version: unknown revision 000000000000

created time in a month

Pull request review comment beatlabs/patron

AWS SNS/SQS Integration Tests

+// +build integration
+
+package aws
+
+import (
+	"context"
+	"encoding/json"
+	"testing"
+
+	"github.com/aws/aws-sdk-go/service/sqs/sqsiface"
+	patronSQS "github.com/beatlabs/patron/client/sqs"
+	sqsConsumer "github.com/beatlabs/patron/component/async/sqs"
+	"github.com/beatlabs/patron/correlation"
+	opentracing "github.com/opentracing/opentracing-go"
+	"github.com/opentracing/opentracing-go/ext"
+	"github.com/opentracing/opentracing-go/mocktracer"
+	"github.com/stretchr/testify/assert"
+	"github.com/stretchr/testify/require"
+)
+
+type message struct {
+	ID string `json:"id"`
+}
+
+func Test_SQS_Consume(t *testing.T) {
+	const queueName = "test-sqs-consume"
+	const correlationID = "123"
+
+	api, err := createSQSAPI(runtime.getSQSEndpoint())
+	require.NoError(t, err)
+	queue, err := createSQSQueue(api, queueName)
+	require.NoError(t, err)
+
+	sent := sendMessage(t, api, correlationID, queue, "1", "2", "3")
+
+	mtr := mocktracer.New()
+	defer mtr.Reset()
+	opentracing.SetGlobalTracer(mtr)
+
+	factory, err := sqsConsumer.NewFactory(api, queueName)
+	require.NoError(t, err)
+	cns, err := factory.Create()
+	require.NoError(t, err)
+	ch, chErr, err := cns.Consume(context.Background())
+	require.NoError(t, err)
+
+	count := 0
+
+	chReceived := make(chan []*message)
+
+	go func() {
+		received := make([]*message, 0, len(sent))
+
+		for {
+			select {

Could make sense... I am also using the CLI argument for the timeout... maybe we do not need to define another one?

mantzas

comment created time in a month

Pull request review comment beatlabs/patron

AWS SNS/SQS Integration Tests

+// +build integration
+
+package aws
+
+import (
+	"context"
+	"encoding/json"
+	"testing"
+
+	"github.com/aws/aws-sdk-go/service/sqs/sqsiface"
+	patronSQS "github.com/beatlabs/patron/client/sqs"
+	sqsConsumer "github.com/beatlabs/patron/component/async/sqs"
+	"github.com/beatlabs/patron/correlation"
+	opentracing "github.com/opentracing/opentracing-go"
+	"github.com/opentracing/opentracing-go/ext"
+	"github.com/opentracing/opentracing-go/mocktracer"
+	"github.com/stretchr/testify/assert"
+	"github.com/stretchr/testify/require"
+)
+
+type message struct {
+	ID string `json:"id"`
+}
+
+func Test_SQS_Consume(t *testing.T) {
+	const queueName = "test-sqs-consume"
+	const correlationID = "123"
+
+	api, err := createSQSAPI(runtime.getSQSEndpoint())
+	require.NoError(t, err)
+	queue, err := createSQSQueue(api, queueName)
+	require.NoError(t, err)
+
+	sent := sendMessage(t, api, correlationID, queue, "1", "2", "3")
+
+	mtr := mocktracer.New()
+	defer mtr.Reset()
+	opentracing.SetGlobalTracer(mtr)
+
+	factory, err := sqsConsumer.NewFactory(api, queueName)
+	require.NoError(t, err)
+	cns, err := factory.Create()
+	require.NoError(t, err)
+	ch, chErr, err := cns.Consume(context.Background())
+	require.NoError(t, err)
+
+	count := 0
+
+	chReceived := make(chan []*message)
+
+	go func() {
+		received := make([]*message, 0, len(sent))
+
+		for {
+			select {

besides the timeouts of the tests?

mantzas

comment created time in a month

push event beatlabs/patron

Sotirios Mantziaris

commit sha c65fa5333bddab65f04012ae4993c9dd6c101205

Removed docker-compose from GitHub actions

Signed-off-by: Sotirios Mantziaris <smantziaris@gmail.com>


push time in a month

PR opened beatlabs/patron

AWS SNS/SQS Integration Tests

<!-- Thanks for taking the time to make a PR.

Before creating a pull request, please make sure:

  • Your PR solves one problem for which an issue exists and a solution has been discussed
  • You have read the guide for contributing
    • See https://github.com/beatlabs/patron/blob/master/CONTRIBUTING.md
  • You signed all your commits (otherwise we won't be able to merge the PR)
    • See https://github.com/beatlabs/patron/blob/master/CONTRIBUTING.md#sign-your-work
  • You added unit tests for the new functionality
  • You mention in the PR description which issue it is addressing, e.g. "Resolves #123" -->

Which problem is this PR solving?

Closes #214. <!-- REQUIRED -->

Short description of the changes

  • Deleted old integration tests
  • Introduced dockertest based tests
  • Removed docker-compose file

<!-- REQUIRED -->

+517 -346

0 comments

11 changed files

pr created time in a month

PR closed mantzas/patron

AWS SNS/SQS Integration Tests

<!-- Thanks for taking the time to make a PR.

Before creating a pull request, please make sure:

  • Your PR solves one problem for which an issue exists and a solution has been discussed
  • You have read the guide for contributing
    • See https://github.com/beatlabs/patron/blob/master/CONTRIBUTING.md
  • You signed all your commits (otherwise we won't be able to merge the PR)
    • See https://github.com/beatlabs/patron/blob/master/CONTRIBUTING.md#sign-your-work
  • You added unit tests for the new functionality
  • You mention in the PR description which issue it is addressing, e.g. "Resolves #123" -->

Which problem is this PR solving?

Closes #214.

Short description of the changes

  • Deleted old integration tests
  • Introduced dockertest based tests
  • Removed docker-compose file
+608366 -20800

0 comment

1976 changed files

mantzas

pr closed time in a month

push event beatlabs/patron

Sotirios Mantziaris

commit sha 7bf46d2add12125e0d889491fcd39ab32fade060

Added SQS consumer integration tests Signed-off-by: Sotirios Mantziaris <smantziaris@gmail.com>

view details

push time in a month

push event beatlabs/patron

Sotirios Mantziaris

commit sha d85b19c78065870350ff297211eb581f6eb9da14

Codefactor fixes Signed-off-by: Sotirios Mantziaris <smantziaris@gmail.com>

view details

push time in a month

PR opened mantzas/patron

AWS SNS/SQS Integration Tests

Which problem is this PR solving?

Closes #214.

Short description of the changes

  • Deleted old integration tests
  • Introduced dockertest based tests
  • Removed docker-compose file
+608277 -20800

0 comment

1976 changed files

pr created time in a month

create branch beatlabs/patron

branch : sns-sqs-integration-tests

created branch time in a month

push event beatlabs/patron

Giuseppe

commit sha 042c7024f08722aee7bce8626493002cfb99d6d2

Change logging output to os.Stderr (#213) Signed-off-by: Giuseppe Mazzotta <g.mazzotta@thebeat.co>

view details

push time in a month

PR merged beatlabs/patron

Reviewers
Change logging output to os.Stderr

Which problem is this PR solving?

~It is not safe to write directly to os.Stdout/os.Stderr in concurrent scenarios; writes can intermingle and corrupt log output. See also: https://stackoverflow.com/a/14694666~

It turns out it is safe instead, see:

  • https://github.com/golang/go/blob/go1.14.3/src/internal/poll/fd_unix.go#L255
  • https://github.com/golang/go/blob/go1.14.3/src/internal/poll/fd_windows.go#L684

(No issue created yet)

Short description of the changes

Switch logging output to os.Stderr; this is the usual file descriptor for log output, also used by Go's log package: https://golang.org/src/log/log.go?s=9636:9664#L76

+1 -1

1 comment

1 changed file

gm42

pr closed time in a month

push event gm42/patron

Sotirios Mantziaris

commit sha 2c6c55a481787e4663d3832d48dab82b19eced0f

Fixed broken tests (#218)

view details

Sotirios Mantziaris

commit sha ccf37d72d05e13649218e7935f54db0ca00ca7e5

Merge branch 'master' into zerolog/sync-writer

view details

push time in a month

created tag beatlabs/patron

tag v0.41.1

Microservice framework following best cloud practices with a focus on productivity.

created time in a month

release beatlabs/patron

v0.41.1

released time in a month

push event beatlabs/patron

Sotirios Mantziaris

commit sha 2c6c55a481787e4663d3832d48dab82b19eced0f

Fixed broken tests (#218)

view details

push time in a month

delete branch beatlabs/patron

delete branch : fix-broken-kafka-tests

delete time in a month

PR merged beatlabs/patron

Reviewers
Fixed broken Kafka tests

Which problem is this PR solving?

Kafka integration tests are broken.

Short description of the changes

Fixed broken tests.

+6 -10

1 comment

3 changed files

mantzas

pr closed time in a month

PR opened beatlabs/patron

Reviewers
Fixed broken Kafka tests

Which problem is this PR solving?

Kafka integration tests are broken.

Short description of the changes

Fixed broken tests.

+6 -10

0 comment

3 changed files

pr created time in a month

create branch beatlabs/patron

branch : fix-broken-kafka-tests

created branch time in a month

pull request comment beatlabs/patron

Add integration tests for async/amqp, trace/amqp

I think it's better if I remove the dependency on the docker-compose right now; it's not going to be much work since we have other working examples.

I'll get to it and ping you for a review.

Nice, please follow the same approach as we did in kafka and test client and consumer with the same docker container of rabbit...

tpaschalis

comment created time in a month

pull request comment beatlabs/patron

Add integration tests for async/amqp, trace/amqp

@tpaschalis Hi, since we moved to dockertest and handle the integration tests without the need for a docker-compose file, I just wonder: would it make sense to move this PR in the same direction, or should we merge it and open a new issue for this?

tpaschalis

comment created time in a month

created tag mantzas/mark

tag v0.9.0

tool for syncing your markdown documentation with Atlassian Confluence pages

created time in a month

release mantzas/mark

v0.9.0

released time in a month

Pull request review comment beatlabs/patron

Change logging output to os.Stderr

 func Create(lvl log.Level) log.FactoryFunc {
 	zerolog.LevelFieldName = "lvl"
 	zerolog.MessageFieldName = "msg"
 	zerolog.TimeFieldFormat = time.RFC3339Nano
-	zl := zerolog.New(os.Stdout).With().Timestamp().Logger().Hook(sourceHook{skip: 7})
+	zl := zerolog.New(zerolog.SyncWriter(os.Stderr)).With().Timestamp().Logger().Hook(sourceHook{skip: 7})

I agree with stderr also! Let's keep it

gm42

comment created time in a month

created tag beatlabs/patron

tag v0.41.0

Microservice framework following best cloud practices with a focus on productivity.

created time in a month

release beatlabs/patron

v0.41.0

released time in a month

push event drakos74/patron

Sotirios Mantziaris

commit sha 7ef6482fb3770fa26f0b2fe483ecebebc27e82e2

Kafka integration tests (#212) Signed-off-by: Sotirios Mantziaris <smantziaris@gmail.com>

view details

Sotirios Mantziaris

commit sha a6e1b61fefe87d03d843437087d7100a8cff6722

Merge branch 'master' into server-side-caching

view details

push time in a month

push event tpaschalis/patron

Sotirios Mantziaris

commit sha 7ef6482fb3770fa26f0b2fe483ecebebc27e82e2

Kafka integration tests (#212) Signed-off-by: Sotirios Mantziaris <smantziaris@gmail.com>

view details

Sotirios Mantziaris

commit sha 1a967cca35cd3a8d89f3392f5f079b09009fb941

Merge branch 'master' into extend-amqp-testsuite

view details

push time in a month

push event astrikos/patron

Sotirios Mantziaris

commit sha 7ef6482fb3770fa26f0b2fe483ecebebc27e82e2

Kafka integration tests (#212) Signed-off-by: Sotirios Mantziaris <smantziaris@gmail.com>

view details

Sotirios Mantziaris

commit sha 96a186b8ad35eaaf11bc6ee34cdf58afe5ba2baf

Merge branch 'master' into multi-value-http-headers

view details

push time in a month

push event gm42/patron

Sotirios Mantziaris

commit sha 7ef6482fb3770fa26f0b2fe483ecebebc27e82e2

Kafka integration tests (#212) Signed-off-by: Sotirios Mantziaris <smantziaris@gmail.com>

view details

Sotirios Mantziaris

commit sha 9383745071d6c002f356d3ab2617b519294f1edb

Merge branch 'master' into zerolog/sync-writer

view details

push time in a month

push event beatlabs/patron

Sotirios Mantziaris

commit sha 7ef6482fb3770fa26f0b2fe483ecebebc27e82e2

Kafka integration tests (#212) Signed-off-by: Sotirios Mantziaris <smantziaris@gmail.com>

view details

push time in a month

delete branch beatlabs/patron

delete branch : kafka-integration-test

delete time in a month

PR merged beatlabs/patron

Reviewers
Kafka integration tests

Which problem is this PR solving?

Closes #199.

Short description of the changes

  • Added Kafka dockertest
  • Refactored dockertest base
  • Removed MySQL from docker-compose
+957 -664

1 comment

15 changed files

mantzas

pr closed time in a month

issue closed beatlabs/patron

Introduce Kafka integration tests

Is your feature request related to a problem? Please describe

In order to conduct integration tests with Kafka we should introduce a real Kafka broker and replace the MockBroker implementation.

Describe the solution

Use dockertest to spin up Kafka, run the tests, and tear everything down.

closed time in a month

mantzas

Pull request review comment beatlabs/patron

Kafka integration tests

package kafka

import (
	"context"
	"fmt"
	"os"
	"strings"
	"testing"
	"time"

	"github.com/beatlabs/patron/component/async"

	"github.com/Shopify/sarama"
	"github.com/beatlabs/patron/log"
	patrondocker "github.com/beatlabs/patron/test/docker"
	"github.com/ory/dockertest"
	"github.com/ory/dockertest/docker"
)

const (
	kafkaHost     = "localhost"
	kafkaPort     = "9092"
	zookeeperPort = "2181"
)

func TestMain(m *testing.M) {
	topics := []string{
		getTopic(simpleTopic1),
		getTopic(simpleTopic2),
		getTopic(groupTopic1),
		getTopic(groupTopic2),
	}
	k, err := create(60*time.Second, topics...)
	if err != nil {
		fmt.Printf("could not create kafka runtime: %v\n", err)
		os.Exit(1)
	}
	err = k.setup()
	if err != nil {
		fmt.Printf("could not start containers: %v\n", err)
		os.Exit(1)
	}

	exitCode := m.Run()

	ee := k.Teardown()
	if len(ee) > 0 {
		for _, err := range ee {
			fmt.Printf("could not tear down containers: %v\n", err)
		}
	}

	os.Exit(exitCode)
}

type kafkaRuntime struct {
	patrondocker.Runtime
	topics []string
}

func create(expiration time.Duration, topics ...string) (*kafkaRuntime, error) {
	br, err := patrondocker.NewRuntime(expiration)
	if err != nil {
		return nil, fmt.Errorf("could not create base runtime: %w", err)
	}
	return &kafkaRuntime{topics: topics, Runtime: *br}, nil
}

func (k *kafkaRuntime) setup() error {
	var err error

	runOptions := &dockertest.RunOptions{Repository: "wurstmeister/zookeeper",
		PortBindings: map[docker.Port][]docker.PortBinding{
			docker.Port(fmt.Sprintf("%s/tcp", zookeeperPort)): {{HostIP: "", HostPort: zookeeperPort}},
			// port 22 is too generic to be used for the test
			"29/tcp":   {{HostIP: "", HostPort: "22"}},
			"2888/tcp": {{HostIP: "", HostPort: "2888"}},
			"3888/tcp": {{HostIP: "", HostPort: "3888"}},
		},
	}
	zookeeper, err := k.RunWithOptions(runOptions)
	if err != nil {
		return fmt.Errorf("could not start zookeeper: %w", err)
	}

	ip := zookeeper.Container.NetworkSettings.Networks["bridge"].IPAddress

	kafkaTCPPort := fmt.Sprintf("%s/tcp", kafkaPort)

	runOptions = &dockertest.RunOptions{
		Repository: "wurstmeister/kafka",
		Tag:        "2.12-2.5.0",
		PortBindings: map[docker.Port][]docker.PortBinding{
			docker.Port(kafkaTCPPort): {{HostIP: "", HostPort: kafkaPort}},
		},
		ExposedPorts: []string{kafkaTCPPort},
		Mounts:       []string{"/tmp/local-kafka:/etc/kafka"},
		Env: []string{
			"KAFKA_ADVERTISED_HOST_NAME=127.0.0.1",
			fmt.Sprintf("KAFKA_CREATE_TOPICS=%s", strings.Join(k.topics, ",")),
			fmt.Sprintf("KAFKA_ZOOKEEPER_CONNECT=%s:%s", ip, zookeeperPort),
		}}

	_, err = k.RunWithOptions(runOptions)
	if err != nil {
		return fmt.Errorf("could not start kafka: %w", err)
	}

	return k.Pool().Retry(func() error {
		consumer, err := NewConsumer()
		if err != nil {
			return err
		}
		topics, err := consumer.Topics()
		if err != nil {
			log.Infof("error during topic retrieval = %v", err)
			return err
		}

		return validateTopics(topics, k.topics)
	})
}

func validateTopics(clusterTopics, wantTopics []string) error {
	var found int
	for _, wantTopic := range wantTopics {
		topic := strings.Split(wantTopic, ":")
		for _, clusterTopic := range clusterTopics {
			if topic[0] == clusterTopic {
				found++
			}
		}
	}

	if found != len(wantTopics) {
		return fmt.Errorf("failed to find topics %v in cluster topics %v", wantTopics, clusterTopics)
	}

	return nil
}

// NewProducer creates a new sync producer.
func NewProducer() (sarama.SyncProducer, error) {
	config := sarama.NewConfig()
	config.Producer.Return.Successes = true
	config.Producer.Return.Errors = true

	brokers := []string{fmt.Sprintf("%s:%s", kafkaHost, kafkaPort)}

	return sarama.NewSyncProducer(brokers, config)
}

// NewConsumer creates a new consumer.
func NewConsumer() (sarama.Consumer, error) {
	config := sarama.NewConfig()
	config.Consumer.Return.Errors = true

	return sarama.NewConsumer(Brokers(), config)
}

// Brokers returns a list of brokers.
func Brokers() []string {
	return []string{fmt.Sprintf("%s:%s", kafkaHost, kafkaPort)}
}

func getTopic(name string) string {
	return fmt.Sprintf("%s:1:1", name)
}

We could in order to make sure that it works correctly... Care to open one?

mantzas

comment created time in a month

pull request comment beatlabs/patron

Add integration tests for async/amqp, trace/amqp

Sigh... It seems that flakiness issues are not ironed out yet. This time it's about the Kafka broker not being available.

What are your thoughts on this @mantzas? Should we just close this pull request for now, so I can start working on some other issues on the v1.0.0 roadmap, and revisit it further down the line? It's been open for like two months, so I'm sorry for that.

@tpaschalis this is not your fault... the PR https://github.com/beatlabs/patron/pull/212 will solve this problem as soon as it is merged...

tpaschalis

comment created time in a month

push event tpaschalis/patron

Kasper Bentsen

commit sha 726c0299dcf9f99b14357429e54b7eca89317c7c

Fix nil consumer group (#208)

view details

Teiva Harsanyi

commit sha e15c900df968e715adfdba9f75e82fc59053d806

Get access to message payload (#211)

view details

Giuseppe

commit sha 5bd08ad7cf8530ffaddae80c417b18c973882209

Add support for Kafka synchronous producer through builder (#197) Signed-off-by: Giuseppe Mazzotta <g.mazzotta@thebeat.co>

view details

Sotirios Mantziaris

commit sha a0978479ea122ada45c1cd948358716eadb1e6d1

Merge branch 'master' into extend-amqp-testsuite

view details

push time in a month

push event mantzas/mark

Sotirios Mantziaris

commit sha e55e6fb663550724852c3581c4635216b0bb1381

Fixed image reference replacement

view details

push time in a month

push event mantzas/mark

Sotirios Mantziaris

commit sha 3c7201a600a05667c1dec021b1d5a9ea70c0d2fc

Fixed image reference replacement

view details

push time in a month

created tag beatlabs/harvester

tag v0.6.1

Harvest configuration, watch and notify subscriber

created time in a month

release beatlabs/harvester

v0.6.1

released time in a month

delete branch beatlabs/harvester

delete branch : consul-log-error-fix

delete time in a month

push event beatlabs/harvester

Sotirios Mantziaris

commit sha 440d83bfa8dd0828ebd84eda80eb6056d7d6190d

Consul watcher error logs (#50) Signed-off-by: Sotirios Mantziaris <smantziaris@gmail.com>

view details

push time in a month

PR merged beatlabs/harvester

Consul watcher error logs

Which problem is this PR solving?

Closes #49.

Short description of the changes

Early exit when data is nil.

+6 -0

1 comment

1 changed file

mantzas

pr closed time in a month

issue closed beatlabs/harvester

Consul watcher error logs

<!-- Welcome to the harvester project.

  • Please search for existing issues to avoid creating duplicate bugs/feature requests.
  • Please be respectful and considerate of others when commenting on issues.
  • Please provide as much information as possible so we all understand the issue.

-->

Is your feature request related to a problem? Please describe

There are cases where harvester logs "data is not kv pair: <nil>" when watching for changes from Consul. We should check for nil and exit early there.
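The proposed guard can be sketched as below. The type and function names here are illustrative, not harvester's actual API; Consul's real KV type lives in github.com/hashicorp/consul/api:

```go
package main

import "fmt"

// KVPair mirrors the shape of Consul's api.KVPair for illustration only.
type KVPair struct {
	Key   string
	Value []byte
}

// handleChange is a hypothetical watch callback: when the watch fires with
// nil data, exit early instead of logging a spurious "data is not kv pair"
// error; otherwise assert the expected type and use the value.
func handleChange(data interface{}) (string, bool) {
	if data == nil {
		return "", false // early exit, nothing to process
	}
	pair, ok := data.(*KVPair)
	if !ok || pair == nil {
		return "", false
	}
	return string(pair.Value), true
}

func main() {
	if _, ok := handleChange(nil); !ok {
		fmt.Println("nil change ignored")
	}
	v, _ := handleChange(&KVPair{Key: "k", Value: []byte("v")})
	fmt.Println(v) // prints "v"
}
```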

closed time in a month

mantzas

push event beatlabs/harvester

Sotirios Mantziaris

commit sha 3a98b1734595ebe2ca0afe3203a7d7521cda8630

Early exit when consul value change is nil Signed-off-by: Sotirios Mantziaris <smantziaris@gmail.com>

view details

push time in a month

PR opened beatlabs/harvester

Reviewers
Consul watcher error logs

Which problem is this PR solving?

Closes #49.

Short description of the changes

Early exit when data is nil.

+3 -0

0 comment

1 changed file

pr created time in a month

create branch beatlabs/harvester

branch : consul-log-error-fix

created branch time in a month

issue opened beatlabs/harvester

Consul watcher error logs

Is your feature request related to a problem? Please describe

There are cases where harvester logs "data is not kv pair: <nil>" when watching for changes from Consul. We should check for nil and exit early there.

created time in a month

issue opened beatlabs/patron

Investigation into the feasibility of having a pluggable HTTP router

Is your feature request related to a problem? Please describe

We use httprouter hardcoded in our codebase, but it has limitations. We could investigate making the HTTP router pluggable, and then provide a way to let end users decide which router they want to use.

created time in a month

more