go-logr/logr 375

A simple logging interface for Go

go-logr/zapr 40

A logr implementation using Zap

go-logr/stdr 14

logr implementation against the stdlib log package

go-logr/glogr 9

An implementation of logr (Go logging) with glog

fork kiragoo/logr

A simple logging interface for Go

fork in 2 days

started go-logr/logr

started time in 2 days

started go-logr/logr

started time in 5 days

started go-logr/logr

started time in 7 days

started go-logr/zapr

started time in 10 days

started go-logr/logr

started time in 10 days

started go-logr/logr

started time in 11 days

started go-logr/logr

started time in 13 days

fork danmrichards/zapr

A logr implementation using Zap

fork in 13 days

started go-logr/logr

started time in 15 days

started go-logr/logr

started time in 15 days

started go-logr/logr

started time in 16 days

started go-logr/logr

started time in 17 days

started go-logr/logr

started time in 17 days

started go-logr/logr

started time in 18 days

push event go-logr/zapr

Noah Kantrowitz

commit sha 28f20ccb85032f2109bc148412d974ade000ae04

✏️ Minor typo, "suggared".

view details

Solly Ross

commit sha 9f3e0b1ce51bfe35435d3470ef41ba77ccc047eb

Merge pull request #27 from coderanger/patch-1 ✏️ Minor typo, "suggared".

view details

push time in 18 days

PR merged go-logr/zapr

✏️ Minor typo, "suggared".
+1 -1

0 comments

1 changed file

coderanger

pr closed time in 18 days

Pull request review comment go-logr/logr

WIP: logr performance ideas

 func (l fnlogger) caller() callerID {
 	// +1 for this frame, +1 for logr itself.
 	// FIXME: Maybe logr should offer a clue as to how many frames are
 	// needed here?  Or is it part of the contract to LogSinks?
-	_, file, line, ok := runtime.Caller(framesToCaller() + 2)
+	_, file, line, ok := runtime.Caller(framesToCaller() + l.depth + 2)

resolving this, discussing below
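(For illustration only: a minimal, self-contained sketch of the depth bookkeeping the diff above is about. fnlogger and the frame arithmetic mirror the diff; the wrapper type and everything else are assumed for the example.)

package main

import (
    "fmt"
    "runtime"
)

// fnlogger stands in for the sink in the diff: depth records how many
// extra call frames sit between the user's call site and caller().
type fnlogger struct {
    depth int
}

//go:noinline
func (l fnlogger) caller() string {
    // +1 for this frame, +1 for info() below, plus whatever wrappers added.
    _, file, line, ok := runtime.Caller(l.depth + 2)
    if !ok {
        return "<unknown>"
    }
    return fmt.Sprintf("%s:%d", file, line)
}

//go:noinline
func (l fnlogger) info(msg string) {
    fmt.Println(l.caller(), msg)
}

// wrapper is a middleware-style sink: it adds one real call frame, so it
// bumps depth by one to keep caller() pointing at the user's line.
type wrapper struct {
    inner fnlogger
}

//go:noinline
func (w wrapper) info(msg string) {
    l := w.inner
    l.depth++
    l.info(msg)
}

func main() {
    l := fnlogger{}
    l.info("direct")                      // resolves to this line
    wrapper{inner: l}.info("via wrapper") // also resolves to this line
}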

thockin

comment created time in 19 days

Pull request review comment go-logr/logr

WIP: logr performance ideas

 import (
 	"context"
 )
 
-// TODO: consider adding back in format strings if they're really needed
-// TODO: consider other bits of zap/zapcore functionality like ObjectMarshaller (for arbitrary objects)
-// TODO: consider other bits of glog functionality like Flush, OutputStats
-
-// Logger represents the ability to log messages, both errors and not.
-type Logger interface {
-	// Enabled tests whether this Logger is enabled.  For example, commandline
+// LogSink represents the ability to log messages, both errors and not.
+type LogSink interface {
+	// Enabled tests whether this LogSink is enabled.  For example, commandline
 	// flags might be used to set the logging verbosity and disable some info
 	// logs.
-	Enabled() bool
+	Enabled(level int) bool

Perhaps this LogContext can also contain the number of stack frames added by the log stack. This would allow the creation of middleware-like log sinks that delegate to other sinks, while keeping the call stack count correct.

This is what WithCallDepth() is for. I agree that assuming literal 1 in the LogSink is kind of a mess, though. Passing it to every call to Info() feels a bit silly - it's not going to change, like the levels. It almost feels like logr.CallFrameCount() (simple, ugly) or something like:

logr.New(sink) calls sink.Init(numCallFrames) (probably a struct), and the sink can choose to store that info if it needs it.
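(An illustrative sketch of that Init idea; none of these names are settled API, they just show the shape of "New tells the sink, once, how many frames the front-end adds".)

package logrsketch

// RuntimeInfo carries facts about how the front-end calls the sink, e.g.
// the number of call frames between the user and the LogSink.
type RuntimeInfo struct {
    CallDepth int
}

// LogSink is trimmed to the parts relevant to this discussion.
type LogSink interface {
    // Init is called once from New; sinks that resolve call sites can
    // store info.CallDepth, everyone else can ignore it.
    Init(info RuntimeInfo)
    Enabled(level int) bool
    Info(level int, msg string, keysAndValues ...interface{})
}

// Logger is the value-type front-end that wraps a LogSink.
type Logger struct {
    sink LogSink
}

// New wires a sink into a Logger and tells it how deep the front-end's
// own call frames go.
func New(sink LogSink) Logger {
    sink.Init(RuntimeInfo{CallDepth: 1}) // 1 frame: the Logger methods
    return Logger{sink: sink}
}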

thockin

comment created time in 19 days

Pull request review comment go-logr/logr

WIP: logr performance ideas

 func FromContext(ctx context.Context) Logger {
 		return v
 	}
 
-	return nil
+	//FIXME: what to do here?  Could switch to pointers, but yuck?

agree
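(The FIXME is about what FromContext can return once Logger is a value type and nil is off the table. A sketch of two options that avoid pointers; names and signatures are illustrative, not final.)

package logrsketch

import "context"

// Logger and Discard are minimal stand-ins so this sketch compiles on its
// own; the real Logger wraps a LogSink.
type Logger struct{}

func Discard() Logger { return Logger{} }

type contextKey struct{}

// NewContext stores a Logger for later retrieval.
func NewContext(ctx context.Context, l Logger) context.Context {
    return context.WithValue(ctx, contextKey{}, l)
}

// Option 1: report absence explicitly instead of returning nil.
func FromContext(ctx context.Context) (Logger, bool) {
    if v, ok := ctx.Value(contextKey{}).(Logger); ok {
        return v, true
    }
    return Logger{}, false
}

// Option 2: never fail; fall back to a no-op logger.
func FromContextOrDiscard(ctx context.Context) Logger {
    if v, ok := ctx.Value(contextKey{}).(Logger); ok {
        return v
    }
    return Discard()
}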

thockin

comment created time in 19 days

Pull request review comment go-logr/logr

WIP: logr performance ideas

 type Logger interface {
 	// triggered this log line, if present.
 	Error(err error, msg string, keysAndValues ...interface{})
 
-	// V returns an Logger value for a specific verbosity level, relative to
-	// this Logger.  In other words, V values are additive.  V higher verbosity
-	// level means a log message is less important.  It's illegal to pass a log
-	// level less than zero.
-	V(level int) Logger
-
 	// WithValues adds some key-value pairs of context to a logger.
 	// See Info for documentation on how key/value pairs work.
-	WithValues(keysAndValues ...interface{}) Logger
+	WithValues(keysAndValues ...interface{}) LogSink
 
 	// WithName adds a new element to the logger's name.
 	// Successive calls with WithName continue to append
 	// suffixes to the logger's name.  It's strongly recommended
 	// that name segments contain only letters, digits, and hyphens
 	// (see the package documentation for more information).
-	WithName(name string) Logger
+	WithName(name string) LogSink
+}
+
+func New(level int, sink LogSink) Logger {

Good point. Early drafts needed this, but not any more.
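(A sketch of where V() lands once it leaves the interface above: the value-type Logger does the additive arithmetic and the sink only ever sees a resolved level. Illustrative names only, not final API.)

package logrsketch

// LogSink is reduced to the two methods this sketch needs.
type LogSink interface {
    Enabled(level int) bool
    Info(level int, msg string, keysAndValues ...interface{})
}

// Logger is the value-type front-end; it owns the verbosity offset.
type Logger struct {
    sink  LogSink
    level int
}

// V stays additive, but it is plain arithmetic on the wrapper instead of
// something every sink has to reimplement.
func (l Logger) V(level int) Logger {
    if level < 0 {
        level = 0 // one way to honor "illegal to pass a level less than zero"
    }
    l.level += level
    return l
}

// Info hands the sink the already-resolved level.
func (l Logger) Info(msg string, keysAndValues ...interface{}) {
    if l.sink.Enabled(l.level) {
        l.sink.Info(l.level, msg, keysAndValues...)
    }
}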

thockin

comment created time in 19 days

pull request comment go-logr/logr

WIP: logr performance ideas

On compat: There are some things that are just awkward after this, like DiscardLogger (the type) no longer making sense (it's a LogSink). We could jump thru hoops to make things work (make DiscardLogger a struct that embeds a Logger) or we could just make this the one breaking change. MOST clients will be fine with a recompile, and a small number will hit edge cases.

If we do that, we can EOL InfoLogger and a few other things.
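(A sketch of the "hoops" option mentioned above: keep the old exported name compiling by embedding the new value-type Logger. Hypothetical shim, shown only to make the trade-off concrete; the alternative is simply to accept the one breaking change.)

package logrsketch

// Stand-ins repeated from the earlier sketches so this compiles on its own.
type Logger struct{}

func Discard() Logger { return Logger{} }

// DiscardLogger (the old exported type) kept alive as a compat shim that
// embeds the new value-type Logger, so code naming the type still builds.
type DiscardLogger struct {
    Logger
}

// NewDiscardLogger is a hypothetical constructor for old call sites.
func NewDiscardLogger() DiscardLogger {
    return DiscardLogger{Logger: Discard()}
}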

thockin

comment created time in 19 days

issue comment go-logr/logr

Consider run time performance

Forcing noinline in the benchmark makes it more representative.
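(What "forcing noinline" refers to: a //go:noinline directive on the benchmark helpers keeps the compiler from inlining the call chain and optimizing away work a real caller would pay for. The helper and benchmark names below sketch one way to do it in a _test.go file; the actual benchmark code may differ.)

package benchmark

import (
    "testing"

    "github.com/go-logr/logr"
)

// Without //go:noinline the compiler may inline this helper and reduce the
// loop body to almost nothing, which is not what real call sites see.
//go:noinline
func doInfoOneArg(b *testing.B, log logr.Logger) {
    for i := 0; i < b.N; i++ {
        log.Info("this is", "a", "string")
    }
}

func BenchmarkDiscardInfoOneArg(b *testing.B) {
    doInfoOneArg(b, logr.Discard())
}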

Before:

$ GO111MODULE=off go test -bench=. ./benchmark/
goos: linux
goarch: amd64
pkg: github.com/go-logr/logr/benchmark
cpu: Intel(R) Xeon(R) W-2135 CPU @ 3.70GHz
BenchmarkDiscardInfoOneArg-6        	25409384	        54.15 ns/op
BenchmarkDiscardInfoSeveralArgs-6   	 8511532	       146.8 ns/op
BenchmarkDiscardV0Info-6            	 8260953	       152.7 ns/op
BenchmarkDiscardV9Info-6            	 8753974	       147.9 ns/op
BenchmarkDiscardError-6             	 8291468	       146.7 ns/op
BenchmarkDiscardWithValues-6        	18875984	        67.67 ns/op
BenchmarkDiscardWithName-6          	528824234	         2.077 ns/op
BenchmarkFuncrInfoOneArg-6          	 2606715	       476.2 ns/op
BenchmarkFuncrInfoSeveralArgs-6     	  823752	      1302 ns/op
BenchmarkFuncrV0Info-6              	  915470	      1422 ns/op
BenchmarkFuncrV9Info-6              	 4300730	       277.6 ns/op
BenchmarkFuncrError-6               	  837588	      1369 ns/op
BenchmarkFuncrWithValues-6          	 4193628	       294.4 ns/op
BenchmarkFuncrWithName-6            	10532635	       106.4 ns/op

After:

$ GO111MODULE=off go test -bench=. ./benchmark/
goos: linux
goarch: amd64
pkg: github.com/go-logr/logr/benchmark
cpu: Intel(R) Xeon(R) W-2135 CPU @ 3.70GHz
BenchmarkDiscardInfoOneArg-6        	23285353	        52.58 ns/op
BenchmarkDiscardInfoSeveralArgs-6   	 8481985	       146.9 ns/op
BenchmarkDiscardV0Info-6            	 7851495	       144.8 ns/op
BenchmarkDiscardV9Info-6            	 8797778	       144.8 ns/op
BenchmarkDiscardError-6             	 7889066	       148.1 ns/op
BenchmarkDiscardWithValues-6        	19125424	        68.10 ns/op
BenchmarkDiscardWithName-6          	500619270	         2.165 ns/op
BenchmarkFuncrInfoOneArg-6          	 2841159	       490.9 ns/op
BenchmarkFuncrInfoSeveralArgs-6     	  887854	      1266 ns/op
BenchmarkFuncrV0Info-6              	  885891	      1221 ns/op
BenchmarkFuncrV9Info-6              	 7902538	       145.6 ns/op
BenchmarkFuncrError-6               	  852745	      1401 ns/op
BenchmarkFuncrWithValues-6          	 4401705	       295.1 ns/op
BenchmarkFuncrWithName-6            	11857100	       105.0 ns/op

Notably, BenchmarkFuncrV9Info was cut almost in half, and there's room for more optimization.

I will make a new push with this change.

thockin

comment created time in 20 days

issue comment go-logr/logr

Consider run time performance

Hmm, I think a big part of this is the benchmark benefiting from optimizations that probably are not realistic in the wild. I'll look into it more later.

thockin

comment created time in 20 days

issue comment go-logr/logr

Consider run time performance

Switching the benchmark to Discard():

Before:

BenchmarkInfoOneArg-6        	309439507	         3.628 ns/op
BenchmarkInfoSeveralArgs-6   	103684642	        11.17 ns/op
BenchmarkV0Info-6            	10411196	       115.8 ns/op
BenchmarkV9Info-6            	 9963486	       118.2 ns/op
BenchmarkError-6             	82650194	        12.14 ns/op
BenchmarkWithValues-6        	220570576	         5.585 ns/op
BenchmarkWithName-6          	723292029	         1.441 ns/op

After:

BenchmarkInfoOneArg-6        	28276839	        46.68 ns/op
BenchmarkInfoSeveralArgs-6   	 9974653	       114.4 ns/op
BenchmarkV0Info-6            	11343361	       118.7 ns/op
BenchmarkV9Info-6            	10160247	       117.6 ns/op
BenchmarkError-6             	10033260	       113.5 ns/op
BenchmarkWithValues-6        	21537408	        59.31 ns/op
BenchmarkWithName-6          	502713001	         2.189 ns/op
thockin

comment created time in 20 days