If you are wondering where the data of this site comes from, please visit https://api.github.com/users/wojas/events. GitMemory does not store any data, but only uses NGINX to cache data for a period of time. The idea behind GitMemory is simply to give users a better reading experience.

wojas/docker-mac-network 181

Access your Docker for Mac's internal networks from your macOS host machine

PowerDNS/pdns-builder 9

Infrastructure for creating Dockerfiles for package building

wojas/envy 8

Shell helper that automatically sets and unsets environment variables

wojas/account_bankimport 4

OpenERP account_bankimport fork with better MT940 support

wojas/django-pgrunner 4

Create and run an independent local PostgreSQL database for your Django project

wojas/genericr 3

Go generic implementation of the logr interface

hackerbeerstest/hackerbeerstest 1

Yep, this is only for testing purposes. No fancy code. Sorry.

mpdaugherty/python-github3 1

Python wrapper for GitHub API v3

Pull request review comment restic/rest-server

Split Server component and add support for subrepositories

package quota

import (
	"fmt"
	"io"
	"net/http"
	"os"
	"path/filepath"
	"strconv"
	"sync/atomic"
)

// New creates a new quota Manager for given path.
// It will tally the current disk usage before returning.
func New(path string, maxSize int64) (*Manager, error) {
	m := &Manager{
		path:        path,
		maxRepoSize: maxSize,
	}
	if err := m.updateSize(); err != nil {
		return nil, err
	}
	return m, nil
}

// Manager manages the repo quota for given filesystem root path, including subrepos
type Manager struct {
	path        string
	maxRepoSize int64
	repoSize    int64 // must be accessed using sync/atomic
}

// WrapWriter limits the number of bytes written
// to the space that is currently available as given by
// the server's MaxRepoSize. This type is safe for use
// by multiple goroutines sharing the same *Server.
type maxSizeWriter struct {
	io.Writer
	m *Manager
}

func (w maxSizeWriter) Write(p []byte) (n int, err error) {
	if int64(len(p)) > w.m.SpaceRemaining() {
		return 0, fmt.Errorf("repository has reached maximum size (%d bytes)", w.m.maxRepoSize)
	}
	n, err = w.Writer.Write(p)
	w.m.IncUsage(int64(n))
	return n, err
}

func (m *Manager) updateSize() error {
	// if we haven't yet computed the size of the repo, do so now
	initialSize, err := tallySize(m.path)
	if err != nil {
		return err
	}
	atomic.StoreInt64(&m.repoSize, initialSize)
	return nil
}

// WrapWriter wraps w in a writer that enforces s.MaxRepoSize.
// If there is an error, a status code and the error are returned.
func (m *Manager) WrapWriter(req *http.Request, w io.Writer) (io.Writer, int, error) {
	currentSize := atomic.LoadInt64(&m.repoSize)

	// if content-length is set and is trustworthy, we can save some time
	// and issue a polite error if it declares a size that's too big; since
	// we expect the vast majority of clients will be honest, so this check
	// can only help save time
	if contentLenStr := req.Header.Get("Content-Length"); contentLenStr != "" {
		contentLen, err := strconv.ParseInt(contentLenStr, 10, 64)
		if err != nil {
			return nil, http.StatusLengthRequired, err
		}
		if currentSize+contentLen > m.maxRepoSize {
			err := fmt.Errorf("incoming blob (%d bytes) would exceed maximum size of repository (%d bytes)",
				contentLen, m.maxRepoSize)
			return nil, http.StatusRequestEntityTooLarge, err
		}
	}

	// since we can't always trust content-length, we will wrap the writer
	// in a custom writer that enforces the size limit during writes
	return maxSizeWriter{Writer: w, m: m}, 0, nil
}

// SpaceRemaining returns how much space is available in the repo
// according to s.MaxRepoSize. s.repoSize must already be set.
// If there is no limit, -1 is returned.
func (m *Manager) SpaceRemaining() int64 {
	if m.maxRepoSize == 0 {
		return -1
	}
	maxSize := m.maxRepoSize
	currentSize := atomic.LoadInt64(&m.repoSize)
	return maxSize - currentSize
}

// SpaceUsed returns how much space is used in the repo.
func (m *Manager) SpaceUsed() int64 {
	return atomic.LoadInt64(&m.repoSize)
}

// IncUsage increments the current repo size (which
// must already be initialized).
func (m *Manager) IncUsage(by int64) {
	atomic.AddInt64(&m.repoSize, by)
}

// tallySize counts the size of the contents of path.
func tallySize(path string) (int64, error) {
	if path == "" {
		path = "."
	}
	var size int64
	err := filepath.Walk(path, func(path string, info os.FileInfo, err error) error {

I don't mean this to be critical in any way. I haven't written a single line of Go, but I am drooling for this PR. I was looking at this code and saw this in the Walk documentation, and wasn't sure if WalkDir might be a better route. I know this is nitpicky stuff.

Via: https://golang.org/pkg/path/filepath/#Walk "Walk is less efficient than WalkDir, introduced in Go 1.16, which avoids calling os.Lstat on every visited file or directory. "

wojas

comment created time in 2 days

started wojas/docker-mac-network

started time in 3 days

Pull request review comment go-logr/logr

WIP: logr performance ideas

 func (l fnlogger) caller() callerID {
 	// +1 for this frame, +1 for logr itself.
 	// FIXME: Maybe logr should offer a clue as to how many frames are
 	// needed here?  Or is it part of the contract to LogSinks?
-	_, file, line, ok := runtime.Caller(framesToCaller() + 2)
+	_, file, line, ok := runtime.Caller(framesToCaller() + l.depth + 2)

resolving this, discussing below

thockin

comment created time in 14 days

Pull request review comment go-logr/logr

WIP: logr performance ideas

 import (
 	"context"
 )

-// TODO: consider adding back in format strings if they're really needed
-// TODO: consider other bits of zap/zapcore functionality like ObjectMarshaller (for arbitrary objects)
-// TODO: consider other bits of glog functionality like Flush, OutputStats
-
-// Logger represents the ability to log messages, both errors and not.
-type Logger interface {
-	// Enabled tests whether this Logger is enabled.  For example, commandline
+// LogSink represents the ability to log messages, both errors and not.
+type LogSink interface {
+	// Enabled tests whether this LogSink is enabled.  For example, commandline
 	// flags might be used to set the logging verbosity and disable some info
 	// logs.
-	Enabled() bool
+	Enabled(level int) bool

Perhaps this LogContext can also contain the number of stack frames added by the log stack. This would allow the creation of middleware-like log sinks that delegate to other sinks, while keeping the call stack count correct.

This is what WithCallDepth() is for. I agree that assuming a literal 1 in the LogSink is kind of a mess, though. Passing it to every call to Info() feels a bit silly - it's not going to change, like the levels. It almost feels like logr.CallFrameCount() (simple, ugly) or something like:

logr.New(sink) calls sink.Init(numCallFrames), (probably a struct) which can choose to store that info if it needs.

thockin

comment created time in 14 days

Pull request review comment go-logr/logr

WIP: logr performance ideas

 func FromContext(ctx context.Context) Logger {
 		return v
 	}

-	return nil
+	//FIXME: what to do here?  Could switch to pointers, but yuck?

agree

thockin

comment created time in 14 days

Pull request review comment go-logr/logr

WIP: logr performance ideas

 type Logger interface {
 	// triggered this log line, if present.
 	Error(err error, msg string, keysAndValues ...interface{})

-	// V returns an Logger value for a specific verbosity level, relative to
-	// this Logger.  In other words, V values are additive.  V higher verbosity
-	// level means a log message is less important.  It's illegal to pass a log
-	// level less than zero.
-	V(level int) Logger
-
 	// WithValues adds some key-value pairs of context to a logger.
 	// See Info for documentation on how key/value pairs work.
-	WithValues(keysAndValues ...interface{}) Logger
+	WithValues(keysAndValues ...interface{}) LogSink

 	// WithName adds a new element to the logger's name.
 	// Successive calls with WithName continue to append
 	// suffixes to the logger's name.  It's strongly recommended
 	// that name segments contain only letters, digits, and hyphens
 	// (see the package documentation for more information).
-	WithName(name string) Logger
+	WithName(name string) LogSink
+}
+
+func New(level int, sink LogSink) Logger {

Good point. Early drafts needed this, but not any more.

thockin

comment created time in 14 days

pull request comment go-logr/logr

WIP: logr performance ideas

On compat: There are some things that are just awkward after this, like DiscardLogger (the type) no longer making sense (it's a LogSink). We could jump thru hoops to make things work (make DiscardLogger a struct that embeds a Logger) or we could just make this the one breaking change. MOST clients will be fine with a recompile, and a small number will hit edge cases.

If we do that, we can EOL InfoLogger and a few other things.

thockin

comment created time in 14 days

issue comment go-logr/logr

Consider run time performance

Forcing noinline in the benchmark makes it more representative.

Before:

$ GO111MODULE=off go test -bench=. ./benchmark/
goos: linux
goarch: amd64
pkg: github.com/go-logr/logr/benchmark
cpu: Intel(R) Xeon(R) W-2135 CPU @ 3.70GHz
BenchmarkDiscardInfoOneArg-6        	25409384	        54.15 ns/op
BenchmarkDiscardInfoSeveralArgs-6   	 8511532	       146.8 ns/op
BenchmarkDiscardV0Info-6            	 8260953	       152.7 ns/op
BenchmarkDiscardV9Info-6            	 8753974	       147.9 ns/op
BenchmarkDiscardError-6             	 8291468	       146.7 ns/op
BenchmarkDiscardWithValues-6        	18875984	        67.67 ns/op
BenchmarkDiscardWithName-6          	528824234	         2.077 ns/op
BenchmarkFuncrInfoOneArg-6          	 2606715	       476.2 ns/op
BenchmarkFuncrInfoSeveralArgs-6     	  823752	      1302 ns/op
BenchmarkFuncrV0Info-6              	  915470	      1422 ns/op
BenchmarkFuncrV9Info-6              	 4300730	       277.6 ns/op
BenchmarkFuncrError-6               	  837588	      1369 ns/op
BenchmarkFuncrWithValues-6          	 4193628	       294.4 ns/op
BenchmarkFuncrWithName-6            	10532635	       106.4 ns/op

After:

$ GO111MODULE=off go test -bench=. ./benchmark/
goos: linux
goarch: amd64
pkg: github.com/go-logr/logr/benchmark
cpu: Intel(R) Xeon(R) W-2135 CPU @ 3.70GHz
BenchmarkDiscardInfoOneArg-6        	23285353	        52.58 ns/op
BenchmarkDiscardInfoSeveralArgs-6   	 8481985	       146.9 ns/op
BenchmarkDiscardV0Info-6            	 7851495	       144.8 ns/op
BenchmarkDiscardV9Info-6            	 8797778	       144.8 ns/op
BenchmarkDiscardError-6             	 7889066	       148.1 ns/op
BenchmarkDiscardWithValues-6        	19125424	        68.10 ns/op
BenchmarkDiscardWithName-6          	500619270	         2.165 ns/op
BenchmarkFuncrInfoOneArg-6          	 2841159	       490.9 ns/op
BenchmarkFuncrInfoSeveralArgs-6     	  887854	      1266 ns/op
BenchmarkFuncrV0Info-6              	  885891	      1221 ns/op
BenchmarkFuncrV9Info-6              	 7902538	       145.6 ns/op
BenchmarkFuncrError-6               	  852745	      1401 ns/op
BenchmarkFuncrWithValues-6          	 4401705	       295.1 ns/op
BenchmarkFuncrWithName-6            	11857100	       105.0 ns/op

Notably BenchmarkFuncrV9Info was cut almost in half, and there's room for more optimization.

I will make a new push with this change.

thockin

comment created time in 14 days

issue comment go-logr/logr

Consider run time performance

Hmm, I think a big part of this is the benchmark benefitting from optimizations that probably are not realistic in the wild. I'll look into it more later.

thockin

comment created time in 14 days

issue comment go-logr/logr

Consider run time performance

Switching benchmark to Discard():

Before:

BenchmarkInfoOneArg-6        	309439507	         3.628 ns/op
BenchmarkInfoSeveralArgs-6   	103684642	        11.17 ns/op
BenchmarkV0Info-6            	10411196	       115.8 ns/op
BenchmarkV9Info-6            	 9963486	       118.2 ns/op
BenchmarkError-6             	82650194	        12.14 ns/op
BenchmarkWithValues-6        	220570576	         5.585 ns/op
BenchmarkWithName-6          	723292029	         1.441 ns/op

After:

BenchmarkInfoOneArg-6        	28276839	        46.68 ns/op
BenchmarkInfoSeveralArgs-6   	 9974653	       114.4 ns/op
BenchmarkV0Info-6            	11343361	       118.7 ns/op
BenchmarkV9Info-6            	10160247	       117.6 ns/op
BenchmarkError-6             	10033260	       113.5 ns/op
BenchmarkWithValues-6        	21537408	        59.31 ns/op
BenchmarkWithName-6          	502713001	         2.189 ns/op
thockin

comment created time in 14 days

issue comment go-logr/logr

Consider run time performance

the main goal here is to make log calls as cheap as possible if the actual logging at that level is disabled.

Yes. Other wins are possible, but this is a big one.

I think that's a goal worthy of some breakage before a 1.0.

I agree :)

it would be good to verify that this actually works as desired.

Without #42:

BenchmarkInfoOneArg-6        	 3013077	       400.9 ns/op
BenchmarkInfoSeveralArgs-6   	 1162088	      1039 ns/op
BenchmarkV0Info-6            	  754347	      1418 ns/op
BenchmarkV9Info-6            	 4605950	       267.8 ns/op
BenchmarkError-6             	  935028	      1135 ns/op
BenchmarkWithValues-6        	 5127631	       232.5 ns/op
BenchmarkWithName-6          	12850569	       106.5 ns/op

With #42:

BenchmarkInfoOneArg-6        	 2465788	       475.7 ns/op
BenchmarkInfoSeveralArgs-6   	  893026	      1226 ns/op
BenchmarkV0Info-6            	  817473	      1250 ns/op
BenchmarkV9Info-6            	 8595180	       155.1 ns/op
BenchmarkError-6             	 1000000	      1371 ns/op
BenchmarkWithValues-6        	 3902214	       292.0 ns/op
BenchmarkWithName-6          	11271037	       106.6 ns/op

So, weirdly, everything got slower except calls through V(). I guess that has to do with the extra variadic-slice pack/unpack.

Changing Info/Error to pass the slice (instead of the slice...) to the LogSink doesn't make perf better.

I'll have to make time to disassemble it all and see what I can find.

thockin

comment created time in 14 days

push event go-logr/logr

Tim Hockin

commit sha 6439f1d1db6e7ecc693c61d0c14b2da3e294bb65

Add logr/funcr as a simple implementation

This is useful especially for benchmarking. It's a real-enough implementation but doesn't get dominated by IO.

view details

Tim Hockin

commit sha b6cce3fd1d6123f3605f0333ddf6c05c4644b187

funcr: save 1 allocation in Error()

view details

Tim Hockin

commit sha e050e1f780bcccb54288b944d105af3d976da461

funcr: manual encoding, save a few 100ns per log

view details

Tim Hockin

commit sha 2aace7df7c78d2cf269f9844e7b4d17e9b37f636

funcr: Cache framesToCaller, save a few 100 ns

view details

Tim Hockin

commit sha 79b280b8d4b9bcf9b2d6d2a6e82e60fb20a15b29

funcr: Make caller-logging optional (800+ns)

view details

Tim Hockin

commit sha ae301505e30a45b5648c07202c75289d4c08420d

funcr: Optimize flatten, shave 50-80 ns

view details

Tim Hockin

commit sha 3d91d2e5e544a07d2d37e8f7f11e99feba44c2c1

Implement funcr verbosity

view details

Tim Hockin

commit sha 56122408d93eeb48333c13845024eb931cd95c52

Merge pull request #43 from thockin/funcr

Update benchmark and add a no-op "funcr" implementation.

view details

push time in 15 days

PR merged go-logr/logr

Update benchmark and add a no-op "funcr" implementation.
+438 -7

0 comments

3 changed files

thockin

pr closed time in 15 days

PR opened go-logr/logr

Update benchmark and add a no-op "funcr" implementation.
+438 -7

0 comments

3 changed files

pr created time in 15 days

pull request comment go-logr/logr

logr Perf ideas

@wojas @DirectXMan12

It's been a crazy few months. If you (or others) have time to look at this (specifically the last commit), it would be super helpful.

thockin

comment created time in 15 days

PR opened PowerDNS/lmdb-go

Fix release numbers in the readme
+1 -1

0 comments

1 changed file

pr created time in 16 days

started wojas/docker-mac-network

started time in 19 days

started wojas/genericr

started time in 20 days

issue closed machine-drivers/docker-machine-driver-vmware

Homebrew

Is this available on Homebrew?

closed time in 20 days

allanchau

issue comment machine-drivers/docker-machine-driver-vmware

Homebrew

it's on brew! https://formulae.brew.sh/formula/docker-machine-driver-vmware

allanchau

comment created time in 20 days

issue closed machine-drivers/docker-machine-driver-vmware

Compatibility with future Go versions

Just a heads up that Go 1.17 (and the just-released 1.16 without first changing GO111MODULE) won't support builds that aren't in module-aware mode.

I'm not sure what the status of this project is (along with the other docker-machine-driver-* projects). Are there any plans to migrate to Go modules?

https://blog.golang.org/go116-module-changes

closed time in 20 days

Bo98

issue closed machine-drivers/docker-machine-driver-vmware

No success using Bridged network with Workstation 15 Pro on Windows

I'm trying to use docker-machine (v0.16.1) with v0.1.0 of this driver on Windows 10 - 1903 with VMware Workstation 15 Pro. Basic machine creation and usage works as expected, BUT I would really, really like to use bridged networking and not NAT. If I manually change the configuration of a previously created machine to use a bridged network, then docker-machine never finds its IP address. DHCP is used on the bridged network and vmrun getGuestIPAddress returns the assigned address, so why can't this driver pick it up?

closed time in 20 days

kmpm

issue comment machine-drivers/docker-machine-driver-vmware

No success using Bridged network with Workstation 15 Pro on Windows

#34 is merged; it should work now when building from source. We'll cut a new release soon.

kmpm

comment created time in 20 days

issue closed machine-drivers/docker-machine-driver-vmware

A reworked version

https://github.com/AZ-X/docker-machine-driver-vmware Thanks for your attention, in case anyone visits.

closed time in 20 days

AZ-X

issue comment machine-drivers/docker-machine-driver-vmware

A reworked version

thanks for sharing :)

AZ-X

comment created time in 20 days

issue comment machine-drivers/docker-machine-driver-vmware

Compatibility with future Go versions

I've merged in the PR (#32) for supporting modules, so this should be addressed now... I haven't cut another "release" yet as I still want to pull in the other PR that's sitting there (the Workstation team made that change in the one we ship inside Workstation itself), and I want to move a few other things around to tidy up the repo first, but I should have that done mid-next week.

Bo98

comment created time in 20 days

pull request comment machine-drivers/docker-machine-driver-vmware

Add network connection type option

Thanks! I'll keep an eye out for whenever you decide to and figure out how to create a release 🙂

zarenner

comment created time in 20 days

push event machine-drivers/docker-machine-driver-vmware

Zach Renner

commit sha 53da0c911e9c2317e71d226bd1894b300b0ef39c

Add network connection type option

view details

Michael Roy

commit sha 50ddcf034be5b16feb1308f7e23f996eaebaaf8c

Merge pull request #34 from zarenner/zarenner/add-connection-type

Add network connection type option

view details

push time in 20 days

PR merged machine-drivers/docker-machine-driver-vmware

Add network connection type option

#31 (#30) made it possible to start on Big Sur, but for users running DNS proxies the default 'nat' connection type may no longer work with Big Sur's internet sharing, due to https://communities.vmware.com/t5/VMware-Fusion-Discussions/DNS-Forwarder-Does-Not-Seem-to-Exist-in-VMware-Fusion-12-on-Big/td-p/2808576

Well, here's the problem! Before Big Sur, VMware Fusion did its own DNS handling, so it was unaffected by a service already bound to port 53 on the host. But starting with Big Sur, VMware leverages OS-level DNS dispatching. The problem here is that the OS-level mDNSResponder handling of DNS requests appears to want to bind to every address. And if something has already bound port 53, even just for the loopback address, this greedy attempt to bind port 53 everywhere results in port 53 being bound nowhere. So, as described above, any NAT-enabled VM in such a situation will have nothing to answer its DNS requests.

If it's possible, a VMware-level fix would be to cause mDNSResponder to bind port 53 only on the network interfaces that VMware has configured / is using. That way, VMware Fusion should be guaranteed to not conflict with any network configuration that the machine operator has configured.

Since NAT-based networking is therefore currently unusable in some cases, this PR makes it possible to use bridged networking instead until VMware fixes the issue.

+23 -13

5 comments

3 changed files

zarenner

pr closed time in 20 days

pull request comment machine-drivers/docker-machine-driver-vmware

Use Go modules

Thanks for this!

middagj

comment created time in 20 days