
push event suwang48404/libnetwork

Su Wang

commit sha 54e7900fb89b1aeeb188d935f29cf05514fd419b

DOCKER-USER chain not created when IPTableEnable=false. This fix addresses https://docker.atlassian.net/browse/ENGCORE-1115 Expected behaviors upon docker engine restarts:
  1. IPTableEnable=true, DOCKER-USER chain present -- no change to the DOCKER-USER chain.
  2. IPTableEnable=true, DOCKER-USER chain not present -- DOCKER-USER chain created and inserted at the top of the FORWARD chain.
  3. IPTableEnable=false, DOCKER-USER chain present -- no change to the DOCKER-USER chain. The rationale is that DOCKER-USER is populated and may be used by end users for purposes other than filtering docker container traffic. Thus even if IPTableEnable=false, the docker engine does not touch a pre-existing DOCKER-USER chain.
  4. IPTableEnable=false, DOCKER-USER chain not present -- DOCKER-USER chain is not created.
Signed-off-by: Su Wang <su.wang@docker.com>
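The expected-behavior matrix in the commit message can be sketched as a small decision function. This is an illustrative sketch, not the actual libnetwork code; the function name and return strings are made up for demonstration:

```go
package main

import "fmt"

// userChainAction models the four expected behaviors from the commit
// message: the chain is only ever created when iptables management is
// enabled, and a pre-existing chain is never touched either way.
func userChainAction(ipTablesEnabled, chainExists bool) string {
	switch {
	case ipTablesEnabled && !chainExists:
		return "create DOCKER-USER and insert jump at top of FORWARD"
	case !ipTablesEnabled && !chainExists:
		return "do not create DOCKER-USER"
	default:
		// chain already exists: leave it alone, since users may
		// populate it for purposes other than container filtering
		return "leave DOCKER-USER unchanged"
	}
}

func main() {
	for _, enabled := range []bool{true, false} {
		for _, exists := range []bool{true, false} {
			fmt.Printf("enabled=%v exists=%v -> %s\n",
				enabled, exists, userChainAction(enabled, exists))
		}
	}
}
```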

view details

push time in 13 hours

pull request comment docker/libnetwork

DOCKER-USER chain not created when IPTableEnable=false.

@euanh @arkodg can you help review?

suwang48404

comment created time in 14 hours

pull request comment docker/libnetwork

DOCKER-USER chain not created when IPTableEnable=false.

@suwang48404, changes look good to me. If I remember correctly, there were a few places I mentioned in #2339 that this code-path fix didn't cover. Can you please take a look and let me know if that is OK? Copying from the #2339 PR comment:

fwMarker()
redirector()
programIngress()

With this PR, I don't think we are covering these code paths. We might end up leaving the daemon iptables configuration in a faulty state.

@selansen The fix is a pointed fix for the DOCKER-USER chain only. iptables usage is pervasive in libnetwork; we do not have a good understanding and/or test metrics to confidently disable all iptables programming when iptables=false. IMHO, the risk is too high for such a general fix.

suwang48404

comment created time in 14 hours

push event suwang48404/libnetwork

Su Wang

commit sha f1376351d039e8d26e43796e379de83ff7fe4508

DOCKER-USER chain not created when IPTableEnable=false. This fix addresses https://docker.atlassian.net/browse/ENGCORE-1115 Expected behaviors upon docker engine restarts:
  1. IPTableEnable=true, DOCKER-USER chain present -- no change to the DOCKER-USER chain.
  2. IPTableEnable=true, DOCKER-USER chain not present -- DOCKER-USER chain created and inserted at the top of the FORWARD chain.
  3. IPTableEnable=false, DOCKER-USER chain present -- no change to the DOCKER-USER chain. The rationale is that DOCKER-USER is populated and may be used by end users for purposes other than filtering docker container traffic. Thus even if IPTableEnable=false, the docker engine does not touch a pre-existing DOCKER-USER chain.
  4. IPTableEnable=false, DOCKER-USER chain not present -- DOCKER-USER chain is not created.
Signed-off-by: Su Wang <su.wang@docker.com>

view details

push time in 4 days

Pull request review comment docker/libnetwork

DOCKER-USER chain not created when IPTableEnable=false.

 package libnetwork

 import (
 	"github.com/docker/libnetwork/iptables"
 	"github.com/sirupsen/logrus"
+	"sync"
 )

 const userChain = "DOCKER-USER"

-func (c *controller) arrangeUserFilterRule() {
-	c.Lock()
-	arrangeUserFilterRule()
-	c.Unlock()
-	iptables.OnReloaded(func() {
+var (
+	ctrl          *controller = nil
+	userChainOnce sync.Once

Sorry, I misunderstood you. I have moved the invocation of setupArrangeUserFilterRule to controller.go:New() and done away with sync.Once. Thx, Su

suwang48404

comment created time in 4 days

Pull request review comment docker/libnetwork

DOCKER-USER chain not created when IPTableEnable=false.

 package libnetwork

 import (
 	"github.com/docker/libnetwork/iptables"
 	"github.com/sirupsen/logrus"
+	"sync"
 )

 const userChain = "DOCKER-USER"

-func (c *controller) arrangeUserFilterRule() {
-	c.Lock()
-	arrangeUserFilterRule()
-	c.Unlock()
-	iptables.OnReloaded(func() {
+var (
+	ctrl          *controller = nil
+	userChainOnce sync.Once

In other words, iptables enabled/disabled is a global option; it is evaluated based on the singleton controller configuration. Hope this makes sense.

suwang48404

comment created time in 5 days

Pull request review comment docker/libnetwork

DOCKER-USER chain not created when IPTableEnable=false.

 package libnetwork

 import (
 	"github.com/docker/libnetwork/iptables"
 	"github.com/sirupsen/logrus"
+	"sync"
 )

 const userChain = "DOCKER-USER"

-func (c *controller) arrangeUserFilterRule() {
-	c.Lock()
-	arrangeUserFilterRule()
-	c.Unlock()
-	iptables.OnReloaded(func() {
+var (
+	ctrl          *controller = nil
+	userChainOnce sync.Once

This fix assumes there shall be only one instance of controller per docker engine (instantiated in moby/moby/daemon/daemon_unix.go). I'd think (please correct me) many things would be broken if multiple libnetwork controllers were created within the same engine.

suwang48404

comment created time in 5 days

push event suwang48404/libnetwork

Su Wang

commit sha 3806be4785da55bb7ac399efc407fbb3bf661f81

DOCKER-USER chain not created when IPTableEnable=false. This fix addresses https://docker.atlassian.net/browse/ENGCORE-1115 Expected behaviors upon docker engine restarts:
  1. IPTableEnable=true, DOCKER-USER chain present -- no change to the DOCKER-USER chain.
  2. IPTableEnable=true, DOCKER-USER chain not present -- DOCKER-USER chain created and inserted at the top of the FORWARD chain.
  3. IPTableEnable=false, DOCKER-USER chain present -- no change to the DOCKER-USER chain. The rationale is that DOCKER-USER is populated and may be used by end users for purposes other than filtering docker container traffic. Thus even if IPTableEnable=false, the docker engine does not touch a pre-existing DOCKER-USER chain.
  4. IPTableEnable=false, DOCKER-USER chain not present -- DOCKER-USER chain is not created.
Signed-off-by: Su Wang <su.wang@docker.com>

view details

push time in 5 days

pull request comment docker/libnetwork

DOCKER-USER chain not created when IPTableEnable=false.

@thaJeztah I have taken your firewall_test.go; it is much cleaner, thanks a lot. Beyond that, let me know if there is anything else I should address.

thx, Su

suwang48404

comment created time in 5 days

Pull request review comment docker/libnetwork

DOCKER-USER chain not created when IPTableEnable=false.

+package libnetwork
+
+import (
+	"fmt"
+	"github.com/docker/libnetwork/iptables"
+	"github.com/docker/libnetwork/netlabel"
+	"github.com/docker/libnetwork/options"
+	"strings"
+	"testing"
+)
+
+const (
+	fwdChainName = "FORWARD"
+	usrChainName = "DOCKER-USER"
+)
+
+func verifyUserChain(enabled, insert bool) error {
+	output, err := iptables.Raw("-S", fwdChainName)
+	if err != nil {
+		return err
+	}
+	rules := strings.Split(string(output), "\n")
+	if !enabled {
+		if len(rules)-1 != 1 {

Taking your firewall_test.go instead.

suwang48404

comment created time in 5 days

Pull request review comment docker/libnetwork

DOCKER-USER chain not created when IPTableEnable=false.

+package libnetwork
+
+import (
+	"fmt"
+	"github.com/docker/libnetwork/iptables"
+	"github.com/docker/libnetwork/netlabel"
+	"github.com/docker/libnetwork/options"
+	"strings"
+	"testing"
+)
+
+const (
+	fwdChainName = "FORWARD"
+	usrChainName = "DOCKER-USER"
+)
+
+func verifyUserChain(enabled, insert bool) error {
+	output, err := iptables.Raw("-S", fwdChainName)
+	if err != nil {
+		return err
+	}
+	rules := strings.Split(string(output), "\n")
+	if !enabled {
+		if len(rules)-1 != 1 {
+			return fmt.Errorf("chain %v: unexpected rules len=%v, %v, ",
+				fwdChainName, len(rules), rules)
+		}
+		if _, err = iptables.Raw("-S", usrChainName); err == nil {
+			return fmt.Errorf("chain %v: created unexpectedly", usrChainName)
+		}
+		return nil
+	}
+	nRules := 2
+	if insert {
+		nRules++
+	}
+	if nRules != len(rules)-1 || !strings.Contains(rules[1], usrChainName) {
+		return fmt.Errorf("chain %v: unexpected rules len=%v, %v",
+			fwdChainName, len(rules), rules)
+	}
+
+	output, err = iptables.Raw("-S", usrChainName)
+	if err != nil {
+		return err
+	}
+	rules = strings.Split(string(output), "\n")
+	if len(rules)-1 != 2 {
+		return fmt.Errorf("chain %v: unexpected rules len=%v, %v",
+			usrChainName, len(rules), rules)
+	}
+	return nil
+}
+
+func testUserChain(t *testing.T, ctrl *controller, enabled, insert bool) {
+	defer func() {
+		_, err := iptables.Raw("-F", fwdChainName)
+		if err != nil {
+			t.Fatal(err)

Thx for this good pointer. Taking your firewall_test.go instead.

suwang48404

comment created time in 5 days

Pull request review comment docker/libnetwork

DOCKER-USER chain not created when IPTableEnable=false.

+package libnetwork
+
+import (
+	"fmt"
+	"github.com/docker/libnetwork/iptables"
+	"github.com/docker/libnetwork/netlabel"
+	"github.com/docker/libnetwork/options"
+	"strings"
+	"testing"

Taking your firewall_test.go instead.

suwang48404

comment created time in 5 days

Pull request review comment docker/libnetwork

DOCKER-USER chain not created when IPTableEnable=false.

 package libnetwork

 import (
 	"fmt"
+	"github.com/docker/libnetwork/options"

done.

suwang48404

comment created time in 5 days

push event suwang48404/libnetwork

Sebastiaan van Stijn

commit sha f741dc9c305fea900b96b8a838f959395799cf78

Update Golang 1.12.12 (CVE-2019-17596) Golang 1.12.12 ------------------------------- full diff: https://github.com/golang/go/compare/go1.12.11...go1.12.12 go1.12.12 (released 2019/10/17) includes fixes to the go command, runtime, syscall and net packages. See the Go 1.12.12 milestone on our issue tracker for details. https://github.com/golang/go/issues?q=milestone%3AGo1.12.12 Golang 1.12.11 (CVE-2019-17596) ------------------------------- full diff: https://github.com/golang/go/compare/go1.12.10...go1.12.11 go1.12.11 (released 2019/10/17) includes security fixes to the crypto/dsa package. See the Go 1.12.11 milestone on our issue tracker for details. https://github.com/golang/go/issues?q=milestone%3AGo1.12.11 [security] Go 1.13.2 and Go 1.12.11 are released Hi gophers, We have just released Go 1.13.2 and Go 1.12.11 to address a recently reported security issue. We recommend that all affected users update to one of these releases (if you're not sure which, choose Go 1.13.2). Invalid DSA public keys can cause a panic in dsa.Verify. In particular, using crypto/x509.Verify on a crafted X.509 certificate chain can lead to a panic, even if the certificates don't chain to a trusted root. The chain can be delivered via a crypto/tls connection to a client, or to a server that accepts and verifies client certificates. net/http clients can be made to crash by an HTTPS server, while net/http servers that accept client certificates will recover the panic and are unaffected. Moreover, an application might crash invoking crypto/x509.(*CertificateRequest).CheckSignature on an X.509 certificate request, parsing a golang.org/x/crypto/openpgp Entity, or during a golang.org/x/crypto/otr conversation. Finally, a golang.org/x/crypto/ssh client can panic due to a malformed host key, while a server could panic if either PublicKeyCallback accepts a malformed public key, or if IsUserAuthority accepts a certificate with a malformed public key. 
The issue is CVE-2019-17596 and Go issue golang.org/issue/34960. Thanks to Daniel Mandragona for discovering and reporting this issue. We'd also like to thank regilero for a previous disclosure of CVE-2019-16276. Signed-off-by: Sebastiaan van Stijn <github@gone.nl>

view details

elangovan sivanandam

commit sha 571783238bee54062ebc781c321f7833a23e38f7

Merge pull request #2472 from thaJeztah/bump_golang_1.12.12 Update Golang 1.12.12 (CVE-2019-17596)

view details

Su Wang

commit sha f69597cc997407e2c4c5d52e5d5a5aef74bffd43

DOCKER-USER chain not created when IPTableEnable=false. This fix addresses https://docker.atlassian.net/browse/ENGCORE-1115 Expected behaviors upon docker engine restarts:
  1. IPTableEnable=true, DOCKER-USER chain present -- no change to the DOCKER-USER chain.
  2. IPTableEnable=true, DOCKER-USER chain not present -- DOCKER-USER chain created and inserted at the top of the FORWARD chain.
  3. IPTableEnable=false, DOCKER-USER chain present -- no change to the DOCKER-USER chain. The rationale is that DOCKER-USER is populated and may be used by end users for purposes other than filtering docker container traffic. Thus even if IPTableEnable=false, the docker engine does not touch a pre-existing DOCKER-USER chain.
  4. IPTableEnable=false, DOCKER-USER chain not present -- DOCKER-USER chain is not created.
Signed-off-by: Su Wang <su.wang@docker.com>

view details

push time in 5 days

Pull request review comment docker/libnetwork

DOCKER-USER chain not created when IPTableEnable=false.

 package libnetwork

 import (
 	"github.com/docker/libnetwork/iptables"
 	"github.com/sirupsen/logrus"
+	"sync"
 )

 const userChain = "DOCKER-USER"

-func (c *controller) arrangeUserFilterRule() {
-	c.Lock()
-	arrangeUserFilterRule()
-	c.Unlock()
-	iptables.OnReloaded(func() {
+var (
+	ctrl          *controller = nil
+	userChainOnce sync.Once

Yes, calling it once is the means to the goal of not setting up the DOCKER-USER chain multiple times.
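A minimal sketch of how sync.Once provides that call-once guarantee even under concurrent callers; names are illustrative, and a counter stands in for the real iptables programming:

```go
package main

import (
	"fmt"
	"sync"
)

var (
	userChainOnce sync.Once
	setupCount    int // stands in for the actual DOCKER-USER chain setup
)

// arrangeUserFilterRule sketches the call-once pattern: no matter how many
// goroutines race to invoke it, the setup body runs exactly once.
func arrangeUserFilterRule() {
	userChainOnce.Do(func() {
		setupCount++ // the real code would program iptables here
	})
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			arrangeUserFilterRule()
		}()
	}
	wg.Wait()
	fmt.Println("setup ran", setupCount, "time(s)")
}
```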

suwang48404

comment created time in 5 days

Pull request review comment docker/libnetwork

DOCKER-USER chain not created when IPTableEnable=false.

+package libnetwork
+
+import (
+	"fmt"
+	"github.com/docker/libnetwork/iptables"
+	"github.com/docker/libnetwork/netlabel"
+	"github.com/docker/libnetwork/options"
+	"strings"
+	"testing"
+)
+
+const (
+	fwdChainName = "FORWARD"
+	usrChainName = "DOCKER-USER"
+)
+
+func verifyUserChain(enabled, insert bool) error {
+	output, err := iptables.Raw("-S", fwdChainName)
+	if err != nil {
+		return err
+	}
+	rules := strings.Split(string(output), "\n")
+	if !enabled {
+		if len(rules)-1 != 1 {
+			return fmt.Errorf("chain %v: unexpected rules len=%v, %v, ",
+				fwdChainName, len(rules), rules)
+		}
+		if _, err = iptables.Raw("-S", usrChainName); err == nil {
+			return fmt.Errorf("chain %v: created unexpectedly", usrChainName)
+		}
+		return nil
+	}
+	nRules := 2
+	if insert {
+		nRules++
+	}
+	if nRules != len(rules)-1 || !strings.Contains(rules[1], usrChainName) {
+		return fmt.Errorf("chain %v: unexpected rules len=%v, %v",
+			fwdChainName, len(rules), rules)
+	}
+
+	output, err = iptables.Raw("-S", usrChainName)
+	if err != nil {
+		return err
+	}
+	rules = strings.Split(string(output), "\n")
+	if len(rules)-1 != 2 {
+		return fmt.Errorf("chain %v: unexpected rules len=%v, %v",
+			usrChainName, len(rules), rules)
+	}
+	return nil
+}
+
+func testUserChain(t *testing.T, ctrl *controller, enabled, insert bool) {
+	defer func() {
+		_, err := iptables.Raw("-F", fwdChainName)
+		if err != nil {
+			t.Fatal(err)
+		}
+		_ = iptables.RemoveExistingChain(usrChainName, "")
+	}()
+	// init. condition, FORWARD chain empty
+	// DOCKER-USER not exist
+	err := verifyUserChain(false, insert)
+	if err != nil {
+		t.Fatal(err)
+	}
+	if insert {
+		_, err = iptables.Raw("-A", fwdChainName, "-j", "DROP")

The reason to have insert is that in case there is already a rule in the FORWARD chain, the DOCKER-USER jump rule would be placed ahead of that rule when iptables=true; if iptables=false, it is moot because we do not add the DOCKER-USER jump rule to the FORWARD chain.

suwang48404

comment created time in 5 days

Pull request review comment docker/libnetwork

DOCKER-USER chain not created when IPTableEnable=false.

 func (c *controller) IsDiagnosticEnabled() bool {
 	defer c.Unlock()
 	return c.DiagnosticServer.IsDiagnosticEnabled()
 }
+
+func (c *controller) iptableEnabled() bool {
+	c.Lock()
+	defer c.Unlock()
+
+	if c.cfg == nil {
+		return false
+	}
+	// parse map cfg["bridge"]["generic"]["EnableIPTable"]
+	cfgBridge, ok := c.cfg.Daemon.DriverCfg["bridge"].(map[string]interface{})
  1. Just want to be safe, in case iptableEnabled() is called before makeDriverConfig().
  2. Ideally it could have been a global option; alas, the "EnableIPTable" bool is stuffed into the bridge configuration.
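The defensive nested-map parsing described above can be sketched like this. This is an illustrative sketch, not the actual controller code: the map keys here are simplified stand-ins (the real config indexes the generic options via netlabel.GenericData), and the point is that every step uses a checked type assertion so a missing or not-yet-populated config reads as "disabled" instead of panicking:

```go
package main

import "fmt"

// iptablesEnabled walks cfg["bridge"]["generic"]["EnableIPTables"] with
// checked ("comma ok") type assertions. Any missing key or wrong type
// falls through to false rather than panicking.
func iptablesEnabled(driverCfg map[string]interface{}) bool {
	cfgBridge, ok := driverCfg["bridge"].(map[string]interface{})
	if !ok {
		return false
	}
	cfgGeneric, ok := cfgBridge["generic"].(map[string]interface{})
	if !ok {
		return false
	}
	enabled, ok := cfgGeneric["EnableIPTables"].(bool)
	if !ok {
		return false
	}
	return enabled
}

func main() {
	cfg := map[string]interface{}{
		"bridge": map[string]interface{}{
			"generic": map[string]interface{}{"EnableIPTables": true},
		},
	}
	fmt.Println(iptablesEnabled(cfg))                      // true
	fmt.Println(iptablesEnabled(map[string]interface{}{})) // false: missing keys read as disabled
}
```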
suwang48404

comment created time in 5 days

Pull request review comment docker/libnetwork

DOCKER-USER chain not created when IPTableEnable=false.

+package libnetwork
+
+import (
+	"fmt"
+	"github.com/docker/libnetwork/iptables"
+	"github.com/docker/libnetwork/netlabel"
+	"github.com/docker/libnetwork/options"
+	"strings"
+	"testing"
+)
+
+const (
+	fwdChainName = "FORWARD"
+	usrChainName = "DOCKER-USER"
+)
+
+func verifyUserChain(enabled, insert bool) error {
+	output, err := iptables.Raw("-S", fwdChainName)
+	if err != nil {
+		return err
+	}
+	rules := strings.Split(string(output), "\n")
+	if !enabled {
+		if len(rules)-1 != 1 {
+			return fmt.Errorf("chain %v: unexpected rules len=%v, %v, ",
+				fwdChainName, len(rules), rules)
+		}
+		if _, err = iptables.Raw("-S", usrChainName); err == nil {
+			return fmt.Errorf("chain %v: created unexpectedly", usrChainName)
+		}
+		return nil
+	}
+	nRules := 2
+	if insert {
+		nRules++
+	}
+	if nRules != len(rules)-1 || !strings.Contains(rules[1], usrChainName) {
+		return fmt.Errorf("chain %v: unexpected rules len=%v, %v",
+			fwdChainName, len(rules), rules)
+	}
+
+	output, err = iptables.Raw("-S", usrChainName)
+	if err != nil {
+		return err
+	}
+	rules = strings.Split(string(output), "\n")
+	if len(rules)-1 != 2 {
+		return fmt.Errorf("chain %v: unexpected rules len=%v, %v",
+			usrChainName, len(rules), rules)
+	}
+	return nil
+}
+
+func testUserChain(t *testing.T, ctrl *controller, enabled, insert bool) {
+	defer func() {
+		_, err := iptables.Raw("-F", fwdChainName)
+		if err != nil {
+			t.Fatal(err)
+		}
+		_ = iptables.RemoveExistingChain(usrChainName, "")
+	}()
+	// init. condition, FORWARD chain empty
+	// DOCKER-USER not exist
+	err := verifyUserChain(false, insert)
+	if err != nil {
+		t.Fatal(err)
+	}
+	if insert {
+		_, err = iptables.Raw("-A", fwdChainName, "-j", "DROP")
+		if err != nil {
+			t.Fatal(err)
+		}
+	}
+	setupArrangeUserFilterRule(ctrl)
+	arrangeUserFilterRule()
+	err = verifyUserChain(enabled, insert)
+	if err != nil {
+		t.Fatal(err)
+	}
+}
+
+func TestUserChain(t *testing.T) {
+	nc, err := New()
+	if err != nil {
+		t.Fatal(err)
+	}
+	c := nc.(*controller)
+	opt := options.Generic{
+		"EnableIPTables": false,
+	}
+	cfgBridge := make(map[string]interface{})
+	cfgBridge[netlabel.GenericData] = opt
+	c.cfg.Daemon.DriverCfg["bridge"] = cfgBridge
+
+	tests := []struct {
+		enable bool // enable IPTable
+		insert bool // insert other rules to FORWARD
+	}{
+		{enable: false, insert: false},
+		{enable: true, insert: false},
+		{enable: true, insert: true},
+	}
+
+	for _, i := range tests {
+		opt["EnableIPTables"] = i.enable
+		t.Logf("Test EnableIPTable=%v, insert=%v", i.enable, i.insert)

this change looks fantastic to me, it is much more compact.

suwang48404

comment created time in 5 days

Pull request review comment docker/libnetwork

Fix IPv6 address pool calculation

 func splitNetwork(size int, base *net.IPNet) []*net.IPNet {

 	for i := 0; i < n; i++ {
 		ip := copyIP(base.IP)
-		addIntToIP(ip, uint(i<<s))
+		addIntToIP(ip, uint(i), s)

Some observations on function splitNetwork: given a network address range "base" and a desired subnet length "size", it returns all disjoint subnets of length "size" within the range "base". For instance, if base=10.10.0.0/16 and size=24, the returned subnets are 10.10.0.0/24, 10.10.1.0/24, ..., 10.10.255.0/24.

The function works fine for IPv4 (32-bit addresses). For IPv6 (128-bit addresses), two things should be watched for:

  1. The number of computed subnets may exceed int32/int64 (see line 112). For instance, with base=fc00::/8 and size=120, there are 1<<(120-8) subnets. We should perhaps reject this kind of request.
  2. The host portion of the subnet is bigger than int32/int64; for instance, with base=fc00::/8 and size=12, line 118 becomes s=128-12=116, and (1<<116) exceeds any integer storage. This is a legitimate bug that can perhaps be addressed with a fix.
SerialVelocity

comment created time in 6 days

Pull request review comment docker/libnetwork

Fix IPv6 address pool calculation

 func splitNetwork(size int, base *net.IPNet) []*net.IPNet {

 	for i := 0; i < n; i++ {
 		ip := copyIP(base.IP)
-		addIntToIP(ip, uint(i<<s))
+		addIntToIP(ip, uint(i), s)

If I understand the problem here, for IPv6 i << s may be larger than (1<<64)-1; I wonder if we should instead perform the shift in a byte array to accommodate that. It would definitely make the change a lot more understandable.

SerialVelocity

comment created time in 7 days

pull request comment docker/libnetwork

DOCKER-USER chain not created when IPTableEnable=false.

@thaJeztah

Hi Sebastiaan, can you please help review, as this will eventually be moved to dockerd? Thx, Su

suwang48404

comment created time in 8 days

pull request comment moby/moby

Added option to change allocation range of published ports

Hi Sebastiaan,

Thx for the review, and for spending time thinking about it.

Replies inline, please.

Some questions;

I think SwarmKit and "regular" containers use different ranges for the ephemeral port range; IIRC, regular containers use the range that's configured on the host (net.ipv4.ip_local_port_range), whereas SwarmKit (currently) has a fixed port range.

With this change;

  • what is the default range? ====> on Linux-based systems, 49153-60999; on Windows, 60000-65000
  • is there something in place to prevent SwarmKit from allocating ports that were already used by "regular" containers (and vice-versa)?

===> Swarm's allocation range is hard-coded to 30000-32767 and is not yet tunable

Given that this affects both the ephemeral port range for docker run, and for swarm services (docker service create / docker stack deploy); when, and by "who", is the port selected?

=====> the docker engine port allocator is used for "docker run -p ..." and "docker service create --publish mode=host"; the swarm port allocator is used for "docker service create --publish ..."

Given a cluster with (e.g.) 3 managers and several workers;

If I create a service with a port published through the ingress network;

  • will the manager I'm connected to pick the port? =====> if the "--publish mode" setting is not host, the swarm manager picks a port from its port allocator, and this port is reserved on all nodes in the cluster
  • will the worker? (likely not) ======> correct. The docker engine will not allocate the port

Same, but when creating a service with host mode publishing;

  • will the manager I'm connected to pick the port? ====> swarm defers to the docker engine to allocate the port
  • will the worker? (likely not) ==========> the docker engine allocates the port for "docker service create --publish mode=host,..."; the allocated port has only local-node scope

Give that this is a configuration on the daemon, we must assume that each daemon can have a different configuration, which means that if the manager picks the port, that the selected port range can be different depending on which manager I'm connected to.

So, should we store this configuration in the swarm raft store instead? (similar to what we do for --default-addr-pool, which is configured for the cluster)

========> each docker engine picks ports independently, as a port allocated by the docker engine has local scope only and does not impact other nodes. That is, the same port can be allocated by docker engines and assigned to the same or different service instances on different nodes. Do we need raft in this use case?

setting and updating this configuration

Per my earlier comment; validation is done in the libnetwork code; I still have to build and test this change, but;

  • at what point is the given value validated? ====> validation takes place when the config is read via daemon.json or a dockerd parameter
  • is the value validated when reloading the configuration (SIGHUP); in other words; do we have guards in place to prevent reloading with an invalid configuration?

=========> yes

(Without having given this much thought, so just thinking out loud); Are there possible complications if I update this port-range? What happens to existing containers if the daemon restarts? I assume containers (and services) preserve the port, but does the daemon / swarm continue to keep track of ports that were allocated in the old range?

=======> The following refers to service instances created with "--publish mode=host" or via "docker run -p": if the port range is changed and the config reloaded, existing containers will continue to use the old allocated port even if it is outside the new range; if dockerd is restarted and the containers restarted, they will be given ports from the new range.

=======> For ports allocated by swarm, i.e. "docker service create --publish ...", I believe the port persists across docker engine restarts.

Thx, Su

suwang48404

comment created time in 13 days

pull request comment docker/libnetwork

Added API to set ephemeral port allocator range.

Let's test the functionality end-to-end (add changes to Moby master, add integration tests in Moby) before we cherry-pick into master?

Arko, you wanted to add an integration test to https://github.com/moby/moby/pull/40055/, right? We can do that.

Wanted to make sure that the request is not associated with this PR.

suwang48404

comment created time in 14 days

pull request comment docker/libnetwork

Added API to set ephemeral port allocator range.

@selansen @arkodg @euanh @chiragtayal @joeabbey

suwang48404

comment created time in 14 days

PR opened docker/libnetwork

Added API to set ephemeral port allocator range.

Also reduces the allowed port range, as the total number of containers per host is typically less than 1K.

This change helps in scenarios where there are other services on the same host that use ephemeral ports in iptables manipulation.

The workflow requires changes in the docker engine (https://github.com/moby/moby/pull/40055) and this change. It works as follows:

  1. Users can now specify to the docker engine an option --published-port-range="50000-60000" as a cmdline argument or in daemon.json.
  2. The docker engine reads and passes this info to libnetwork via config.go:OptionDynamicPortRange.
  3. libnetwork uses this range to allocate dynamic ports henceforth.
  4. --published-port-range can be updated either via SIGHUP or by restarting the docker engine.
  5. If --published-port-range is not set by the user, an OS-specific default range is used for dynamic port allocation. Linux: 49153-60999, Windows: 60000-65000.
  6. If --published-port-range is invalid, that is, the range given is outside of the allowed default range, no change takes place; libnetwork will continue to use the old/existing port range for dynamic port allocation.
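The parse-and-validate behavior described in points 5 and 6 could look roughly like this. This is an illustrative sketch, not the actual engine code; the function name is made up, and the Linux default bounds are taken from the description above:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// Default dynamic port range on Linux, per the PR description.
const defaultStart, defaultEnd = 49153, 60999

// parsePortRange validates a "start-end" flag value such as
// --published-port-range="50000-60000": it must parse, be ordered, and
// sit inside the allowed default range. On error the caller keeps the
// old/existing range, matching point 6 above.
func parsePortRange(s string) (start, end int, err error) {
	parts := strings.SplitN(s, "-", 2)
	if len(parts) != 2 {
		return 0, 0, fmt.Errorf("invalid range %q: want start-end", s)
	}
	start, err = strconv.Atoi(parts[0])
	if err != nil {
		return 0, 0, err
	}
	end, err = strconv.Atoi(parts[1])
	if err != nil {
		return 0, 0, err
	}
	if start > end || start < defaultStart || end > defaultEnd {
		return 0, 0, fmt.Errorf("range %q outside allowed %d-%d",
			s, defaultStart, defaultEnd)
	}
	return start, end, nil
}

func main() {
	if s, e, err := parsePortRange("50000-60000"); err == nil {
		fmt.Println("using range", s, "-", e)
	}
	if _, _, err := parsePortRange("1000-2000"); err != nil {
		fmt.Println("rejected:", err)
	}
}
```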

Signed-off-by: Su Wang su.wang@docker.com

+134 -14

0 comment

6 changed files

pr created time in 14 days

create branch suwang48404/libnetwork

branch : bump_19.03

created branch time in 14 days

push event suwang48404/moby

Su Wang

commit sha c073bf49226dce24a3604215eb5073160ffa2d00

Added option to change port allocation range for ephemeral service ports. Signed-off-by: Su Wang <su.wang@docker.com>

view details

push time in 15 days

push event suwang48404/moby

Su Wang

commit sha 97deab68e3782fb388bf07ed55ed92941eae7f27

Added option to change port allocation range for ephemeral service ports. Signed-off-by: Su Wang <su.wang@docker.com>

view details

push time in 18 days

push event suwang48404/moby

Daniel Black

commit sha 7b4b940470ee34c96bf434b810e4cd5ca2e68182

/containers/{id}/json missing Platform To match ContainerJSONBase api/types/types.go Signed-off-by: Daniel Black <daniel@linux.ibm.com>

view details

Devon Estes

commit sha cb2a36a89c1fb73b5b9ea3e9df8977f2b3139ad1

Add ability to handle index acknowledgment with splunk log driver Previously there was no way for the splunk log driver to work if index acknowledgment was set on the HEC, and it would in fact fail silently. This will now allow users to specify if index acknowledgment is set and will work with that setting. Signed-off-by: Devon Estes <devon.c.estes@gmail.com>

view details

Sebastiaan van Stijn

commit sha 404d87ec6946aaa9c130b64c0c75514a2fcd50c0

AppArmor: add missing rules for running in userns Signed-off-by: Sebastiaan van Stijn <github@gone.nl>

view details

Sebastiaan van Stijn

commit sha 6756f5f378d0f4f9efbda50fabb5bfdef2e5c4a7

API: update docs that /session left experimental in V1.39 The `/session` endpoint left experimental in API V1.39 through 239047c2d36706f2826b0a9bc115e0a08b1c3d27 and 01c9e7082eba71cbe60ce2e47acb9aad2c83c7ef, but the API reference was not updated accordingly. This updates the API documentation to match the change. Signed-off-by: Sebastiaan van Stijn <github@gone.nl>

view details

Tibor Vass

commit sha fbdd437d295595e88466b33a550a8707b9ebb709

daemon/config: fix filter type in BuildKit GC config For backwards compatibility, the old incorrect object format for builder.GC.Rule.Filter still works but is deprecated in favor of array of strings akin to what needs to be passed on the CLI. Signed-off-by: Tibor Vass <tibor@docker.com>

view details

Tibor Vass

commit sha 85733620ebea3da75abe7d732043354aa0883f8a

daemon/config: add MarshalJSON for future proofing If anything marshals the daemon config now or in the future this commit ensures the correct canonical form for the builder GC policies' filters. Signed-off-by: Tibor Vass <tibor@docker.com>

view details

Michael Zhao

commit sha af86580000d8a2a56ad526e47358d9cdd6b9dec7

Test to enable CI on aarch64. Signed-off-by: Michael Zhao <michael.zhao@arm.com> Signed-off-by: Sebastiaan van Stijn <github@gone.nl>

view details

Michael Zhao

commit sha 48b06a2561d5779cd8317047c01a10dee8d189ff

Tailor CI for ARM, skip legacy integration test. Signed-off-by: Michael Zhao <michael.zhao@arm.com> Signed-off-by: Sebastiaan van Stijn <github@gone.nl>

view details

Sebastiaan van Stijn

commit sha 402c7b1b278683efe7ca3a2119319fda4838a392

Jenkinsfile: aarch64: move stage inside parallel group Signed-off-by: Sebastiaan van Stijn <github@gone.nl>

view details

Sebastiaan van Stijn

commit sha 86e0c5a0d4e6462b79af0db5c016bf061854ed8f

Jenkinsfile: aarch64: sync stage with other stages Also switch aarch64 to use overlay2 Signed-off-by: Sebastiaan van Stijn <github@gone.nl>

view details

Sebastiaan van Stijn

commit sha 58d57c76b5264804619c58702b2e33b3fe6253b3

Jenkinsfile: aarch64: split into stages, add "print info" unit-tests Signed-off-by: Sebastiaan van Stijn <github@gone.nl>

view details

Sebastiaan van Stijn

commit sha 9d5361de3f9b090b317cf1ddb87bd50cda29a38b

Jenkinsfile: rename aarch64 to arm64 Signed-off-by: Sebastiaan van Stijn <github@gone.nl>

view details

Sebastiaan van Stijn

commit sha 14ea1f62eb8be694cb27cca016d6146c341dba9a

Jenkinsfile: aarch64: don't restrict to packet workers only Pick whatever is available; packet worker, or auto-scaling a1.xlarge arm64 machines on AWS Signed-off-by: Sebastiaan van Stijn <github@gone.nl>

view details

Sebastiaan van Stijn

commit sha a0d670e516e893fbf6c76bd5f56306256157605f

Jenkinsfile: aarch64: sync with latest changes Signed-off-by: Sebastiaan van Stijn <github@gone.nl>

view details

Sebastiaan van Stijn

commit sha eda98ad00ffdf8f39ce3c704d02c5c2c34fb0ee0

Jenkinsfile: aarch64: use new labels to select agents Signed-off-by: Sebastiaan van Stijn <github@gone.nl>

view details

Brian Goff

commit sha e5bfaf19b1ca48203c75b38e5454a05a6bcff4f5

Fix `make cross` target When changing the various cross targets in the Dockerfile I neglected some `;`. Instead of dealing with that now this just sets `--platform` on the cross specific targets which only work on linux/amd64 anyway. Signed-off-by: Brian Goff <cpuguy83@gmail.com>

view details

Kir Kolyshkin

commit sha 93f9b902af89f82367d750aa871d40f25ccd99ca

go-swagger: fix panic This is an attempt to fix go-swagger panic under Golang 1.13. Details: * https://github.com/go-openapi/jsonpointer/pull/4 * https://github.com/go-swagger/go-swagger/pull/2059 Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>

view details

Sebastiaan van Stijn

commit sha 1be2cc2568ddd52a3b27d607a9c9f7c9347d3c50

Makefile: force using buildkit if USE_BUILDX is not set Before this change: ``` unset DOCKER_BUILDKIT make build docker build --build-arg=CROSS=false -t "docker-dev:require-buildkit" -f "Dockerfile" . Sending build context to Docker daemon 50.01MB Error response from daemon: Dockerfile parse error line 17: Unknown flag: mount make: *** [build] Error 1 ``` After this change: ``` unset DOCKER_BUILDKIT make build docker build --build-arg=CROSS=false -t "docker-dev:require-buildkit" -f "Dockerfile" . [+] Building 5.2s (71/71) FINISHED => [internal] load .dockerignore 0.1s ... ... => => exporting layers 0.9s => => writing image sha256:1ea4128a0e7f3bdee47de1675252609d9d6071e32da24a2aafee9fba96b2404b 0.0s => => naming to docker.io/library/docker-dev:require-buildkit 0.0s ... Signed-off-by: Sebastiaan van Stijn <github@gone.nl>

view details

Tibor Vass

commit sha 448db5a783a0fa989796589877ab7bd7f1869b56

Merge pull request #40060 from thaJeztah/require_buildkit Makefile: force using buildkit if USE_BUILDX is not set

view details

Sebastiaan van Stijn

commit sha 6afe0f38f6ed9874373cb179d8484f188e6c2d5b

integration-cli: make testRequires() a Helper Make this utility a helper, so that the "skip" message is printing the location of the test, instead of the location of the helper, which is what it's printing now: requirement.go:26: unmatched requirement bridgeNfIptables Signed-off-by: Sebastiaan van Stijn <github@gone.nl>

view details

push time in 18 days

create barnchsuwang48404/libnetwork

branch : enc-fix

created branch time in 19 days

pull request commentdocker/libnetwork

DOCKER-USER chain not created when IPTableEnable=false.

-- in the most common container use cases (network mode=bridge, overlay), I feel "iptables=false" should be ignored even if it is set. This is because without the docker engine manipulating iptables NATting/forwarding rules, connectivity to those containers is broken.

I know that there are definitely users running with --iptables=false; their use case is that they don't want docker to automatically expose ports / traffic to the containers, and they manually set up rules to make networking work

Hi Sebastiaan,

Thx for the heads-up with regard to the iptables=false use case, I did not know that ...

The question is: are we comfortable changing existing docker engine behavior and strictly enforcing iptables=false, i.e. no iptables programming at all, or should we err on the side of caution?

Thx, Su

suwang48404

comment created time in a month

PullRequestEvent

PR closed docker/libnetwork

DOCKER-USER chain not created when IPTableEnable=false.

This fix addresses https://docker.atlassian.net/browse/ENGCORE-1115 Expected behaviors upon docker engine restarts:

  1. IPTableEnable=true, DOCKER-USER chain present -- no change to DOCKER-USER chain
  2. IPTableEnable=true, DOCKER-USER chain not present -- DOCKER-USER chain created and inserted at the top of the FORWARD chain.
  3. IPTableEnable=false, DOCKER-USER chain present -- no change to DOCKER-USER chain. The rationale is that DOCKER-USER may be populated and used by the end user for purposes other than filtering docker container traffic. Thus even if IPTableEnable=false, the docker engine does not touch a pre-existing DOCKER-USER chain.
  4. IPTableEnable=false, DOCKER-USER chain not present -- DOCKER-USER chain is not created.
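The four cases above reduce to a simple rule: the chain is only ever created when iptables management is enabled, and an existing chain is never touched. A minimal sketch of that decision logic (this is an illustration of the described behavior, not the actual libnetwork code; the function name is hypothetical):

```go
package main

import "fmt"

// decideUserChainAction sketches the four engine-restart cases for the
// DOCKER-USER chain described in the PR. Hypothetical helper, not the
// real libnetwork API.
func decideUserChainAction(ipTablesEnabled, chainExists bool) string {
	switch {
	case ipTablesEnabled && !chainExists:
		return "create DOCKER-USER and insert at top of FORWARD"
	default:
		// Enabled with an existing chain: leave it alone.
		// Disabled: never create, and never touch a pre-existing
		// chain the end user may rely on for other purposes.
		return "no change"
	}
}

func main() {
	fmt.Println(decideUserChainAction(true, true))   // no change
	fmt.Println(decideUserChainAction(true, false))  // create + insert
	fmt.Println(decideUserChainAction(false, true))  // no change
	fmt.Println(decideUserChainAction(false, false)) // no change
}
```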
+158 -11

5 comments

4 changed files

suwang48404

pr closed time in a month

pull request commentdocker/libnetwork

DOCKER-USER chain not created when IPTableEnable=false.

Some useful documentation in this area - https://docs.docker.com/network/iptables/

This won't take care of the cases where DOCKER-INGRESS is created when there are services running.

IMHO a clean way to fix this could be to incorporate this flag into https://github.com/docker/libnetwork/blob/master/iptables/iptables.go and error out from functions such as NewChain or ProgramChain when the flag is disabled, with an error saying iptables is disabled in the config.
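Roughly, the suggestion above would gate every programming entry point on one package-level flag. A sketch under stated assumptions: the `enabled` variable and `SetEnabled` setter are hypothetical, and the real `iptables.NewChain` has a different signature; only the guard pattern is the point here.

```go
package main

import (
	"errors"
	"fmt"
)

// errDisabled is returned by every programming entry point once the
// daemon is configured with iptables=false, so callers fail fast
// instead of silently mutating the host firewall.
var errDisabled = errors.New("iptables is disabled in the daemon config")

// enabled is a hypothetical package-level flag; the real
// libnetwork/iptables package exposes no such switch today.
var enabled = true

// SetEnabled would be called once from the driver configure path.
func SetEnabled(v bool) { enabled = v }

// NewChain sketches the guard the reviewer proposes at the top of
// chain-creation functions (ProgramChain would get the same check).
func NewChain(name, table string) error {
	if !enabled {
		return errDisabled
	}
	// ... real implementation would invoke the iptables binary here ...
	return nil
}

func main() {
	SetEnabled(false)
	if err := NewChain("DOCKER-USER", "filter"); err != nil {
		fmt.Println("refused:", err)
	}
}
```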

suwang48404

comment created time in a month

pull request commentdocker/libnetwork

DOCKER-USER chain not created when IPTableEnable=false.

@joeabbey

suwang48404

comment created time in a month

pull request commentdocker/libnetwork

DOCKER-USER chain not created when IPTableEnable=false.

@arkodg @selansen @euanh @chiragtayal

suwang48404

comment created time in a month

push eventsuwang48404/libnetwork

Su Wang

commit sha 55c56998f8a5b98250f376eeeac1104f8ef4700e

DOCKER-USER chain not created when IPTableEnable=false. This fix addresses https://docker.atlassian.net/browse/ENGCORE-1115 Expected behaviors upon docker engine restarts: 1. IPTableEnable=true, DOCKER-USER chain present -- no change to DOCKER-USER chain 2. IPTableEnable=true, DOCKER-USER chain not present -- DOCKER-USER chain created and inserted top of FORWARD chain. 3. IPTableEnable=false, DOCKER-USER chain present -- no change to DOCKER-USER chain the rational is that DOCKER-USER is populated and may be used by end-user for purpose other than filtering docker container traffic. Thus even if IPTableEnable=false, docker engine does not touch pre-existing DOCKER-USER chain. 4. IPTableEnable=false, DOCKER-USER chain not present -- DOCKER-USER chain is not created. Signed-off-by: Su Wang <su.wang@docker.com>

view details

push time in a month

push eventsuwang48404/libnetwork

Su Wang

commit sha f148a99f2c241a556b91ac917062569717b9d08d

DOCKER-USER chain not created when IPTableEnable=false. This fix addresses https://docker.atlassian.net/browse/ENGCORE-1115 Expected behaviors upon docker engine restarts: 1. IPTableEnable=true, DOCKER-USER chain present -- no change to DOCKER-USER chain 2. IPTableEnable=true, DOCKER-USER chain not present -- DOCKER-USER chain created and inserted top of FORWARD chain. 3. IPTableEnable=false, DOCKER-USER chain present -- no change to DOCKER-USER chain the rational is that DOCKER-USER is populated and may be used by end-user for purpose other than filtering docker container traffic. Thus even if IPTableEnable=false, docker engine does not touch pre-existing DOCKER-USER chain. 4. IPTableEnable=false, DOCKER-USER chain not present -- DOCKER-USER chain is not created. Signed-off-by: Su Wang <su.wang@docker.com>

view details

push time in a month

push eventsuwang48404/libnetwork

Su Wang

commit sha 651eabc44c3f426c091fa331543152c5ce487029

DOCKER-USER chain not created when IPTableEnable=false. This fix addresses https://docker.atlassian.net/browse/ENGCORE-1115 Expected behaviors upon docker engine restarts: 1. IPTableEnable=true, DOCKER-USER chain present -- no change to DOCKER-USER chain 2. IPTableEnable=true, DOCKER-USER chain not present -- DOCKER-USER chain created and inserted top of FORWARD chain. 3. IPTableEnable=false, DOCKER-USER chain present -- no change to DOCKER-USER chain the rational is that DOCKER-USER is populated and may be used by end-user for purpose other than filtering docker container traffic. Thus even if IPTableEnable=false, docker engine does not touch pre-existing DOCKER-USER chain. 4. IPTableEnable=false, DOCKER-USER chain not present -- DOCKER-USER chain is not created. Signed-off-by: Su Wang <su.wang@docker.com>

view details

push time in a month

PR opened docker/libnetwork

DOCKER-USER chain not created when IPTableEnable=false.

This fix addresses https://docker.atlassian.net/browse/ENGCORE-1115 Expected behaviors upon docker engine restarts:

  1. IPTableEnable=true, DOCKER-USER chain present -- no change to DOCKER-USER chain
  2. IPTableEnable=true, DOCKER-USER chain not present -- DOCKER-USER chain created and inserted at the top of the FORWARD chain.
  3. IPTableEnable=false, DOCKER-USER chain present -- no change to DOCKER-USER chain. The rationale is that DOCKER-USER may be populated and used by the end user for purposes other than filtering docker container traffic. Thus even if IPTableEnable=false, the docker engine does not touch a pre-existing DOCKER-USER chain.
  4. IPTableEnable=false, DOCKER-USER chain not present -- DOCKER-USER chain is not created.
+156 -9

0 comment

3 changed files

pr created time in a month

push eventsuwang48404/libnetwork

Su Wang

commit sha ea6a3597563cf6b152913d3c8af2a345778eca3b

DOCKER-USER chain not created when IPTableEnable=false. This fix addresses https://docker.atlassian.net/browse/ENGCORE-1115 Expected behaviors upon docker engine restarts: 1. IPTableEnable=true, DOCKER-USER chain present -- no change to DOCKER-USER chain 2. IPTableEnable=true, DOCKER-USER chain not present -- DOCKER-USER chain created and inserted top of FORWARD chain. 3. IPTableEnable=false, DOCKER-USER chain present -- no change to DOCKER-USER chain the rational is that DOCKER-USER is populated and may be used by end-user for purpose other than filtering docker container traffic. Thus even if IPTableEnable=false, docker engine does not touch pre-existing DOCKER-USER chain. 4. IPTableEnable=false, DOCKER-USER chain not present -- DOCKER-USER chain is not created.

view details

push time in a month

push eventsuwang48404/libnetwork

Su Wang

commit sha 44efffdcd8baf335edb33d69a678767b17e7d511

DOCKER-USER chain not created when IPTableEnable=false. Expected behaviors upon docker engine restarts: 1. IPTableEnable=true, DOCKER-USER chain present -- no change to DOCKER-USER chain 2. IPTableEnable=true, DOCKER-USER chain not present -- DOCKER-USER chain created and inserted top of FORWARD chain. 3. IPTableEnable=false, DOCKER-USER chain present -- no change to DOCKER-USER chain the rational is that DOCKER-USER is populated and may be used by end-user for purpose other than filtering docker container traffic. Thus even if IPTableEnable=false, docker engine does not touch pre-existing DOCKER-USER chain. 4. IPTableEnable=false, DOCKER-USER chain not present -- DOCKER-USER chain is not created.

view details

push time in a month

push eventsuwang48404/libnetwork

Arko Dasgupta

commit sha 8db595c16cc600afa99eeb47e172f38bbab646ce

Revert "Merge pull request #2339 from phyber/iptables-check" This reverts commit 820deef78e53c49f13797a93537325a4b8d53014, reversing changes made to 19e372a98f736c48e65563db5d7a474fa42d94b4. Signed-off-by: Arko Dasgupta <arko.dasgupta@docker.com>

view details

elangovan sivanandam

commit sha 90afbb01e1d8acacb505a092744ea42b9f167377

Merge pull request #2466 from arkodg/revert-iptables-docker-user Revert "Merge pull request #2339 from phyber/iptables-check"

view details

elangovan sivanandam

commit sha 79c19d09290f1a4a83e8bc786db3bf974473eda0

Merge pull request #2461 from suwang48404/master Allowed libnetwork caller to set ephemeral port

view details

Su Wang

commit sha 9726dbc1c484a4ba315c17e6864f1329b2299cb7

DOCKER-USER chain not created when IPTableEnable is false. Expected behaviors upon docker engine restarts: 1. IPTableEnable=true, DOCKER-USER chain present -- no change 2. IPTableEnable=true, DOCKER-USER chain not present -- DOCKER-USER chain created and inserted top of FORWARD chain. 3. IPTableEnable=false, DOCKER-USER chain present -- no change the rational is that DOCKER-USER is populated and may be used by end-user for purpose other than filtering docker container traffic. Thus even if IPTableEnable=false, docker engine does not touch pre-existing DOCKER-USER chain. 4. IPTableEnable=false, DOCKER-USER chain not present -- no change

view details

push time in a month

Pull request review commentdocker/libnetwork

Allowed libnetwork caller to set ephemeral port

```
 package portallocator

 import (
 	"errors"
 	"fmt"
+	"github.com/sirupsen/logrus"
 	"net"
 	"sync"
 )

-const (
-	// DefaultPortRangeStart indicates the first port in port range
-	DefaultPortRangeStart = 49153
-	// DefaultPortRangeEnd indicates the last port in port range
-	DefaultPortRangeEnd = 65535
+var (
+	// defaultPortRangeStart indicates the first port in port range
+	defaultPortRangeStart = 49153
+	// defaultPortRangeEnd indicates the last port in port range
+	// consistent with default /proc/sys/net/ipv4/ip_local_port_range
+	// upper bound on linux
+	defaultPortRangeEnd = 60999
 )

+func sanitizePortRange(start int, end int) (newStart, newEnd int, err error) {
+	if start > defaultPortRangeEnd || end < defaultPortRangeStart || start > end {
```

this is a sanity check.

Say initially the port range is set to 43456-60999 and everything works great. Now a (naive) customer comes in and changes published-port-range to 80000-90000 (which is larger than (2<<16)-1). libnetwork would reject this illegal range and continue to use the existing/working range for port allocation.
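Based on the hunk under review, the full validation might look like the sketch below. Only the guard condition is from the diff; the clamping of a partially overlapping range into the default window is an assumption about the rest of the function.

```go
package main

import "fmt"

// Defaults mirror the hunk under review (Linux ephemeral upper bound).
var (
	defaultPortRangeStart = 49153
	defaultPortRangeEnd   = 60999
)

// sanitizePortRange reproduces the guard shown in the review hunk and,
// as an assumption beyond what the hunk shows, clamps a partially
// overlapping range into the allowed default window.
func sanitizePortRange(start, end int) (newStart, newEnd int, err error) {
	if start > defaultPortRangeEnd || end < defaultPortRangeStart || start > end {
		return 0, 0, fmt.Errorf("invalid port range [%d, %d]", start, end)
	}
	newStart, newEnd = start, end
	if newStart < defaultPortRangeStart {
		newStart = defaultPortRangeStart
	}
	if newEnd > defaultPortRangeEnd {
		newEnd = defaultPortRangeEnd
	}
	return newStart, newEnd, nil
}

func main() {
	// The reviewer's example: 80000-90000 exceeds (2<<16)-1 and is
	// rejected, so the existing working range stays in effect.
	if _, _, err := sanitizePortRange(80000, 90000); err != nil {
		fmt.Println("rejected:", err)
	}
	s, e, _ := sanitizePortRange(50000, 60000)
	fmt.Println(s, e) // → 50000 60000
}
```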

suwang48404

comment created time in a month

push eventsuwang48404/libnetwork

Su Wang

commit sha 94facacc0c4823b6e2f679913737da32f2d521da

Added API to set ephemeral port allocator range. Also reduce the allowed port range as the total number of containers per host is typically less than 1K. This change helps in scenarios where there are other services on the same host that uses ephemeral ports in iptables manipulation. The workflow requires changes in docker engine ( https://github.com/moby/moby/pull/40055) and this change. It works as follows: 1. user can now specified to docker engine an option --published-port-range="50000-60000" as cmdline argument or in daemon.json. 2. docker engine read and pass this info to libnetwork via config.go:OptionDynamicPortRange. 3. libnetwork uses this range to allocate dynamic port henceforth. 4. --published-port-range can be set either via SIGHUP or restart docker engine 5. if --published-port-range is not set by user, a OS specific default range is used for dynamic port allocation. Linux: 49153-60999, Windows: 60000-65000 6 if --published-port-range is invalid, that is, the range given is outside of allowed default range, no change takes place. libnetwork will continue to use old/existing port range for dynamic port allocation. Signed-off-by: Su Wang <su.wang@docker.com>

view details

push time in a month

Pull request review commentdocker/libnetwork

Allowed libnetwork caller to set ephemeral port

```
 package portallocator

-const (
-	StartPortRange = 60000
-	EndPortRange   = 65000
-)
+func init() {
```

ditto

suwang48404

comment created time in a month

Pull request review commentdocker/libnetwork

Allowed libnetwork caller to set ephemeral port

```
 package portallocator

 import (
 	"errors"
 	"fmt"
+	"github.com/sirupsen/logrus"
 	"net"
 	"sync"
 )

-const (
-	// DefaultPortRangeStart indicates the first port in port range
-	DefaultPortRangeStart = 49153
-	// DefaultPortRangeEnd indicates the last port in port range
-	DefaultPortRangeEnd = 65535
+var (
```

I want to share DefaultPortRangeStart/End across all OSs; the idea is that at func init(), each OS will modify them to the default range specific to that OS. From that point on, this range can be used by the common sanitize function to validate any change request.

On the other hand, if they are constants, they cannot be modified. Then each OS would need its own set of default ranges and its own sanitize function.

suwang48404

comment created time in a month

push eventsuwang48404/libnetwork

Su Wang

commit sha bee3ffb54d5433330d381dd29f420a739bb80579

Added API to set ephemeral port allocator range. Also reduce the allowed port range as the total number of containers per host is typically less than 1K. This change helps in scenarios where there are other services on the same host that uses ephemeral ports in iptables manipulation. Signed-off-by: Su Wang <su.wang@docker.com>

view details

push time in a month

Pull request review commentdocker/libnetwork

Add DOCKER-USER chain when iptables=true is set

```
 func (d *driver) configure(option map[string]interface{}) error {
 		}
 		// Make sure on firewall reload, first thing being re-played is chains creation
 		iptables.OnReloaded(func() { logrus.Debugf("Recreating iptables chains on firewall reload"); setupIPChains(config) })
+
+		// Add DOCKER-USER chain
+		arrangeUserFilterRule()
```

I suppose for any *nix we expect to create a bridge. I see somewhere there are _bsd-specific implementations, so I am guessing that at some point at least BSD was supported?

arkodg

comment created time in a month

Pull request review commentdocker/libnetwork

Add DOCKER-USER chain when iptables=true is set

```
 const (
 	vethLen                    = 7
 	defaultContainerVethPrefix = "eth"
 	maxAllocatePortAttempts    = 10
+	userChain                  = "DOCKER-USER"
```

By moving arrangeUserFilterRule to the bridge driver only, it means that unless a bridge network is created, there won't be a DOCKER-USER chain. It will work since dockerd always creates the docker0 bridge, but to me the original approach is cleaner.

arkodg

comment created time in a month

Pull request review commentdocker/libnetwork

Add DOCKER-USER chain when iptables=true is set

```
 const (
 	vethLen                    = 7
 	defaultContainerVethPrefix = "eth"
 	maxAllocatePortAttempts    = 10
+	userChain                  = "DOCKER-USER"
```

I have similar questions as Elango: why move this to bridge.go? Maybe the real problem is that controller.hasIPTableEnabled() isn't returning the correct value?

arkodg

comment created time in a month

Pull request review commentdocker/libnetwork

Add DOCKER-USER chain when iptables=true is set

```
 func (d *driver) configure(option map[string]interface{}) error {
 		}
 		// Make sure on firewall reload, first thing being re-played is chains creation
 		iptables.OnReloaded(func() { logrus.Debugf("Recreating iptables chains on firewall reload"); setupIPChains(config) })
+
+		// Add DOCKER-USER chain
+		arrangeUserFilterRule()
```

Now arrangeUserFilterRule is called on all systems (linux/windows/bsd etc.); is this intentional?

arkodg

comment created time in a month

Pull request review commentdocker/libnetwork

Fix panic in drivers/overlay/encryption.go

```
 func (d *driver) updateKeys(newKey, primary, pruneKey *key) error {
 		}
 	}

+	d.secMapWalk(func(rIPs string, spis []*spi) ([]*spi, bool) {
+		rIP := net.ParseIP(rIPs)
+		return updateNodeKey(lIP, aIP, rIP, spis, d.keys, newIdx, priIdx, delIdx), false
+	})
+
 	if (newKey != nil && newIdx == -1) ||
 		(primary != nil && priIdx == -1) ||
```

If my understanding is correct, there is a mismatch between driver.key and driver.secMap.node[].spi when an error happens here. driver.key is already appended at line 466, and that is the only change to the driver state.

I wonder if it would be better to revert the driver.key state when there is an error here? That way driver.key and driver.secMap.node[].spi are kept consistent.

arkodg

comment created time in a month

push eventsuwang48404/moby

Derek McGowan

commit sha 6c94a50f4198fffa44f93d627044f1ca43545081

update containerd binary v1.3.0 full diff: https://github.com/containerd/containerd/compare/v1.2.8..v1.3.0 Signed-off-by: Sebastiaan van Stijn <github@gone.nl> Signed-off-by: Derek McGowan <derek@mcgstyle.net>

view details

Derek McGowan

commit sha 12f9887c8edb38177041aa35fd7d948b5c64f15b

bump containerd v1.3.0 full diff: https://github.com/containerd/containerd/compare/7c1e88399ec0b0b077121d9d5ad97e647b11c870...v1.3.0 Signed-off-by: Sebastiaan van Stijn <github@gone.nl> Signed-off-by: Derek McGowan <derek@mcgstyle.net>

view details

Sebastiaan van Stijn

commit sha 1617be92d301de1386adabad5f241d3653b6c8ff

bump containerd/go-runc e029b79d8cda8374981c64eba71f28ec38e5526f - github.com/containerd/go-runc https://github.com/containerd/go-runc/compare/7d11b49dc0769f6dbb0d1b19f3d48524d1bad9ad...e029b79d8cda8374981c64eba71f28ec38e5526f - containerd/go-runc#52 Fix Method of judging command execution failure - fixes "init.pid: no such file or directory: unknown" errors - containerd/go-runc#54 avoid setting NOTIFY_SOCKET from calling process Signed-off-by: Sebastiaan van Stijn <github@gone.nl> Signed-off-by: Derek McGowan <derek@mcgstyle.net>

view details

Sebastiaan van Stijn

commit sha 0af1099a81861dd0269adad53bdfb387b5c78f39

bump containerd/cgroups c4b9ac5c7601384c965b9646fc515884e091ebb9 full diff: github.com/containerd/cgroups https://github.com/containerd/cgroups/compare/4994991857f9b0ae8dc439551e8bebdbb4bf66c1...c4b9ac5c7601384c965b9646fc515884e091ebb9 changes included: - containerd/cgroups#81 Add network stats - addresses containerd/cgroups#80 Add network metrics - containerd/cgroups#85 Fix cgroup hugetlb size prefix for kB - addresses kubernetes/kubernetes#77169 Permission denied on hugetlb due to wrong filename - relates to opencontainers/runc#2065 Fix cgroup hugetlb size prefix for kB - containerd/cgroups#88 cgroups: fix MoveTo function fail problem - containerd/cgroups#92 fixed an issue with invalid soft memory limits - containerd/cgroups#93 avoid adding io_serviced and io_service_bytes duplicately - fixes containerd/containerd#3412 collected metric container_blkio_io_serviced_recursive_total: was collected before with the same name and label values Signed-off-by: Sebastiaan van Stijn <github@gone.nl>

view details

Sebastiaan van Stijn

commit sha 56357b73dabde4db4d0b627d9c2f5df766c509e2

bump containerd/continuity f2a389ac0a02ce21c09edd7344677a601970f41c full diff: https://github.com/containerd/continuity/compare/aaeac12a7ffcd198ae25440a9dff125c2e2703a7...f2a389ac0a02ce21c09edd7344677a601970f41c Signed-off-by: Sebastiaan van Stijn <github@gone.nl>

view details

Sebastiaan van Stijn

commit sha 0b5dcdc5d759e3ad612e814f00ee5dbae43335e4

bump containerd/fifo bda0ff6ed73c67bfb5e62bc9c697f146b7fd7f13 full diff: https://github.com/containerd/fifo/compare/a9fb20d87448d386e6d50b1f2e1fa70dcf0de43c...bda0ff6ed73c67bfb5e62bc9c697f146b7fd7f13 Signed-off-by: Sebastiaan van Stijn <github@gone.nl>

view details

Derek McGowan

commit sha bc5484d2dd5039e4fcb6774ae568002fd04efc7a

bump moby/buildkit f7042823e340d38d1746aa675b83d1aca431cee3 full diff: https://github.com/moby/buildkit/compare/588c73e1e4f0f3d7d3738abaaa7cf8026064b33e...f7042823e340d38d1746aa675b83d1aca431cee3 Signed-off-by: Sebastiaan van Stijn <github@gone.nl> fix daemon for changes in containerd registry configuration Signed-off-by: Evan Hazlett <ejhazlett@gmail.com> Update buildernext and daemon for buildkit update Signed-off-by: Derek McGowan <derek@mcgstyle.net>

view details

Sebastiaan van Stijn

commit sha 82097c0f1f0494d75b1088b59453a2d0feeb9735

bump hashicorp/golang-lru v0.5.3 full diff: https://github.com/hashicorp/golang-lru/compare/v0.5.1...v0.5.3 Signed-off-by: Sebastiaan van Stijn <github@gone.nl>

view details

Sebastiaan van Stijn

commit sha d5f07220fce4a52e9b653f0f7e831aa5b71f8308

integration-cli: DockerSwarmSuite: show output on failures Unfortunately quite some of these tests do output-matching, which may be CLI dependent; this patch prints the output string, to help debugging failures that may be related to the output having changed between CLI versions. Signed-off-by: Sebastiaan van Stijn <github@gone.nl>

view details

Kir Kolyshkin

commit sha 8663d0933439acd8187c376d64a57ec8ffee511e

devmapper: fix unit test It has been pointed out that sometimes device mapper unit tests fail with the following diagnostics: > --- FAIL: TestDevmapperSetup (0.02s) > graphtest_unix.go:44: graphdriver: loopback attach failed > graphtest_unix.go:48: loopback attach failed The root cause is the absence of udev inside the container used for testing, which causes device nodes (/dev/loop*) to not be created. The test suite itself already has a workaround, but it only creates 8 devices (loop0 till loop7). It might very well be the case that the first few devices are already used by the system (on my laptop 15 devices are busy). The fix is to raise the number of devices being manually created. Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>

view details

Brian Goff

commit sha 94254871178787246d899392256709d0560701ad

Merge pull request #40054 from thaJeztah/swarmtests_more_debugging integration-cli: DockerSwarmSuite: show output on failures

view details

Andrew Hsu

commit sha 7450f89f6c8bfae0f81cecca0414beb9d1a0ae2a

integration-cli: TestUserDefinedNetworkConnectDisconnectAlias return on failure Have the test return immediately if the test does not pass instead of stuck in `top`. Signed-off-by: Andrew Hsu <andrewhsu@docker.com>

view details

Andrew Hsu

commit sha 318e279fd8f7bb98eba4890d25e24b4ca86185b5

integration-cli: TestDockerNetworkConnectLinkLocalIP return on failure Signed-off-by: Andrew Hsu <andrewhsu@docker.com>

view details

Brian Goff

commit sha 4faf65f2508f71802f115ad95b6410738e456357

Merge pull request #40056 from kolyshkin/dm-unit devmapper: fix unit test

view details

Tibor Vass

commit sha 72befc22185fadfa8a19aef3b6d3eb7d6ce6e8dd

Merge pull request #40043 from andrewhsu/true integration-cli: TestUserDefinedNetworkConnectDisconnectAlias return on failure

view details

Tibor Vass

commit sha dba8da8158452daaca4ef7546a5ba53c94d7ae3c

Merge pull request #40057 from andrewhsu/truetop integration-cli: TestDockerNetworkConnectLinkLocalIP return on failure

view details

Tibor Vass

commit sha b3be2802d417d05813d6140b5bc177043288db24

Merge pull request #39713 from thaJeztah/containerd_1.3 bump containerd and dependencies to v1.3.0

view details

Su Wang

commit sha c1c9ab3e8b457f9d05d98f7923140bd6c69fc89f

Added option to change port allocation range for ephemeral service ports. Note: cannot checkin until https://github.com/docker/libnetwork/pull/2461 is merged and vendored.

view details

push time in a month

push eventsuwang48404/libnetwork

Su Wang

commit sha 21902ccf0c754d19e7a5149961870e5544139bbe

Allowed libnetwork caller to set ephemeral port allocator range in runtime. Also reduce the allowed port range as the total number of containers per host is typically less than 1K. This change helps in scenarios where there are other services on the same host that uses ephemeral ports in iptables manipulation. Signed-off-by: Su Wang <su.wang@docker.com>

view details

push time in a month

push eventsuwang48404/libnetwork

Su Wang

commit sha 37c282788a8caec37ac409a8f96522e7af19e398

Allowed libnetwork caller to set ephemeral port allocator range in runtime. Also reduce the allowed port range as the total number of containers per host is typically less than 1K. This change helps in scenarios where there are other services on the same host that uses ephemeral ports in iptables manipulation. Signed-off-by: Su Wang <su.wang@docker.com>

view details

push time in a month

Pull request review commentmoby/moby

Added option to change port allocation range for ephemeral service po…

```
 func installCommonConfigFlags(conf *config.Config, flags *pflag.FlagSet) error {
 	flags.Var(opts.NewNamedListOptsRef("node-generic-resources", &conf.NodeGenericResources, opts.ValidateSingleGenericResource), "node-generic-resource", "Advertise user-defined resource")

 	flags.IntVar(&conf.NetworkControlPlaneMTU, "network-control-plane-mtu", config.DefaultNetworkMtu, "Network Control plane MTU")
+	flags.StringVar(&conf.HostServiceDynamicPortRange, "host-service-dynamic-port-range", "", "Port allocator range")
```

The flag is --service-port-range "50000-60000"

suwang48404

comment created time in a month

Pull request review commentmoby/moby

Added option to change port allocation range for ephemeral service po…

```
 func (daemon *Daemon) networkOptions(dconfig *config.Config, pg plugingetter.Plu

 	options = append(options, nwconfig.OptionNetworkControlPlaneMTU(dconfig.NetworkControlPlaneMTU))

+	if len(dconfig.NetworkConfig.HostServiceDynamicPortRange) > 0 {
```

the validation is in libnetwork

suwang48404

comment created time in a month

Pull request review commentmoby/moby

Added option to change port allocation range for ephemeral service po…

```
 func installCommonConfigFlags(conf *config.Config, flags *pflag.FlagSet) error {
 	flags.Var(opts.NewNamedListOptsRef("node-generic-resources", &conf.NodeGenericResources, opts.ValidateSingleGenericResource), "node-generic-resource", "Advertise user-defined resource")

 	flags.IntVar(&conf.NetworkControlPlaneMTU, "network-control-plane-mtu", config.DefaultNetworkMtu, "Network Control plane MTU")
+	flags.StringVar(&conf.HostServiceDynamicPortRange, "host-service-dynamic-port-range", "", "Port allocator range")
```

Yes, it will impact both swarm and docker run -P; I will think of a better name.

suwang48404

comment created time in a month

pull request commentmoby/moby

Added option to change port allocation range for ephemeral service po…

@euanh @chiragtayal

suwang48404

comment created time in a month

push eventsuwang48404/libnetwork

Su Wang

commit sha e7839b8fb945409c3442151edc6c2ebd7c2c8804

Allowed libnetwork caller to set ephemeral port allocator range in runtime. Also reduce the allowed port range as the total number of containers per host is typically less than 1K. This change helps in scenarios where there are other services on the same host that uses ephemeral ports in iptables manipulation. Signed-off-by: Su Wang <su.wang@docker.com>

view details

push time in a month

pull request commentmoby/moby

Added option to change port allocation range for ephemeral service po…

@selansen @arkodg

suwang48404

comment created time in a month

PR opened moby/moby

Added option to change port allocation range for ephemeral service po…

…rts.

Note: cannot checkin until https://github.com/docker/libnetwork/pull/2461 is merged and vendored.


- What I did

- How I did it

- How to verify it

- Description for the changelog <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: -->

- A picture of a cute animal (not mandatory but encouraged)

+26 -0

0 comment

4 changed files

pr created time in a month

push eventsuwang48404/moby

Olli Janatuinen

commit sha 8660330173e5053e274cf12860079f132cbaa9fa

Unit test for getOrphan Signed-off-by: Olli Janatuinen <olli.janatuinen@gmail.com>

view details

Jintao Zhang

commit sha f8f6f7c2a0e1e1e8b541b29b0f1bdae44964e714

cleanup: remove SetDead function Signed-off-by: Jintao Zhang <zhangjintao9020@gmail.com>

view details

Kamil Domański

commit sha 186e22d26e7cf6e4d6f718257c653e496850914a

include IPv6 address of linked containers in /etc/hosts Signed-off-by: Kamil Domański <kamil@domanski.co>

view details

Jintao Zhang

commit sha e6fce00ec83df2f23523b836f647b8f3df97953f

TestCase: use `icmd.RunCmd` instead `icmd.StartCmd` Use `cli.Docker` instead `dockerCmdWithResult`. Signed-off-by: Jintao Zhang <zhangjintao9020@gmail.com>

view details

Vikram bir Singh

commit sha ebf12dbda08633375ab12387255d3f617ee9be38

Reimplement iteration over fileInfos in getOrphan. 1. Reduce complexity due to nested if blocks by using early return/continue 2. Improve logging Changes suggested as a part of code review comments in 39748 Signed-off-by: Vikram bir Singh <vikrambir.singh@docker.com>

view details

Sebastiaan van Stijn

commit sha 8e8c52c4abe011a4cf3334da0726ef1fc0d17b14

hack/ci/windows.ps1: explicitly set exit code to result of tests Trying to see if this helps with the cleanup step exiting in CI, but Jenkins continuing to wait for the script to end afterwards. Signed-off-by: Sebastiaan van Stijn <github@gone.nl>

view details

Sebastiaan van Stijn

commit sha 7eb522a2350d759cf6a9aad493ac1b8ffc3d3335

hack/ci/windows.ps1 print all environment variables to check how Jenkins runs this script Signed-off-by: Sebastiaan van Stijn <github@gone.nl>

view details

Sebastiaan van Stijn

commit sha b6f596c4112818109441c84d313cf38fa06d6768

hack/ci/windows.ps1: add support for DOCKER_STORAGE_OPTS Signed-off-by: Sebastiaan van Stijn <github@gone.nl>

view details

Sebastiaan van Stijn

commit sha 61450a651ba8fe4ef3cac284482a9495fc0b761d

hack/ci/windows.ps1: fix Go version check (due to trailing .0) The Windows Dockerfile downloads the Go binaries, which (unlike the Golang images) do not have a trailing `.0` in their version. Signed-off-by: Sebastiaan van Stijn <github@gone.nl>

view details

Sebastiaan van Stijn

commit sha a3f9cb5b635a76484a926289bfa4fc6feee1763a

TestDispatch: refactor to use subtests again, and fix linting (structcheck) Instead of using a `initDispatchTestCases()` function, declare the test-table inside `TestDispatch` itself, and run the tests as subtests. ``` [2019-08-27T15:14:51.072Z] builder/dockerfile/evaluator_test.go:18:2: `name` is unused (structcheck) [2019-08-27T15:14:51.072Z] name, expectedError string ``` Signed-off-by: Sebastiaan van Stijn <github@gone.nl>

view details

Stephen Benjamin

commit sha 89dd10b06efe93d4f427057f043abf560c461281

archive: fix race condition in cmdStream There is a race condition in pkg/archive when using `cmd.Start` for pigz and xz where the `*bufio.Reader` could be returned to the pool while the command is still writing to it, and then picked up and used by a new command. The command is wrapped in a `CommandContext` where the process will be killed when the context is cancelled, however this is not instantaneous, so there's a brief window while the command is still running but the `*bufio.Reader` was already returned to the pool. wrapReadCloser calls `cancel()`, and then `readBuf.Close()` which eventually returns the buffer to the pool. However, because cmdStream runs `cmd.Wait` in a go routine that we never wait for to finish, it is not safe to return the reader to the pool yet. We need to ensure we wait for `cmd.Wait` to finish! Signed-off-by: Stephen Benjamin <stephen@redhat.com>

view details

Sebastiaan van Stijn

commit sha f2498e21c4fa0fdd2025ab493038d7c56543608a

hack/make: remove autogen resources for Docker CLI the files used by the docker cli were moved to the docker/cli repository, so are no longer needed here. Signed-off-by: Sebastiaan van Stijn <github@gone.nl>

view details

Sebastiaan van Stijn

commit sha a9aeda834398b4e604a888d25e7711e9151cb269

Rename some references to docker.exe to dockerd.exe Signed-off-by: Sebastiaan van Stijn <github@gone.nl>

view details

Sebastiaan van Stijn

commit sha 5adaf52953476bf0ff612c8d78500e35821035bb

integration-cli: Skip TestAPIImagesSaveAndLoad on RS3 and older I've seen this test fail a number of times recently on RS1. Looking at failures, the test is taking a long time to run (491.77s, which is more than 8 minutes), so perhaps it's just too slow on RS1, which may be because we switched to a different base image, or because we're now running on different machines. Compared to RS5 (still slow, but a lot faster):
```
--- PASS: Test/DockerSuite/TestAPIImagesSaveAndLoad (146.25s)
```
```
--- FAIL: Test/DockerSuite/TestAPIImagesSaveAndLoad (491.77s)
    cli.go:45: assertion failed: Command: d:\CI-5\CI-93d2cf881\binary\docker.exe inspect --format {{.Id}} sha256:69e7c1ff23be5648c494294a3808c0ea3f78616fad67bfe3b10d3a7e2be5ff02
    ExitCode: 1
    Error: exit status 1
    Stdout:
    Stderr: Error: No such object: sha256:69e7c1ff23be5648c494294a3808c0ea3f78616fad67bfe3b10d3a7e2be5ff02
    Failures: ExitCode was 1 expected 0 Expected no error
```
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>

view details

Brian Goff

commit sha fcd65ebf49a858c4f6223d1b1db728f7400a3b6d

Fix more signal handling issues in tests. Found these by doing a `grep -R 'using the force'` on a full test run. There's still a few more which are running against the main test daemon, so it is difficult to find which test they belong to. Signed-off-by: Brian Goff <cpuguy83@gmail.com>

view details

Sebastiaan van Stijn

commit sha 961119db21b95504f819a31dfadd7115757fffb3

Dockerfile: set GO111MODULE=off Signed-off-by: Sebastiaan van Stijn <github@gone.nl>

view details

Sebastiaan van Stijn

commit sha 38e4ae3bca76b9558eb44993c4208b41114c4597

Bump Golang version 1.13.0 Signed-off-by: Sebastiaan van Stijn <github@gone.nl>

view details

Sebastiaan van Stijn

commit sha dbde4786e48531f095f9c3ecaff0f57b838abefc

integration-cli: fix some bashisms in Dockerfiles `TestBuildBuildTimeArgEnv` and `TestBuildBuildTimeArgEmptyValVariants` were using non-standard comparisons. Signed-off-by: Sebastiaan van Stijn <github@gone.nl>

view details

Sebastiaan van Stijn

commit sha 32f1c651623421ee1ac480b200d34025a74436bb

TestBuildSquashParent: fix non-standard comparison Signed-off-by: Sebastiaan van Stijn <github@gone.nl>

view details

Justen Martin

commit sha 548623b758c79085c4c434333ba8e044b3ab6b05

Use unique names in integration/service/plugin_test.go Signed-off-by: Justen Martin <jmart@the-coder.com>

view details

push time in a month

pull request comment docker/libnetwork

Allowed libnetwork caller to set ephemeral port allocator range in ru…

@selansen @euanh @arkodg @chiragtayal

suwang48404

comment created time in a month

push event suwang48404/libnetwork

Su Wang

commit sha dda4930b699364d60c354c60f2d83f05c882fd56

Allowed libnetwork caller to set ephemeral port allocator range in runtime. Also reduce the allowed port range, as the total number of containers per host is typically less than 1K. This change helps in scenarios where there are other services on the same host that use ephemeral ports in iptables manipulation. Signed-off-by: Su Wang <su.wang@docker.com>

view details

push time in a month

PR opened docker/libnetwork

Allowed libnetwork caller to set ephemeral port allocator range in ru…

…ntime.

Also reduce the allowed port range as the total number of containers per host is typically less than 1K.

This change helps in scenarios where there are other services on the same host that use ephemeral ports in iptables manipulation.

+135 -14

0 comment

6 changed files

pr created time in a month
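The idea behind this PR can be sketched with a minimal port allocator whose range is settable at runtime. The type and method names below are illustrative (the real allocator lives in libnetwork's portallocator package), but they show the validation a caller-supplied ephemeral range needs.

```go
package main

import (
	"errors"
	"fmt"
)

// PortAllocator is an illustrative stand-in for libnetwork's ephemeral
// port allocator. The point of the change is that Begin/End become
// runtime-configurable instead of compile-time constants.
type PortAllocator struct {
	Begin, End int
	next       int
}

// SetPortRange validates and applies a caller-supplied ephemeral range.
func (p *PortAllocator) SetPortRange(begin, end int) error {
	if begin < 1 || end > 65535 || begin >= end {
		return errors.New("invalid ephemeral port range")
	}
	p.Begin, p.End, p.next = begin, end, begin
	return nil
}

// Allocate hands out the next free port in the configured range.
func (p *PortAllocator) Allocate() (int, error) {
	if p.next > p.End {
		return 0, errors.New("all ephemeral ports exhausted")
	}
	port := p.next
	p.next++
	return port, nil
}

func main() {
	var pa PortAllocator
	// A narrower window: with typically fewer than 1K containers per
	// host, this leaves more ports free for other services on the host.
	if err := pa.SetPortRange(49153, 50153); err != nil {
		panic(err)
	}
	port, _ := pa.Allocate()
	fmt.Println(port)
}
```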

push event suwang48404/libnetwork

Grant Millar

commit sha bdeccb571fd7ace82ba303ef01186f4b48a16622

Shorten controller ID in exec-root to not hit UNIX_PATH_MAX Signed-off-by: Grant Millar <rid@cylo.io>

view details

Kamil Domański

commit sha bff77cfa9e8bbd50ed00f5e2dfc91ed51450423a

log the actual error when failing to add IPv6 route Signed-off-by: Kamil Domański <kamil@domanski.co>

view details

Espen Suenson

commit sha ee880504a6f9bfe7404e25204f4b03f3f5b2e57e

Fixed getNetworkFromStore, which returned incorrect network information - notably, the 'resolver' field was empty. This fixes https://github.com/moby/moby/issues/38901 Signed-off-by: Espen Suenson <mail@espensuenson.dk>

view details

Espen Suenson

commit sha a35a7705f45ead0487000b0daad0107657e8a4ef

return immediately on error Signed-off-by: Espen Suenson <mail@espensuenson.dk>

view details

jdrahos

commit sha c3c0fabf8fc32738b75134c1556b47dd0242e839

weighted scheduling methods constants for ipvs Signed-off-by: Jakub Drahos <jack.drahos@gmail.com>

view details

Jakub Drahos

commit sha 59876450caa24ab97b8d6af2618a8f3d5e292976

adding the constants to the test file Signed-off-by: Jakub Drahos <jack.drahos@gmail.com>

view details

Grant Millar

commit sha 4cee383c13630a860984e9ec65a8e9013903f81f

Rerun CI Signed-off-by: Grant Millar <rid@cylo.io>

view details

Jakub Drahos

commit sha c8a5fca4a6529415d24c9066fd833b87012bcd79

trigger new CI run Signed-off-by: Jakub Drahos <jack.drahos@gmail.com>

view details

Euan Harris

commit sha 953ec5ed8f3e1cd7b79911263f2b3cf61d7a4aff

Merge pull request #2456 from suwang48404/master Resolve "bridge fdb show" hang issue

view details

elangovan sivanandam

commit sha 27a47ab187cd69014a90633f0f96c686cc4683f5

Merge pull request #2453 from jdrahos/ipvs_weighted_scheduling_constants-2452 weighted scheduling methods constants for ipvs

view details

elangovan sivanandam

commit sha f5e0618b985702a6d517f728865c5ec660a03418

Merge pull request #2449 from espensuenson/bugfix_getnetworkfromstore Fixed getNetworkFromStore, which returned an incorrect struct

view details

elangovan sivanandam

commit sha 10c7fb66116c775aef4891975405bc75b17f880d

Merge pull request #2444 from kdomanski/verbose-ipv6-cannot-add log the actual error when failing to add IPv6 route

view details

elangovan sivanandam

commit sha 0025177e3dabbe0de151be0957dcaff149d43536

Merge pull request #2443 from Rid/shorten-setkey-id Shorten controller ID in exec-root to not hit UNIX_PATH_MAX

view details

Arko Dasgupta

commit sha 7e575b842f5f29a5c847c3d04aadfe35621c1ab5

Fix Error Check in NewNetwork Use types.MaskableError instead of doing a string comparison Signed-off-by: Arko Dasgupta <arko.dasgupta@docker.com>

view details

elangovan sivanandam

commit sha 922cd533eac14b6e0754756c5cacf9f44af5d699

Merge pull request #2459 from arkodg/fix-error-check Fix Error Check in NewNetwork

view details

Arko Dasgupta

commit sha 516d973d1103c588c64b7a9d3e0aa59c33ef7386

Fix flaky NetworkDB tests Fixed these tests:
1. TestNetworkDBIslands Addresses: https://github.com/docker/libnetwork/issues/2402
2. TestNetworkDBCRUDMediumCluster Addresses: https://github.com/docker/libnetwork/issues/2401
By:
1. Importing gotest.tools/poll to use poll.WaitOn. The above function can be used to check a condition at regular intervals until a timeout is reached
2. Replacing Sleep with poll.WaitOn
3. Adding closeNetworkDBInstances to close remaining DBs
Signed-off-by: Arko Dasgupta <arko.dasgupta@docker.com>

view details

elangovan sivanandam

commit sha 3e10ae9ba101e60145e264d4e3fcc8046bc1ac6c

Merge pull request #2458 from arkodg/fix-flaky-tests Fix flaky NetworkDB tests

view details

Su Wang

commit sha 3f3aff9295684c37abc8ab41a45a3ac30449f0b3

Allowed libnetwork caller to set ephemeral port allocator range in runtime. Also reduce the allowed port range as the total number of containers per host is typically less than 1K. This change helps in scenarios where there are other services on the same host that uses ephemeral ports in iptables manipulation.

view details

push time in a month

Pull request review comment docker/libnetwork

Fix flaky NetworkDB tests

 func TestNetworkDBIslands(t *testing.T) {
 	for i := 0; i < 3; i++ {
 		logrus.Infof("node %d leaving", i)
 		dbs[i].Close()
-		time.Sleep(2 * time.Second)

perhaps in addition to this change (to me, it is good to remove the unneeded sleep), we should do a polling loop around lines 836-839?

arkodg

comment created time in a month

Pull request review comment docker/libnetwork

Fix flaky NetworkDB tests

 func createNetworkDBInstances(t *testing.T, num int, namePrefix string, conf *Co
 		dbs = append(dbs, db)
 	}
+	// Wait for all peers to be established and cluster to be created
+	time.Sleep(5 * time.Second)

some kind of wait makes sense, as node discovery via gossip can take time. perhaps we can do a polling loop with a longer expiration to ensure a) fast test time in most cases b) a dependable test when the test machine is slow?

arkodg

comment created time in a month

fork suwang48404/swarmkit

A toolkit for orchestrating distributed systems at any scale. It includes primitives for node discovery, raft-based consensus, task scheduling and more.

fork in a month

fork suwang48404/cli

The Docker CLI

fork in a month

push event suwang48404/libnetwork

Su Wang

commit sha 74da5bf0a5efe85a46f3937ece1a3c0d14cbf60a

Resolve "bridge fdb show" hang issue The output of the "bridge fdb show" command invoked under a network namespace is unpredictable. Sometimes it returns empty output, and sometimes non-stop rolling output. This is perhaps a bug in the kernel and/or iproute2 implementation. To work around it, display the fdb for each bridge. Signed-off-by: Su Wang <su.wang@docker.com>

view details

push time in 2 months
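The workaround described in this commit can be sketched as follows. Instead of one global `bridge fdb show`, a command is built per bridge using iproute2's `br` filter; the bridge names below are illustrative, and the sketch only constructs the command strings rather than executing them.

```go
package main

import (
	"fmt"
	"strings"
)

// perBridgeFdbCommands builds one "bridge fdb show br <name>" command per
// bridge, so each bridge's forwarding database is queried explicitly
// rather than relying on the global listing that was observed to hang.
func perBridgeFdbCommands(bridges []string) []string {
	cmds := make([]string, 0, len(bridges))
	for _, br := range bridges {
		cmds = append(cmds, fmt.Sprintf("bridge fdb show br %s", br))
	}
	return cmds
}

func main() {
	// Hypothetical bridge names for illustration.
	cmds := perBridgeFdbCommands([]string{"docker0", "br-overlay"})
	fmt.Println(strings.Join(cmds, "\n"))
}
```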

pull request comment docker/libnetwork

Resolve "bridge fdb show" hang issue

@arkodg @euanh @selansen @chiragtayal

suwang48404

comment created time in 2 months

PR opened docker/libnetwork

Resolve "bridge fdb show" hang issue

The output of the "bridge fdb show" command invoked under a network namespace is unpredictable. Sometimes it returns empty output, and sometimes non-stop rolling output. This is perhaps a bug in the kernel and/or iproute2 implementation. To work around it, display the fdb for each bridge.

+13 -1

0 comment

2 changed files

pr created time in 2 months

push event suwang48404/libnetwork

Pradip Dhara

commit sha 64f88ad5104b3fe29b7b14f3c3abbb06faa806f4

Updating IPAM config with results from HNS create network call. On Windows, HNS manages IPAM. If the user does not specify a subnet, HNS will choose one for them. However, in order for the IPAM to show up in the output of "docker inspect", we need to update the network's IPAMv4Config field. Signed-off-by: Pradip Dhara <pradipd@microsoft.com>

view details

Leonardo Nodari

commit sha d070217c5cb154b803189a51cdf6a4edeeccbdd3

Configure iptables forward policy when ip forwarding is enabled Signed-off-by: Leonardo Nodari <me@leonardonodari.it>

view details

elangovan sivanandam

commit sha 45c710223c5fbf04dc3028b9a90b51892e36ca7f

Merge pull request #2429 from pradipd/windows-nosubnet Updating IPAM config with results from HNS create network call.

view details

Euan Harris

commit sha 96bcc0dae898308ed659c5095526788a602f4726

Merge pull request #2450 from TheNodi/iptables-policy Always configure iptables forward policy

view details

Su Wang

commit sha a2537b0fc9c7c9dd55cfca278163b2a3bc2689df

Resolve "bridge fdb show" hang issue The output of the "bridge fdb show" command invoked under a network namespace is unpredictable. Sometimes it returns empty output, and sometimes non-stop rolling output. This is perhaps a bug in the kernel and/or iproute2 implementation. To work around it, display the fdb for each bridge.

view details

push time in 2 months

fork suwang48404/moby

Moby Project - a collaborative project for the container ecosystem to assemble container-based systems

https://mobyproject.org/

fork in 2 months

fork suwang48404/libnetwork

networking for containers

fork in 2 months

create branch suwang48404/hello-world

branch : master

created branch time in 2 months

created repository suwang48404/hello-world

created time in 2 months
