arkodg/buildkit 0

concurrent, cache-efficient, and Dockerfile-agnostic builder toolkit

arkodg/cli 0

The Docker CLI

arkodg/containerd 0

An open and reliable container runtime

arkodg/docker-ce 0

Docker CE

arkodg/docker-ce-packaging 0

Packaging scripts for Docker CE

arkodg/docker.github.io 0

Source repo for Docker's Documentation

arkodg/istio 0

Connect, secure, control, and observe services.

arkodg/jsonschema2md 0

Convert Complex JSON Schemas into Markdown Documentation

arkodg/libnetwork 0

Docker Networking

arkodg/moby 0

Moby Project - a collaborative project for the container ecosystem to assemble container-based systems

Pull request review comment moby/libnetwork

Enable IPv6 NAT (rebase of #2023)

 func (n *bridgeNetwork) allocatePorts(ep *bridgeEndpoint, reqDefBindIP net.IP, u
 		defHostIP = reqDefBindIP
 	}

-	return n.allocatePortsInternal(ep.extConnConfig.PortBindings, ep.addr.IP, defHostIP, ulPxyEnabled)
+	// IPv4 port binding including user land proxy
+	pb, err := n.allocatePortsInternal(ep.extConnConfig.PortBindings, ep.addr.IP, defHostIP, ulPxyEnabled)
+	if err != nil {
+		return nil, err
+	}
+
+	// IPv6 port binding excluding user land proxy
+	if n.driver.config.EnableIP6Tables && ep.addrv6 != nil {
+		pbv6, err := n.allocatePortsInternal(ep.extConnConfig.PortBindings, ep.addrv6.IP, defaultBindingIPV6, false)

another TODO is to have a defaultBindingIPv6 similar to what IPv4 has
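
For illustration, a minimal sketch of such a default, mirroring the IPv4 default of binding on 0.0.0.0; the package, name, and value here are assumptions, not the PR's actual code:

package bridge // sketch only

import "net"

// defaultBindingIPv6 would be the IPv6 analogue of the 0.0.0.0 IPv4 default:
// bind on all addresses when no host IP is specified for a port mapping.
var defaultBindingIPv6 = net.ParseIP("::")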

bboehmke

comment created time in 13 days

Pull request review comment moby/libnetwork

Enable IPv6 NAT (rebase of #2023)

 func arrangeUserFilterRule() {
 	if ctrl == nil || !ctrl.iptablesEnabled() {
 		return
 	}
-	_, err := iptables.NewChain(userChain, iptables.Filter, false)
+	iptable := iptables.GetIptable(iptables.IPv4)

sure you can add a TODO above those

bboehmke

comment created time in 13 days

pull request comment moby/libnetwork

Enable IPv6 NAT (rebase of #2023)

thanks for the update; I've added comments to simplify the port forwarding to handle either an IPv4 or an IPv6 stack, which might require more plumbing

bboehmke

comment created time in 14 days

Pull request review comment moby/libnetwork

Enable IPv6 NAT (rebase of #2023)

 func (n *bridgeNetwork) isolateNetwork(others []*bridgeNetwork, enable bool) err
 	}

 	// Install the rules to isolate this network against each of the other networks
-	return setINC(thisConfig.BridgeName, enable)
+	if n.driver.config.EnableIP6Tables {
+		err := setINC(iptables.IPv6, thisConfig.BridgeName, enable)
+		if err != nil {
+			return err
+		}
+	}
+

can we add an `if n.driver.config.EnableIPTables` check here? (see the sketch below)
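
A rough sketch of that guard, parallel to the EnableIP6Tables block quoted above; it assumes the existing EnableIPTables config field and is not the PR's final code:

	// Sketch only: program the IPv4 isolation rules only when iptables
	// management is enabled, mirroring the EnableIP6Tables guard above.
	if n.driver.config.EnableIPTables {
		if err := setINC(iptables.IPv4, thisConfig.BridgeName, enable); err != nil {
			return err
		}
	}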

bboehmke

comment created time in 14 days

Pull request review comment moby/libnetwork

Enable IPv6 NAT (rebase of #2023)

 func (n *bridgeNetwork) allocatePorts(ep *bridgeEndpoint, reqDefBindIP net.IP, u
 		defHostIP = reqDefBindIP
 	}

-	return n.allocatePortsInternal(ep.extConnConfig.PortBindings, ep.addr.IP, defHostIP, ulPxyEnabled)
+	var pb []types.PortBinding
+
+	if ep.addrv6 != nil {
+		pb, _ = n.allocatePortsInternal(ep.extConnConfig.PortBindings, ep.addrv6.IP, defaultBindingIPV6, ulPxyEnabled, nil)
+	}
+
+	return n.allocatePortsInternal(ep.extConnConfig.PortBindings, ep.addr.IP, defHostIP, ulPxyEnabled, pb)

above comment will ensure we don't change the signature of this API for now

bboehmke

comment created time in 14 days

Pull request review comment moby/libnetwork

Enable IPv6 NAT (rebase of #2023)

 func (n *bridgeNetwork) allocatePorts(ep *bridgeEndpoint, reqDefBindIP net.IP, u
 		defHostIP = reqDefBindIP
 	}

-	return n.allocatePortsInternal(ep.extConnConfig.PortBindings, ep.addr.IP, defHostIP, ulPxyEnabled)
+	var pb []types.PortBinding
+
+	if ep.addrv6 != nil {

can we take care of this case later, and only pick one binding for now: if ep.addrv6 != nil, bind to the IPv6 addr, else bind to the IPv4 addr (see the sketch below)
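
A self-contained sketch of that selection logic; the function and variable names are illustrative only, not the PR's code:

package main

import (
	"fmt"
	"net"
)

// pickBindAddr prefers the endpoint's IPv6 address when it has one,
// otherwise falls back to the IPv4 address - "only pick one binding".
func pickBindAddr(v4, v6 net.IP) net.IP {
	if v6 != nil {
		return v6 // bind to the ipv6 addr
	}
	return v4 // bind to the ipv4 addr
}

func main() {
	fmt.Println(pickBindAddr(net.ParseIP("172.17.0.2"), nil))
	fmt.Println(pickBindAddr(net.ParseIP("172.17.0.2"), net.ParseIP("fd00::2")))
}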

bboehmke

comment created time in 14 days

Pull request review comment moby/libnetwork

Enable IPv6 NAT (rebase of #2023)

 func TestSetupIPForwarding(t *testing.T) {
 	}

 	// Set IP Forwarding
-	if err := setupIPForwarding(true); err != nil {
+	if err := setupIPForwarding(true, false); err != nil {

can we enable this in the test case? I think the docker0 bridge is usually a dual IPv4+IPv6 bridge, so this will be a good test for it (see the sketch below)
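
A minimal sketch of what enabling it in the test could look like, assuming the new second parameter of setupIPForwarding is the IPv6 switch (an assumption based on the diff above, not the PR's final code):

	// Set IP forwarding for both IPv4 and IPv6; docker0 is typically dual-stack.
	if err := setupIPForwarding(true, true); err != nil {
		t.Fatal(err)
	}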

bboehmke

comment created time in 14 days

Pull request review comment moby/libnetwork

Enable IPv6 NAT (rebase of #2023)

 func arrangeUserFilterRule() {
 	if ctrl == nil || !ctrl.iptablesEnabled() {
 		return
 	}
-	_, err := iptables.NewChain(userChain, iptables.Filter, false)
+	iptable := iptables.GetIptable(iptables.IPv4)

looks like we will need to add another DOCKER_USER jump rule for IPv6; you can add that as a TODO for now

bboehmke

comment created time in 14 days

Pull request review comment moby/libnetwork

Enable IPv6 NAT (rebase of #2023)

 var (
 	initOnce            sync.Once
 )

+// IPTable defines struct with IPVersion and few other members are added later

can we remove the "and few other members are added later" part for now?

bboehmke

comment created time in 14 days

Pull request review comment moby/libnetwork

Enable IPv6 NAT (rebase of #2023)

 func (pm *PortMapper) MapRange(container net.Addr, hostIP net.IP, hostPortStart,
 		return nil
 	}

-	if err := m.userlandProxy.Start(); err != nil {
-		if err := cleanup(); err != nil {
-			return nil, fmt.Errorf("Error during port allocation cleanup: %v", err)
+	if hostIP.To4() != nil {

are you testing this with the userland proxy enabled? does this work with it disabled? we can add a TODO for the userland proxy rather than adding the hostIP.To4() checks in the code

bboehmke

comment created time in 14 days

pull request comment moby/libnetwork

Enable IPv6 NAT (rebase of #2023)

thanks for carrying forward this PR @bboehmke. I've taken a quick pass and the arch mostly looks fine to me. Hoping you can address the comments

we also need to make sure that EnableIPMasquerade for bridge config is false by default for ipv6 networks to maintain backward compatibility

bboehmke

comment created time in 15 days

Pull request review comment moby/libnetwork

Enable IPv6 NAT (rebase of #2023)

 func (n *bridgeNetwork) setupIPTables(config *networkConfiguration, i *bridgeInt 		n.registerIptCleanFunc(func() error { 			return setupIPTablesInternal(config.HostIP, config.BridgeName, maskedAddrv4, config.EnableICC, config.EnableIPMasquerade, hairpinMode, false) 		})-		natChain, filterChain, _, _, err := n.getDriverChains()+		natChain, filterChain, _, _, err := n.getDriverChains(iptables.IPv4) 		if err != nil { 			return fmt.Errorf("Failed to setup IP tables, cannot acquire chain info %s", err.Error()) 		} -		err = iptables.ProgramChain(natChain, config.BridgeName, hairpinMode, true)+		err = iptable.ProgramChain(natChain, config.BridgeName, hairpinMode, true) 		if err != nil { 			return fmt.Errorf("Failed to program NAT chain: %s", err.Error()) 		} -		err = iptables.ProgramChain(filterChain, config.BridgeName, hairpinMode, true)+		err = iptable.ProgramChain(filterChain, config.BridgeName, hairpinMode, true) 		if err != nil { 			return fmt.Errorf("Failed to program FILTER chain: %s", err.Error()) 		}  		n.registerIptCleanFunc(func() error {-			return iptables.ProgramChain(filterChain, config.BridgeName, hairpinMode, false)+			return iptable.ProgramChain(filterChain, config.BridgeName, hairpinMode, false) 		})  		n.portMapper.SetIptablesChain(natChain, n.getNetworkBridgeName()) 	}  	d.Lock()-	err = iptables.EnsureJumpRule("FORWARD", IsolationChain1)+	err = iptable.EnsureJumpRule("FORWARD", IsolationChain1)

can we split up setupIPTables into setupIP4Tables and setupIP6Tables based on the IP version value? (see the sketch below)
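
A self-contained sketch of that split; the type and function names are illustrative only and not the PR's code:

package main

import "fmt"

type ipVersion int

const (
	ipv4 ipVersion = iota
	ipv6
)

// setupIPTables dispatches on the IP version so the IPv4 and IPv6 paths
// stay in separate functions, as suggested in the review.
func setupIPTables(v ipVersion) error {
	if v == ipv6 {
		return setupIP6Tables()
	}
	return setupIP4Tables()
}

func setupIP4Tables() error { return nil } // would program the IPv4 NAT/FILTER chains
func setupIP6Tables() error { return nil } // would program the IPv6 NAT/FILTER chains

func main() {
	fmt.Println(setupIPTables(ipv4), setupIPTables(ipv6))
}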

bboehmke

comment created time in 15 days

Pull request review comment moby/libnetwork

Enable IPv6 NAT (rebase of #2023)

 func (n *bridgeNetwork) allocatePorts(ep *bridgeEndpoint, reqDefBindIP net.IP, u
 		defHostIP = reqDefBindIP
 	}

-	return n.allocatePortsInternal(ep.extConnConfig.PortBindings, ep.addr.IP, defHostIP, ulPxyEnabled)
+	var pb []types.PortBinding
+
+	if ep.addrv6 != nil {

if a network will be either an IPv4 or an IPv6 network, why do we need to allocate both kinds of ports here?

bboehmke

comment created time in 15 days

Pull request review comment moby/libnetwork

Enable IPv6 NAT (rebase of #2023)

 func (d *driver) createNetwork(config *networkConfiguration) (err error) {

 	// Add inter-network communication rules.
 	setupNetworkIsolationRules := func(config *networkConfiguration, i *bridgeInterface) error {
-		if err := network.isolateNetwork(networkList, true); err != nil {
-			if err = network.isolateNetwork(networkList, false); err != nil {
+		if err := network.isolateNetwork(iptables.IPv4, networkList, true); err != nil {
+			if err = network.isolateNetwork(iptables.IPv4, networkList, false); err != nil {

same as above

bboehmke

comment created time in 15 days

Pull request review comment moby/libnetwork

Enable IPv6 NAT (rebase of #2023)

 func (d *driver) createNetwork(config *networkConfiguration) (err error) {

 	// Add inter-network communication rules.
 	setupNetworkIsolationRules := func(config *networkConfiguration, i *bridgeInterface) error {
-		if err := network.isolateNetwork(networkList, true); err != nil {
-			if err = network.isolateNetwork(networkList, false); err != nil {
+		if err := network.isolateNetwork(iptables.IPv4, networkList, true); err != nil {
+			if err = network.isolateNetwork(iptables.IPv4, networkList, false); err != nil {
 				logrus.Warnf("Failed on removing the inter-network iptables rules on cleanup: %v", err)
 			}
 			return err
 		}
 		// register the cleanup function
 		network.registerIptCleanFunc(func() error {
 			nwList := d.getNetworks()
-			return network.isolateNetwork(nwList, false)
+			return network.isolateNetwork(iptables.IPv4, nwList, false)

same as above

bboehmke

comment created time in 15 days

Pull request review comment moby/libnetwork

Enable IPv6 NAT (rebase of #2023)

 func (d *driver) createNetwork(config *networkConfiguration) (err error) {

 	// Add inter-network communication rules.
 	setupNetworkIsolationRules := func(config *networkConfiguration, i *bridgeInterface) error {
-		if err := network.isolateNetwork(networkList, true); err != nil {
-			if err = network.isolateNetwork(networkList, false); err != nil {
+		if err := network.isolateNetwork(iptables.IPv4, networkList, true); err != nil {

I think we should be passing the iptables version in this API; this call will eventually call setINC, which should hopefully do the right thing

bboehmke

comment created time in 15 days

Pull request review comment moby/libnetwork

Enable IPv6 NAT (rebase of #2023)

 func (n *bridgeNetwork) releasePort(bnd types.PortBinding) error {
 	if err != nil {
 		return err
 	}
-	return n.portMapper.Unmap(host)
+
+	portmapper := n.portMapper
+
+	if strings.ContainsAny(host.String(), "]") == true {

same comment as above

bboehmke

comment created time in 15 days

Pull request review comment moby/libnetwork

Enable IPv6 NAT (rebase of #2023)

 func (n *bridgeNetwork) allocatePort(bnd *types.PortBinding, containerIP, defHos
 		return err
 	}

+	portmapper := n.portMapper
+
+	if containerIP.To4() == nil {

hoping we can be consistent in the way we distinguish IPv6 and IPv4 throughout the codebase

bboehmke

comment created time in 16 days

Pull request review comment moby/libnetwork

Enable IPv6 NAT (rebase of #2023)

 func setupIPForwarding(enableIPTables bool) error {
 		if !enableIPTables {
 			return nil
 		}
-		if err := iptables.SetDefaultPolicy(iptables.Filter, "FORWARD", iptables.Drop); err != nil {
+		iptable := iptables.GetIptable(iptables.IPv4)
+		if err := iptable.SetDefaultPolicy(iptables.Filter, "FORWARD", iptables.Drop); err != nil {
 			if err := configureIPForwarding(false); err != nil {
 				logrus.Errorf("Disabling IP forwarding failed, %v", err)
 			}
 			return err
 		}
 		iptables.OnReloaded(func() {
 			logrus.Debug("Setting the default DROP policy on firewall reload")
-			if err := iptables.SetDefaultPolicy(iptables.Filter, "FORWARD", iptables.Drop); err != nil {
+			if err := iptable.SetDefaultPolicy(iptables.Filter, "FORWARD", iptables.Drop); err != nil {
 				logrus.Warnf("Setting the default DROP policy on firewall reload failed, %v", err)
 			}
 		})
 	}
+
+	// add only iptables rules - forwarding is handled by setupIPv6Forwarding in setup_ipv6
+	if enableIP6Tables {

this needs an iptables.OnReloaded call as well (for firewalld); see the sketch below
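
A hedged sketch of what that could look like, mirroring the IPv4 block quoted above; this is an illustration, not the PR's final code:

	// Sketch only: re-apply the IPv6 default DROP policy whenever firewalld
	// reloads, the same way the IPv4 path registers an OnReloaded callback.
	if enableIP6Tables {
		iptablev6 := iptables.GetIptable(iptables.IPv6)
		iptables.OnReloaded(func() {
			logrus.Debug("Setting the default DROP policy on firewall reload (IPv6)")
			if err := iptablev6.SetDefaultPolicy(iptables.Filter, "FORWARD", iptables.Drop); err != nil {
				logrus.Warnf("Setting the IPv6 default DROP policy on firewall reload failed, %v", err)
			}
		})
	}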

bboehmke

comment created time in 16 days

Pull request review comment moby/libnetwork

Enable IPv6 NAT (rebase of #2023)

 func setupIPTablesInternal(hostIP net.IP, bridgeIface string, addr net.Addr, icc
 	natRule := iptRule{table: iptables.Nat, chain: "POSTROUTING", preArgs: []string{"-t", "nat"}, args: natArgs}
 	hpNatRule := iptRule{table: iptables.Nat, chain: "POSTROUTING", preArgs: []string{"-t", "nat"}, args: hpNatArgs}

+	ipVersion := iptables.IPv4
+
+	if strings.Contains(address, ":") {
+		ipVersion = iptables.IPv6

this should work, but should we consider using something from the standard net package instead? (see the sketch below)
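
For example, a self-contained sketch using net.ParseIP rather than searching the string for ":" (illustrative only, not the PR's code):

package main

import (
	"fmt"
	"net"
)

// isIPv6 reports whether addr is an IPv6 address: it parses as an IP
// but has no 4-byte (IPv4) form.
func isIPv6(addr string) bool {
	ip := net.ParseIP(addr)
	return ip != nil && ip.To4() == nil
}

func main() {
	fmt.Println(isIPv6("172.17.0.1")) // false
	fmt.Println(isIPv6("fd00::1"))    // true
}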

bboehmke

comment created time in 16 days

Pull request review comment moby/libnetwork

Enable IPv6 NAT (rebase of #2023)

 func (pm *PortMapper) MapRange(container net.Addr, hostIP net.IP, hostPortStart,
 		return nil
 	}

-	if err := m.userlandProxy.Start(); err != nil {
-		if err := cleanup(); err != nil {
-			return nil, fmt.Errorf("Error during port allocation cleanup: %v", err)
+	if hostIP.To4() != nil {

why are we skipping this section for IPv6?

bboehmke

comment created time in 16 days

Pull request review comment moby/libnetwork

Enable IPv6 NAT (rebase of #2023)

 func (d *driver) createNetwork(config *networkConfiguration) (err error) {

 	// Add inter-network communication rules.
 	setupNetworkIsolationRules := func(config *networkConfiguration, i *bridgeInterface) error {
-		if err := network.isolateNetwork(networkList, true); err != nil {
-			if err = network.isolateNetwork(networkList, false); err != nil {
+		if err := network.isolateNetwork(iptables.IPv4, networkList, true); err != nil {

curious why we are setting this to IPv4?

bboehmke

comment created time in 16 days

Pull request review comment moby/libnetwork

Enable IPv6 NAT (rebase of #2023)

 func (d *driver) configure(option map[string]interface{}) error {
 				logrus.Warnf("Running modprobe bridge br_netfilter failed with message: %s, error: %v", out, err)
 			}
 		}
-		removeIPChains()
-		natChain, filterChain, isolationChain1, isolationChain2, err = setupIPChains(config)
+
+		removeIPChains(iptables.IPv4)

shouldn't we remove/set up either the IPv4 or the IPv6 chains in this function?

bboehmke

comment created time in 16 days

push event docker/docker.github.io

Arko Dasgupta

commit sha 93651ae2f2ca7e9fe5493b85419a288b02c2139b

Update config/daemon/ipv6.md Co-authored-by: Sebastiaan van Stijn <thaJeztah@users.noreply.github.com>

push time in 19 days

PR opened docker/docker.github.io

Update ipv6.md

Update the IPv6 daemon config docs to include the fixed-cidr-v6 (IPv6 subnet) key

+3 -1

0 comment

1 changed file

pr created time in 19 days

create branch docker/docker.github.io

branch : ipv6-add-subnet

created branch time in 19 days

issue comment docker/for-linux

Network performance and bandwidth degradation

the one intermediate layer that can play a factor here is conntrack (connection tracking module for NATing). You could try further debugging the state of conntrack

wkruse

comment created time in 25 days

issue comment docker/for-linux

Network performance and bandwidth degradation

can you try increasing the Bridge MTU

wkruse

comment created time in a month

PR opened docker/docker.github.io

Remove the duplicated words in the landing page

Signed-off-by: Arko Dasgupta arko.dasgupta@docker.com

+1 -1

0 comment

1 changed file

pr created time in a month

create branch arkodg/docker.github.io

branch : rm-dup-words

created branch time in a month

Pull request review comment docker/docker-ce

Bump version to 19.03.12

 For official release notes for Docker Engine CE and Docker Engine EE, visit the [release notes page](https://docs.docker.com/engine/release-notes/).

+## 19.03.12 (2020-06-18)
+
+### Client
+
+- Fix bug preventing logout from registry when using multiple config files (e.g. Windows vs WSL2) [docker/cli#2592](https://github.com/docker/cli/pull/2592)

should it be Docker Desktop with WSL2 instead ?

tiborvass

comment created time in 2 months

Pull request review comment moby/moby

Fix 'failed to get network during CreateEndpoint'

 func (nc *networkAttacherController) Terminate(ctx context.Context) error {
 }

 func (nc *networkAttacherController) Remove(ctx context.Context) error {
-	// Try removing the network referenced in this task in case this
-	// task is the last one referencing it
-	return nc.adapter.removeNetworks(ctx)
+	return nil

trying to understand why https://github.com/moby/moby/blob/a23ca165c919b62da681cf062429ffe36d249a1b/daemon/container_operations.go#L428 won't help now that you have fixed the error codes with https://github.com/moby/libnetwork/pull/2554

xinfengliu

comment created time in 2 months

Pull request review comment moby/moby

Fix 'failed to get network during CreateEndpoint'

 func TestDockerNetworkReConnect(t *testing.T) { 	assert.NilError(t, err) 	assert.Check(t, is.DeepEqual(n1, n2)) }++func TestDockerRestartWithAttachbleNetwork(t *testing.T) {+	skip.If(t, testEnv.DaemonInfo.OSType == "windows")+	defer setupTest(t)()+	d := swarm.NewSwarm(t, testEnv)+	defer d.Stop(t)+	client := d.NewClientT(t)+	defer client.Close()+	ctx := context.Background()++	name := t.Name() + "dummyNet"+	net.CreateNoError(ctx, t, client, name,+		net.WithDriver("overlay"),+		net.WithAttachable(),+	)++	c1 := container.Create(ctx, t, client, func(c *container.TestContainerConfig) {+		c.NetworkingConfig = &network.NetworkingConfig{+			EndpointsConfig: map[string]*network.EndpointSettings{+				name: {},+			},+		}+	})++	err := client.ContainerStart(ctx, c1, types.ContainerStartOptions{})+	assert.NilError(t, err)++	var timeout time.Duration = 20 * time.Second+	err = client.ContainerRestart(ctx, c1, &timeout)

ah how about 11 :trollface:

xinfengliu

comment created time in 2 months

Pull request review comment moby/moby

Fix 'failed to get network during CreateEndpoint'

 func (nc *networkAttacherController) Terminate(ctx context.Context) error {
 }

 func (nc *networkAttacherController) Remove(ctx context.Context) error {
-	// Try removing the network referenced in this task in case this
-	// task is the last one referencing it
-	return nc.adapter.removeNetworks(ctx)
+	return nil

instead of doing this, what are your thoughts on applying retry logic below in Start?

xinfengliu

comment created time in 2 months

push event moby/libnetwork

Xinfeng Liu

commit sha 3a453538831b90f4b00e0bfc966ed9234c312b88

Fix 'failed to get network during CreateEndpoint' Fix 'failed to get network during CreateEndpoint' during container starting. Change the error type to `libnetwork.ErrNoSuchNetwork`, so `Start()` in `daemon/cluster/executor/container/controller.go` will recreate the network. Signed-off-by: Xinfeng Liu <xinfeng.liu@gmail.com> (cherry picked from commit 1df7f7e6d1e809362b16aba8893675ef81b1b9ab) Signed-off-by: Shane Jarych <sjarych@mirantis.com> (cherry picked from commit bcb6dd6d252167a714e316270eaa7dc68ae739c8) Signed-off-by: Sam Whited <sam@samwhited.com>

Arko Dasgupta

commit sha 7754c506c0b7cb0bb309d478297ea4abb79e09e3

Merge pull request #2568 from SamWhited/bump_18.09_createendpoint_fix [18.09 backport] Fix 'failed to get network during CreateEndpoint

push time in 2 months

PR merged moby/libnetwork

[18.09 backport] Fix 'failed to get network during CreateEndpoint

This is a backport of https://github.com/moby/libnetwork/pull/2554 that I'm hoping we can merge into the bump_18.09 branch. Thanks!

+3 -3

0 comment

2 changed files

SamWhited

pr closed time in 2 months

Pull request review comment moby/moby

Fix 'failed to get network during CreateEndpoint'

 func TestDockerNetworkReConnect(t *testing.T) { 	assert.NilError(t, err) 	assert.Check(t, is.DeepEqual(n1, n2)) }++func TestDockerRestartWithAttachbleNetwork(t *testing.T) {+	skip.If(t, testEnv.DaemonInfo.OSType == "windows")+	defer setupTest(t)()+	d := swarm.NewSwarm(t, testEnv)+	defer d.Stop(t)+	client := d.NewClientT(t)+	defer client.Close()+	ctx := context.Background()++	name := t.Name() + "dummyNet"+	net.CreateNoError(ctx, t, client, name,+		net.WithDriver("overlay"),+		net.WithAttachable(),+	)++	c1 := container.Create(ctx, t, client, func(c *container.TestContainerConfig) {+		c.NetworkingConfig = &network.NetworkingConfig{+			EndpointsConfig: map[string]*network.EndpointSettings{+				name: {},+			},+		}+	})++	err := client.ContainerStart(ctx, c1, types.ContainerStartOptions{})+	assert.NilError(t, err)++	var timeout time.Duration = 20 * time.Second+	err = client.ContainerRestart(ctx, c1, &timeout)

does this work with a smaller restart value? The default in compose and the CLI is 10s

xinfengliu

comment created time in 2 months

issue opened docker/compose

docker-compose push sometimes does not work

Description of the issue

Running docker-compose push does nothing (sometimes) after building my images using docker-compose build. Workaround: I need to run docker-compose -f docker-compose.yml push to push the image. Seeing this on Docker Desktop

Context information (for bug reports)

Output of docker-compose version

1.26.0-rc4

Output of docker version

19.03.8

Observed result

In the below output, you can see that docker-compose push did not work in the first attempt and succeeded in the second attempt

docker-compose --verbose push
compose.config.config.find: Using configuration files: ./docker-compose.yml,./docker-compose.override.yml
docker.utils.config.find_config_file: Trying paths: ['/Users/<user>/.docker/config.json', '/Users/<user>/.dockercfg']
docker.utils.config.find_config_file: Found file at path: /Users/<user>/.docker/config.json
docker.utils.config.find_config_file: Trying paths: ['/Users/<user>/.docker/config.json', '/Users/<user>/.dockercfg']
docker.utils.config.find_config_file: Found file at path: /Users/<user>/.docker/config.json
docker.auth.load_config: Found 'auths' section
docker.auth.parse_auth: Auth data for https://index.docker.io/v1/ is absent. Client might be using a credentials store instead.
docker.auth.parse_auth: Auth data for registry-1-stage.docker.io is absent. Client might be using a credentials store instead.
docker.auth.load_config: Found 'credsStore' section
urllib3.connectionpool._make_request: http://localhost:None "GET /v1.30/version HTTP/1.1" 200 872
compose.cli.docker_client.get_client: docker-compose version 1.26.0-rc4, build d279b7a8
docker-py version: 4.2.0
CPython version: 3.7.7
OpenSSL version: OpenSSL 1.1.1g  21 Apr 2020
compose.cli.docker_client.get_client: Docker base_url: http+docker://localhost
compose.cli.docker_client.get_client: Docker version: Platform={'Name': 'Docker Engine - Community'}, Components=[{'Name': 'Engine', 'Version': '19.03.8', 'Details': {'ApiVersion': '1.40', 'Arch': 'amd64', 'BuildTime': '2020-03-11T01:29:16.000000000+00:00', 'Experimental': 'true', 'GitCommit': 'afacb8b', 'GoVersion': 'go1.12.17', 'KernelVersion': '4.19.76-linuxkit', 'MinAPIVersion': '1.12', 'Os': 'linux'}}, {'Name': 'containerd', 'Version': 'v1.2.13', 'Details': {'GitCommit': '7ad184331fa3e55e52b890ea95e65ba581ae3429'}}, {'Name': 'runc', 'Version': '1.0.0-rc10', 'Details': {'GitCommit': 'dc9208a3303feef5b3839f4323d9beb36df0a9dd'}}, {'Name': 'docker-init', 'Version': '0.18.0', 'Details': {'GitCommit': 'fec3683'}}], Version=19.03.8, ApiVersion=1.40, MinAPIVersion=1.12, GitCommit=afacb8b, GoVersion=go1.12.17, Os=linux, Arch=amd64, KernelVersion=4.19.76-linuxkit, Experimental=True, BuildTime=2020-03-11T01:29:16.000000000+00:00
compose.cli.verbose_proxy.proxy_callable: docker inspect_network <- ('<stack-name>_default')
urllib3.connectionpool._make_request: http://localhost:None "GET /v1.30/networks/<stack-name>_default HTTP/1.1" 200 557
compose.cli.verbose_proxy.proxy_callable: docker inspect_network -> {'Attachable': True,
 'ConfigFrom': {'Network': ''},
 'ConfigOnly': False,
 'Containers': {},
 'Created': '2020-05-29T16:51:40.2312654Z',
 'Driver': 'bridge',
 'EnableIPv6': False,
 'IPAM': {'Config': [{'Gateway': '172.19.0.1', 'Subnet': '172.19.0.0/16'}],
          'Driver': 'default',
          'Options': None},
...
🐳 $ docker-compose --verbose push
compose.config.config.find: Using configuration files: ./docker-compose.yml,./docker-compose.override.yml
docker.utils.config.find_config_file: Trying paths: ['/Users/<user>/.docker/config.json', '/Users/<user>/.dockercfg']
docker.utils.config.find_config_file: Found file at path: /Users/<user>/.docker/config.json
docker.utils.config.find_config_file: Trying paths: ['/Users/<user>/.docker/config.json', '/Users/<user>/.dockercfg']
docker.utils.config.find_config_file: Found file at path: /Users/<user>/.docker/config.json
docker.auth.load_config: Found 'auths' section
docker.auth.parse_auth: Auth data for https://index.docker.io/v1/ is absent. Client might be using a credentials store instead.
docker.auth.parse_auth: Auth data for registry-1-stage.docker.io is absent. Client might be using a credentials store instead.
docker.auth.load_config: Found 'credsStore' section
urllib3.connectionpool._make_request: http://localhost:None "GET /v1.30/version HTTP/1.1" 200 872
compose.cli.docker_client.get_client: docker-compose version 1.26.0-rc4, build d279b7a8
docker-py version: 4.2.0
CPython version: 3.7.7
OpenSSL version: OpenSSL 1.1.1g  21 Apr 2020
compose.cli.docker_client.get_client: Docker base_url: http+docker://localhost
compose.cli.docker_client.get_client: Docker version: Platform={'Name': 'Docker Engine - Community'}, Components=[{'Name': 'Engine', 'Version': '19.03.8', 'Details': {'ApiVersion': '1.40', 'Arch': 'amd64', 'BuildTime': '2020-03-11T01:29:16.000000000+00:00', 'Experimental': 'true', 'GitCommit': 'afacb8b', 'GoVersion': 'go1.12.17', 'KernelVersion': '4.19.76-linuxkit', 'MinAPIVersion': '1.12', 'Os': 'linux'}}, {'Name': 'containerd', 'Version': 'v1.2.13', 'Details': {'GitCommit': '7ad184331fa3e55e52b890ea95e65ba581ae3429'}}, {'Name': 'runc', 'Version': '1.0.0-rc10', 'Details': {'GitCommit': 'dc9208a3303feef5b3839f4323d9beb36df0a9dd'}}, {'Name': 'docker-init', 'Version': '0.18.0', 'Details': {'GitCommit': 'fec3683'}}], Version=19.03.8, ApiVersion=1.40, MinAPIVersion=1.12, GitCommit=afacb8b, GoVersion=go1.12.17, Os=linux, Arch=amd64, KernelVersion=4.19.76-linuxkit, Experimental=True, BuildTime=2020-03-11T01:29:16.000000000+00:00
compose.cli.verbose_proxy.proxy_callable: docker inspect_network <- ('<stack-name>_default')
urllib3.connectionpool._make_request: http://localhost:None "GET /v1.30/networks/<stack-name>_default HTTP/1.1" 200 557
compose.cli.verbose_proxy.proxy_callable: docker inspect_network -> {'Attachable': True,
 'ConfigFrom': {'Network': ''},
 'ConfigOnly': False,
 'Containers': {},
 'Created': '2020-05-29T16:51:40.2312654Z',
 'Driver': 'bridge',
 'EnableIPv6': False,
 'IPAM': {'Config': [{'Gateway': '172.19.0.1', 'Subnet': '172.19.0.0/16'}],
          'Driver': 'default',
          'Options': None},
...
compose.service.push: Pushing agent (docker/<image-name>:latest)...
compose.cli.verbose_proxy.proxy_callable: docker push <- ('docker/<image-name>', tag='latest', stream=True)
docker.auth.get_config_header: Looking for auth config
docker.auth.resolve_authconfig: Using credentials store "desktop"
docker.auth._resolve_authconfig_credstore: Looking for auth entry for 'https://index.docker.io/v1/'
docker.auth.get_config_header: Found auth config
urllib3.connectionpool._make_request: http://localhost:None "POST /v1.30/images/docker/<image-name>/push?tag=latest HTTP/1.1" 200 None
compose.cli.verbose_proxy.proxy_callable: docker push -> <generator object APIClient._stream_helper at 0x10cc30dd0>
The push refers to repository [docker.io/docker/<image-name>]
ec008f010cdc: Layer already exists
4439818d7d2c: Layer already exists
22ea724cf209: Layer already exists
9e35372c652f: Layer already exists
1ede7fa3c827: Layer already exists
a7a3c4ee26ce: Layer already exists
b2b0e8762b55: Layer already exists
0ffca9d82054: Layer already exists
ad3ceb923f2a: Layer already exists
80d84172557e: Layer already exists
ef5011a1a00c: Layer already exists
7302fdf028f0: Layer already exists
b048bf9a2437: Layer already exists
e5263d8c6535: Layer already exists
962f2cfe1008: Layer already exists
b4cbef589dca: Layer already exists
20dd3bb3212d: Layer already exists
2d24322a652d: Layer already exists
853b7b300313: Layer already exists
22fdb1b038cf: Layer already exists
a9761686f819: Layer already exists
20215a41fca5: Layer already exists
a72a7e555fe1: Layer already exists
b8f8aeff56a8: Layer already exists
687890749166: Layer already exists
2f77733e9824: Layer already exists
97041f29baff: Layer already exists
latest: digest: sha256:b2e6af1c2e04bfb4982fcf8315279ba798025763866f282e8c5a7d92ff2e5fd7 size: 5986

created time in 2 months

push event moby/libnetwork

Sebastiaan van Stijn

commit sha 8dd33dc6e8b782821ddd54539afeec854071de31

log error instead if disabling IPv6 router advertisement failed Previously, failing to disable IPv6 router advertisement prevented the daemon to start. An issue was reported by a user that started docker using `systemd-nspawn "machine"`, which produced an error; failed to start daemon: Error initializing network controller: Error creating default "bridge" network: libnetwork: Unable to disable IPv6 router advertisement: open /proc/sys/net/ipv6/conf/docker0/accept_ra: read-only file system This patch changes the error to a log-message instead. Signed-off-by: Sebastiaan van Stijn <github@gone.nl>

Arko Dasgupta

commit sha 9e99af28df21367340c95a3863e31808d689c92a

Merge pull request #2563 from thaJeztah/no_error log error instead if disabling IPv6 router advertisement failed

push time in 2 months

PR merged moby/libnetwork

log error instead if disabling IPv6 router advertisement failed

addresses https://github.com/docker/for-linux/issues/1033

Previously, failing to disable IPv6 router advertisement prevented the daemon from starting.

An issue was reported by a user that started docker using systemd-nspawn "machine", which produced an error;

failed to start daemon: Error initializing network controller:
Error creating default "bridge" network: libnetwork:
Unable to disable IPv6 router advertisement:
open /proc/sys/net/ipv6/conf/docker0/accept_ra: read-only file system

This patch changes the error to a log-message instead.

+1 -1

4 comments

1 changed file

thaJeztah

pr closed time in 2 months

pull request comment docker/docker.github.io

Introduce Buildkit to AutoBuilds

ptal @thaJeztah @usha-mandya

arkodg

comment created time in 2 months

PR opened docker/docker.github.io

Introduce Buildkit to AutoBuilds

Add a section on explaining how to enable builds to use the Docker Buildkit builder by the Autobuild service

+5 -0

0 comment

1 changed file

pr created time in 2 months

push event arkodg/docker.github.io

Arko Dasgupta

commit sha ef883bb22a860e85f43dc71842bf7d3912495ca6

Introduce Buildkit to AutoBuilds Add a section on explaining how to enable builds to use the Docker Buildkit builder by the Autobuild service

push time in 2 months

pull request comment moby/libnetwork

log error instead if disabling IPv6 router advertisement failed

@thaJeztah now that we have incorporated the security patch, can we please utilize this library in setupDefaultSysctl - https://github.com/moby/libnetwork/blob/master/osl/kernel/knobs_linux.go - similar to https://github.com/moby/libnetwork/blob/13a4da01ec396d4bd59cc2108a7d183d7ac7fddd/osl/namespace_linux.go#L40 ? It is best effort and logs errors

thaJeztah

comment created time in 2 months

PR merged moby/libnetwork

Fix 'failed to get network during CreateEndpoint'

Fixes https://github.com/docker/for-linux/issues/888

Fix 'failed to get network during CreateEndpoint' during container starting. Change the error type to libnetwork.ErrNoSuchNetwork, so Start() in docker engine daemon/cluster/executor/container/controller.go will recreate the network.

Signed-off-by: Xinfeng Liu xinfeng.liu@gmail.com

+3 -2

7 comments

2 changed files

xinfengliu

pr closed time in 2 months

push event moby/libnetwork

Xinfeng Liu

commit sha 1df7f7e6d1e809362b16aba8893675ef81b1b9ab

Fix 'failed to get network during CreateEndpoint' Fix 'failed to get network during CreateEndpoint' during container starting. Change the error type to `libnetwork.ErrNoSuchNetwork`, so `Start()` in `daemon/cluster/executor/container/controller.go` will recreate the network. Signed-off-by: Xinfeng Liu <xinfeng.liu@gmail.com>

Arko Dasgupta

commit sha 868f23bb5f044aa6049b7820b9a57ac80863a4e1

Merge pull request #2554 from xinfengliu/fix-network-not-found Fix 'failed to get network during CreateEndpoint'

push time in 2 months

pull request comment moby/libnetwork

Fix 'failed to get network during CreateEndpoint'

I have reservations about https://github.com/moby/libnetwork/pull/2554#issuecomment-636767809 but we can tackle those later :)

xinfengliu

comment created time in 2 months

push event moby/libnetwork

Sebastiaan van Stijn

commit sha 2586342dc33d357207951abebe12c041114cd5ff

store.getNetworksFromStore() remove unused error return This function always returned `nil`, so we can remove the error return, and update other functions that were handling errors. Signed-off-by: Sebastiaan van Stijn <github@gone.nl>

Arko Dasgupta

commit sha ca3700b70d7e81ec634650a09414757a944742d9

Merge pull request #2556 from thaJeztah/remove_unused_error store.getNetworksFromStore() remove unused error return

push time in 2 months

PR merged moby/libnetwork

store.getNetworksFromStore() remove unused error return (labels: enhancement/code-cleanup, status/2-needs-code-review)

noticed this while reviewing https://github.com/moby/libnetwork/pull/2554, so thought I'd open as a pull request

This function always returned nil, so we can remove the error return, and update other functions that were handling errors.

+8 -26

1 comment

2 changed files

thaJeztah

pr closed time in 2 months

pull request comment moby/moby

Better selection of DNS server

thanks for the detailed code comments ! can we add tests for these 🤗 ?

thaJeztah

comment created time in 2 months

pull request comment moby/libnetwork

Fix 'failed to get network during CreateEndpoint'

you could mimic https://github.com/docker/for-linux/issues/888#issuecomment-572348123 via an integration test. Sidenote - had no idea we had retry logic in the executor, TIL :)

xinfengliu

comment created time in 2 months

pull request comment moby/libnetwork

Fix 'failed to get network during CreateEndpoint'

@xinfengliu thanks for solving this nasty issue ! would appreciate it if you could add a test case here or in https://github.com/moby/moby/blob/master/integration/network/network_test.go to recreate it

xinfengliu

comment created time in 2 months

push event moby/libnetwork

Sebastiaan van Stijn

commit sha fdaaa027ad3af55ce9f9157fa5d4c1b22913bb34

Resolver: fix error handling if we didn't receive a response Commit d5e341e6798c619147691d53ceb6b426b3b8cb9d updated the DNS library and updated the error handling. Due to changes in the library, we now had to check the response itself to check if the response was truncated (Truncated DNS replies should be sent to the client so that the client can retry over TCP). However, bea32b018c874ef35396ef46a3908ca0f9367d76 added an incorrect `nil` check to fix a panic, which ignored situations where an error was returned, but no response (for example, if we failed to connect to the DNS server). In that situation, the error would be ignored, and further down we would consider the connection to have been succesfull, but the DNS server not returning a result. After a "successful" lookup (but no results), we break the loop, and don't attempt lookups in other DNS servers. Versions before bea32b018c874ef35396ef46a3908ca0f9367d76 would produce: Name To resolve: bbc.co.uk. [resolver] query bbc.co.uk. (A) from 172.21.0.2:36181, forwarding to udp:192.168.5.1 [resolver] read from DNS server failed, read udp 172.21.0.2:36181->192.168.5.1:53: i/o timeout [resolver] query bbc.co.uk. (A) from 172.21.0.2:38582, forwarding to udp:8.8.8.8 [resolver] received A record "151.101.0.81" for "bbc.co.uk." from udp:8.8.8.8 [resolver] received A record "151.101.192.81" for "bbc.co.uk." from udp:8.8.8.8 [resolver] received A record "151.101.64.81" for "bbc.co.uk." from udp:8.8.8.8 [resolver] received A record "151.101.128.81" for "bbc.co.uk." from udp:8.8.8.8 Versions after that commit would ignore the error, and stop further lookups: Name To resolve: bbc.co.uk. [resolver] query bbc.co.uk. (A) from 172.21.0.2:59870, forwarding to udp:192.168.5.1 [resolver] external DNS udp:192.168.5.1 returned empty response for "bbc.co.uk." This patch updates the logic to handle the error to log the error (and continue with the next DNS): - if an error is returned, and no response was received - if an error is returned, but it was not related to a truncated response Signed-off-by: Sebastiaan van Stijn <github@gone.nl> Signed-off-by: Tibor Vass <tibor@docker.com> (cherry picked from commit 15ead894b96497b2a2a3363fd8a6d55bd834bb07) Signed-off-by: Tibor Vass <tibor@docker.com>

Arko Dasgupta

commit sha 71d4d82a5ce50453b1121d95544f0a2ae95bef9b

Merge pull request #2552 from thaJeztah/19.03_backport_fix_error_handline [19.03 backport] Resolver: fix error handling if we didn't receive a response

push time in 2 months

PR merged moby/libnetwork

[19.03 backport] Resolver: fix error handling if we didn't receive a response

backport of https://github.com/moby/libnetwork/pull/2551

Addresses https://github.com/moby/moby/issues/41003 Addresses https://github.com/moby/moby/issues/20494#issuecomment-631339768

Commit d5e341e updated the DNS library and updated the error handling.

Due to changes in the library, we now had to check the response itself to check if the response was truncated (Truncated DNS replies should be sent to the client so that the client can retry over TCP).

However, bea32b0 added an incorrect nil check to fix a panic, which ignored situations where an error was returned, but no response (for example, if we failed to connect to the DNS server).

In that situation, the error would be ignored, and further down we would consider the connection to have been successful, but the DNS server not returning a result.

After a "successful" lookup (but no results), we break the loop, and don't attempt lookups in other DNS servers.

Versions before bea32b0 would produce:

Name To resolve: bbc.co.uk.
[resolver] query bbc.co.uk. (A) from 172.21.0.2:36181, forwarding to udp:192.168.5.1
[resolver] read from DNS server failed, read udp 172.21.0.2:36181->192.168.5.1:53: i/o timeout
[resolver] query bbc.co.uk. (A) from 172.21.0.2:38582, forwarding to udp:8.8.8.8
[resolver] received A record "151.101.0.81" for "bbc.co.uk." from udp:8.8.8.8
[resolver] received A record "151.101.192.81" for "bbc.co.uk." from udp:8.8.8.8
[resolver] received A record "151.101.64.81" for "bbc.co.uk." from udp:8.8.8.8
[resolver] received A record "151.101.128.81" for "bbc.co.uk." from udp:8.8.8.8

Versions after that commit would ignore the error, and stop further lookups:

Name To resolve: bbc.co.uk.
[resolver] query bbc.co.uk. (A) from 172.21.0.2:59870, forwarding to udp:192.168.5.1
[resolver] external DNS udp:192.168.5.1 returned empty response for "bbc.co.uk."

This patch updates the logic to handle the error to log the error (and continue with the next DNS):

  • if an error is returned, and no response was received
  • if an error is returned, but it was not related to a truncated response
+1 -1

1 comment

1 changed file

thaJeztah

pr closed time in 2 months

delete branch arkodg/libnetwork

delete branch : add-intf-firewalld-zone

delete time in 2 months

push event moby/libnetwork

Arko Dasgupta

commit sha 7a7209221542dc99b316748c97608dfc276c40f6

Add docker interfaces to firewalld docker zone If firewalld is running, create a new docker zone and add the docker interfaces to the docker zone to allow container networking for distros with firewalld enabled Fixes: https://github.com/moby/libnetwork/issues/2496 Signed-off-by: Arko Dasgupta <arko.dasgupta@docker.com>

Arko Dasgupta

commit sha 13a4da01ec396d4bd59cc2108a7d183d7ac7fddd

Merge pull request #2548 from arkodg/add-intf-firewalld-zone Add docker interfaces to firewalld docker zone

push time in 2 months

PR merged moby/libnetwork

Add docker interfaces to firewalld docker zone

If firewalld is running, create a new docker zone and add the docker interfaces to the docker zone to allow container networking for distros with Firewalld enabled

Fixes: https://github.com/moby/libnetwork/issues/2496

Debug output

[vagrant@centos8 ~]$ uname -a
Linux centos8 4.18.0-80.el8.x86_64 #1 SMP Tue Jun 4 09:19:46 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

[vagrant@centos8 ~]$ sudo firewall-cmd --info-zone docker
docker (active)
  target: ACCEPT
  icmp-block-inversion: no
  interfaces: docker0
  sources: 
  services: 
  ports: 
  protocols: 
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 

[vagrant@centos8 ~]$ sudo docker run -it alpine nslookup www.google.com
  Server:		10.0.2.3
Address:	10.0.2.3:53

Non-authoritative answer:
Name:	www.google.com
Address: 2607:f8b0:4005:80a::2004

Non-authoritative answer:
Name:	www.google.com
Address: 172.217.164.100

Signed-off-by: Arko Dasgupta arko.dasgupta@docker.com

+159 -10

13 comments

2 changed files

arkodg

pr closed time in 2 months

issue closed moby/libnetwork

Port Forwarding does not work on RHEL 8 with Firewalld running with FirewallBackend=nftables

With RHEL8 and Firewalld with FirewallBackend=nftables enabled, docker port forwarding (e.g. docker run --name test-nginx -p 8080:80 -d nginx) does not work

Might need to revisit the logic in https://github.com/docker/libnetwork/blob/master/iptables/firewalld.go to get this to work

Workaround -

  1. Set FirewallBackend in /etc/firewalld/firewalld.conf to iptables
  2. or Include the interface firewall-cmd --permanent --zone=trusted --add-interface=docker0; firewall-cmd --reload

closed time in 2 months

arkodg

push event moby/libnetwork

Sebastiaan van Stijn

commit sha 15ead894b96497b2a2a3363fd8a6d55bd834bb07

Resolver: fix error handling if we didn't receive a response Commit d5e341e6798c619147691d53ceb6b426b3b8cb9d updated the DNS library and updated the error handling. Due to changes in the library, we now had to check the response itself to check if the response was truncated (Truncated DNS replies should be sent to the client so that the client can retry over TCP). However, bea32b018c874ef35396ef46a3908ca0f9367d76 added an incorrect `nil` check to fix a panic, which ignored situations where an error was returned, but no response (for example, if we failed to connect to the DNS server). In that situation, the error would be ignored, and further down we would consider the connection to have been succesfull, but the DNS server not returning a result. After a "successful" lookup (but no results), we break the loop, and don't attempt lookups in other DNS servers. Versions before bea32b018c874ef35396ef46a3908ca0f9367d76 would produce: Name To resolve: bbc.co.uk. [resolver] query bbc.co.uk. (A) from 172.21.0.2:36181, forwarding to udp:192.168.5.1 [resolver] read from DNS server failed, read udp 172.21.0.2:36181->192.168.5.1:53: i/o timeout [resolver] query bbc.co.uk. (A) from 172.21.0.2:38582, forwarding to udp:8.8.8.8 [resolver] received A record "151.101.0.81" for "bbc.co.uk." from udp:8.8.8.8 [resolver] received A record "151.101.192.81" for "bbc.co.uk." from udp:8.8.8.8 [resolver] received A record "151.101.64.81" for "bbc.co.uk." from udp:8.8.8.8 [resolver] received A record "151.101.128.81" for "bbc.co.uk." from udp:8.8.8.8 Versions after that commit would ignore the error, and stop further lookups: Name To resolve: bbc.co.uk. [resolver] query bbc.co.uk. (A) from 172.21.0.2:59870, forwarding to udp:192.168.5.1 [resolver] external DNS udp:192.168.5.1 returned empty response for "bbc.co.uk." This patch updates the logic to handle the error to log the error (and continue with the next DNS): - if an error is returned, and no response was received - if an error is returned, but it was not related to a truncated response Signed-off-by: Sebastiaan van Stijn <github@gone.nl> Signed-off-by: Tibor Vass <tibor@docker.com>

Arko Dasgupta

commit sha 2e24aed516bd5c836e11378bb457dd612aa868ed

Merge pull request #2551 from thaJeztah/fix_error_handling Resolver: fix error handling if we didn't receive a response

push time in 2 months

PR merged moby/libnetwork

Resolver: fix error handling if we didn't receive a response

Addresses https://github.com/moby/moby/issues/41003 Addresses https://github.com/moby/moby/issues/20494#issuecomment-631339768

Commit d5e341e updated the DNS library and updated the error handling.

Due to changes in the library, we now had to check the response itself to check if the response was truncated (Truncated DNS replies should be sent to the client so that the client can retry over TCP).

However, bea32b0 added an incorrect nil check to fix a panic, which ignored situations where an error was returned, but no response (for example, if we failed to connect to the DNS server).

In that situation, the error would be ignored, and further down we would consider the connection to have been successful, but the DNS server not returning a result.

After a "successful" lookup (but no results), we break the loop, and don't attempt lookups in other DNS servers.

Versions before bea32b0 would produce:

Name To resolve: bbc.co.uk.
[resolver] query bbc.co.uk. (A) from 172.21.0.2:36181, forwarding to udp:192.168.5.1
[resolver] read from DNS server failed, read udp 172.21.0.2:36181->192.168.5.1:53: i/o timeout
[resolver] query bbc.co.uk. (A) from 172.21.0.2:38582, forwarding to udp:8.8.8.8
[resolver] received A record "151.101.0.81" for "bbc.co.uk." from udp:8.8.8.8
[resolver] received A record "151.101.192.81" for "bbc.co.uk." from udp:8.8.8.8
[resolver] received A record "151.101.64.81" for "bbc.co.uk." from udp:8.8.8.8
[resolver] received A record "151.101.128.81" for "bbc.co.uk." from udp:8.8.8.8

Versions after that commit would ignore the error, and stop further lookups:

Name To resolve: bbc.co.uk.
[resolver] query bbc.co.uk. (A) from 172.21.0.2:59870, forwarding to udp:192.168.5.1
[resolver] external DNS udp:192.168.5.1 returned empty response for "bbc.co.uk."

This patch updates the logic to handle the error to log the error (and continue with the next DNS):

  • if an error is returned, and no response was received
  • if an error is returned, but it was not related to a truncated response
+1 -1

8 comments

1 changed file

thaJeztah

pr closed time in 2 months

pull request comment moby/libnetwork

Resolver: fix error handling if we didn't receive a response

@thaJeztah can we recreate the empty response using a test similar to https://github.com/moby/libnetwork/blob/1ea375d2b54d2e914e41970a04553ad55ef39b62/resolver_test.go#L216

thaJeztah

comment created time in 3 months

pull request comment moby/libnetwork

[19.03 backport] Resolver: fix error handling if we didn't receive a response

@thaJeztah can we recreate the empty response using a test similar to https://github.com/moby/libnetwork/blob/1ea375d2b54d2e914e41970a04553ad55ef39b62/resolver_test.go#L216

thaJeztah

comment created time in 3 months

pull request comment moby/libnetwork

Add docker interfaces to firewalld docker zone

@cpuguy83 do you prefer having a firewall-integration flag that can be used for firewalld and ufw, and would it be disabled or enabled by default?

arkodg

comment created time in 3 months

pull request comment moby/libnetwork

Add docker interfaces to firewalld docker zone

@cpuguy83 even w/o this change, we have been pushing iptables rules via firewalld. I'm not a fan of creating yet another daemon knob. Although iptables=false is less granular, it should be the recommended approach if the user wants to manage their own iptables rules via iptables or firewalld

arkodg

comment created time in 3 months

pull request comment moby/libnetwork

Enable IPv6 NAT (Cleaner commits)

@etc0de @andryyy @HunterXuan, would really appreciate it if you could carry forward this PR; I will sign up as a reviewer

wrridgwa

comment created time in 3 months

pull request comment moby/libnetwork

Add docker interfaces to firewalld docker zone

So I believe that should happen if the user sets iptables=false

arkodg

comment created time in 3 months

pull request comment moby/moby

[do not merge] vendor libnetwork with firewalld changes

@thaJeztah mentioned that the integration tests run in DinD which cannot be used to test firewalld integration

thaJeztah

comment created time in 3 months

issue comment docker/for-linux

Docker bypasses ufw firewall rules

https://github.com/moby/libnetwork/pull/2548 should add native support for the latest firewalld version soon: it adds docker interfaces to a docker zone that accepts traffic, and we pass direct iptables rules to firewalld for forwarding, DNAT, etc. We would love to support ufw but are unsure of what kind/how much plumbing will be needed to support it, since we are not very familiar with ufw. As @cpuguy83 suggested, would a simple jump to a known ufw-generated iptables chain suffice - iptables -I DOCKER-USER -j ufw-user-forward ? Hoping to hear from the ufw experts/maintainers

binaryfire

comment created time in 3 months

pull request comment moby/moby

[do not merge] vendor libnetwork with firewalld changes

@StefanScherer would it be possible to set up a CentOS 8/Fedora 32 CI machine with firewalld enabled? This will also help with the ongoing work to support docker-ce packages for the above distros. TIA :)

thaJeztah

comment created time in 3 months

pull request comment moby/libnetwork

Add docker interfaces to firewalld docker zone

If a user manually created a docker zone, libnetwork would just reuse that and insert interfaces into that zone. However, the exact behavior of networking depends on the zone settings (which libnetwork will not try to override)

arkodg

comment created time in 3 months

pull request comment moby/moby

[do not merge] vendor libnetwork with firewalld changes

failure looks unrelated

=== RUN   TestJSONFileLoggerWithOpts
--- FAIL: TestJSONFileLoggerWithOpts (0.01s)
    jsonfilelog_test.go:187: open C:\Users\ContainerAdministrator\AppData\Local\Temp\docker-logger-432409777\container.log.1: The process cannot access the file because it is being used by another process.
thaJeztah

comment created time in 3 months

push event arkodg/libnetwork

Arko Dasgupta

commit sha 7a7209221542dc99b316748c97608dfc276c40f6

Add docker interfaces to firewalld docker zone If firewalld is running, create a new docker zone and add the docker interfaces to the docker zone to allow container networking for distros with firewalld enabled Fixes: https://github.com/moby/libnetwork/issues/2496 Signed-off-by: Arko Dasgupta <arko.dasgupta@docker.com>

push time in 3 months

push event arkodg/libnetwork

Arko Dasgupta

commit sha 902a280f9176e5745e31ff311ef8fcb189eba447

Add docker interfaces to firewalld docker zone If firewalld is running, create a new docker zone and add the docker interfaces to the docker zone to allow container networking for distros with Firewalld enabled Fixes: https://github.com/moby/libnetwork/issues/2496 Signed-off-by: Arko Dasgupta <arko.dasgupta@docker.com>

push time in 3 months

push event arkodg/libnetwork

Arko Dasgupta

commit sha 57b7633aca9efd2d164c4243d24eb39d4bc23134

Add docker interfaces to firewalld docker zone If firewalld is running, create a new docker zone and add the docker interfaces to the docker zone to allow container networking for distros with Firewalld enabled Fixes: https://github.com/moby/libnetwork/issues/2496 Signed-off-by: Arko Dasgupta <arko.dasgupta@docker.com>

push time in 3 months

Pull request review comment moby/libnetwork

Add docker interfaces to firewalld docker zone

 func checkRunning() bool { func Passthrough(ipv IPV, args ...string) ([]byte, error) { 	var output string 	logrus.Debugf("Firewalld passthrough: %s, %s", ipv, args)-	if err := connection.sysobj.Call(dbusInterface+".direct.passthrough", 0, ipv, args).Store(&output); err != nil {+	if err := connection.sysObj.Call(dbusInterface+".direct.passthrough", 0, ipv, args).Store(&output); err != nil { 		return nil, err 	} 	return []byte(output), nil }++// getDockerZoneSettings converts the ZoneSettings struct into a interface slice+func getDockerZoneSettings() []interface{} {+	settings := ZoneSettings{+		version:     "1.0",+		name:        dockerZone,+		description: "zone for docker bridge network interfaces",+		target:      "ACCEPT",+	}+	slice := []interface{}{+		settings.version,+		settings.name,+		settings.description,+		settings.unused,+		settings.target,+		settings.services,+		settings.ports,+		settings.icmpBlocks,+		settings.masquerade,+		settings.forwardPorts,+		settings.interfaces,+		settings.sourceAddresses,+		settings.richRules,+		settings.protocols,+		settings.sourcePorts,+		settings.unknown,+	}+	return slice++}++// setupDockerZone creates a zone called docker in firewalld which includes docker interfaces to allow+// container networking+func setupDockerZone() error {+	var zones []string+	// Check if zone exists+	if err := connection.sysObj.Call(dbusInterface+".zone.getZones", 0).Store(&zones); err != nil {+		return err+	}+	if contains(zones, dockerZone) {+		logrus.Infof("Firewalld: %s zone already exists, returning", dockerZone)+		return nil+	}+	logrus.Debugf("Firewalld: creating %s zone", dockerZone)++	settings := getDockerZoneSettings()+	// Permanent+	if err := connection.sysConfObj.Call(dbusInterface+".config.addZone", 0, dockerZone, settings).Err; err != nil {+		return err+	}+	// Reload for change to take effect+	if err := connection.sysObj.Call(dbusInterface+".reload", 0).Err; err != nil {+		return err+	}++	return nil+}++// AddInterfaceFirewalld adds the interface to the trusted zone+func AddInterfaceFirewalld(intf string) error {+	var intfs []string+	// Check if interface is already added to the zone+	if err := connection.sysObj.Call(dbusInterface+".zone.getInterfaces", 0, dockerZone).Store(&intfs); err != nil {+		return err+	}+	// Return if interface is already part of the zone+	if contains(intfs, intf) {+		logrus.Infof("Firewalld: interface %s already part of %s zone, returning", intf, dockerZone)+		return nil+	}++	logrus.Debugf("Firewalld: adding %s interface to %s zone", intf, dockerZone)+	// Runtime+	if err := connection.sysObj.Call(dbusInterface+".zone.addInterface", 0, dockerZone, intf).Err; err != nil {+		return err+	}+	return nil+}++// DelInterfaceFirewalld removes the interface from the trusted zone+func DelInterfaceFirewalld(intf string) error {+	var intfs []string+	// Check if interface is part of the zone+	if err := connection.sysObj.Call(dbusInterface+".zone.getInterfaces", 0, dockerZone).Store(&intfs); err != nil {+		return err+	}+	// Remove interface if it exists+	if !contains(intfs, intf) {+		return fmt.Errorf("Firewalld: unable to find interface %s in %s zone", intf, dockerZone)+	}++	logrus.Debugf("Firewalld: removing %s interface from %s zone", intf, dockerZone)+	// Runtime+	if err := connection.sysObj.Call(dbusInterface+".zone.removeInterface", 0, dockerZone, intf).Err; err != nil {+		return err+	}

same comment as above

arkodg

comment created time in 3 months

Pull request review comment moby/libnetwork

Add docker interfaces to firewalld docker zone

 func checkRunning() bool { func Passthrough(ipv IPV, args ...string) ([]byte, error) { 	var output string 	logrus.Debugf("Firewalld passthrough: %s, %s", ipv, args)-	if err := connection.sysobj.Call(dbusInterface+".direct.passthrough", 0, ipv, args).Store(&output); err != nil {+	if err := connection.sysObj.Call(dbusInterface+".direct.passthrough", 0, ipv, args).Store(&output); err != nil { 		return nil, err 	} 	return []byte(output), nil }++// getDockerZoneSettings converts the ZoneSettings struct into a interface slice+func getDockerZoneSettings() []interface{} {+	settings := ZoneSettings{+		version:     "1.0",+		name:        dockerZone,+		description: "zone for docker bridge network interfaces",+		target:      "ACCEPT",+	}+	slice := []interface{}{+		settings.version,+		settings.name,+		settings.description,+		settings.unused,+		settings.target,+		settings.services,+		settings.ports,+		settings.icmpBlocks,+		settings.masquerade,+		settings.forwardPorts,+		settings.interfaces,+		settings.sourceAddresses,+		settings.richRules,+		settings.protocols,+		settings.sourcePorts,+		settings.unknown,+	}+	return slice++}++// setupDockerZone creates a zone called docker in firewalld which includes docker interfaces to allow+// container networking+func setupDockerZone() error {+	var zones []string+	// Check if zone exists+	if err := connection.sysObj.Call(dbusInterface+".zone.getZones", 0).Store(&zones); err != nil {+		return err+	}+	if contains(zones, dockerZone) {+		logrus.Infof("Firewalld: %s zone already exists, returning", dockerZone)+		return nil+	}+	logrus.Debugf("Firewalld: creating %s zone", dockerZone)++	settings := getDockerZoneSettings()+	// Permanent+	if err := connection.sysConfObj.Call(dbusInterface+".config.addZone", 0, dockerZone, settings).Err; err != nil {+		return err+	}+	// Reload for change to take effect+	if err := connection.sysObj.Call(dbusInterface+".reload", 0).Err; err != nil {+		return err+	}++	return nil+}++// AddInterfaceFirewalld adds the interface to the trusted zone+func AddInterfaceFirewalld(intf string) error {+	var intfs []string+	// Check if interface is already added to the zone+	if err := connection.sysObj.Call(dbusInterface+".zone.getInterfaces", 0, dockerZone).Store(&intfs); err != nil {+		return err+	}+	// Return if interface is already part of the zone+	if contains(intfs, intf) {+		logrus.Infof("Firewalld: interface %s already part of %s zone, returning", intf, dockerZone)+		return nil+	}++	logrus.Debugf("Firewalld: adding %s interface to %s zone", intf, dockerZone)+	// Runtime+	if err := connection.sysObj.Call(dbusInterface+".zone.addInterface", 0, dockerZone, intf).Err; err != nil {+		return err+	}+	return nil

I'd like to keep it this way, since AFAIK handling the above case would require comparing the error string, which might change in the future
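For illustration, here is a minimal godbus sketch of the query-first approach described above; it is not the PR's actual code, and the package name, connection handling, and helper name addInterfaceIdempotent are assumptions. Querying zone.getInterfaces before calling zone.addInterface keeps the operation idempotent without ever parsing firewalld's error strings, which may change between releases.

package firewalldexample

import (
	"github.com/godbus/dbus"
)

const (
	firewalldIface = "org.fedoraproject.FirewallD1"
	firewalldPath  = "/org/fedoraproject/FirewallD1"
	dockerZoneName = "docker"
)

// addInterfaceIdempotent (hypothetical helper) asks firewalld which interfaces
// are already in the zone and only calls zone.addInterface when the interface
// is missing, so no error string has to be inspected.
func addInterfaceIdempotent(conn *dbus.Conn, intf string) error {
	obj := conn.Object(firewalldIface, dbus.ObjectPath(firewalldPath))

	var intfs []string
	if err := obj.Call(firewalldIface+".zone.getInterfaces", 0, dockerZoneName).Store(&intfs); err != nil {
		return err
	}
	for _, i := range intfs {
		if i == intf {
			// already part of the zone, nothing to do
			return nil
		}
	}
	return obj.Call(firewalldIface+".zone.addInterface", 0, dockerZoneName, intf).Err
}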

arkodg

comment created time in 3 months

Pull request review commentmoby/libnetwork

Add docker interfaces to firewalld docker zone

@@ const (
 	// Ebtables point to bridge table
 	Ebtables IPV = "eb"
 )
+
 const (
-	dbusInterface = "org.fedoraproject.FirewallD1"
-	dbusPath      = "/org/fedoraproject/FirewallD1"
+	dbusInterface  = "org.fedoraproject.FirewallD1"
+	dbusPath       = "/org/fedoraproject/FirewallD1"
+	dbusConfigPath = "/org/fedoraproject/FirewallD1/config"
+	dockerZone     = "docker"
 )
 
 // Conn is a connection to firewalld dbus endpoint.
 type Conn struct {
-	sysconn *dbus.Conn
-	sysobj  dbus.BusObject
-	signal  chan *dbus.Signal
+	sysconn    *dbus.Conn
+	sysObj     dbus.BusObject
+	sysConfObj dbus.BusObject
+	signal     chan *dbus.Signal
+}
+
+// ZoneSettings holds the firewalld zone settings, documented in
+// https://firewalld.org/documentation/man-pages/firewalld.dbus.html
+type ZoneSettings struct {
+	version         string
+	name            string
+	description     string
+	unused          bool
+	target          string
+	services        []string
+	ports           [][]interface{}
+	icmpBlocks      []string
+	masquerade      bool
+	forwardPorts    [][]interface{}
+	interfaces      []string
+	sourceAddresses []string
+	richRules       []string
+	protocols       []string
+	sourcePorts     [][]interface{}
+	unknown         bool

thanks, will include this

arkodg

comment created time in 3 months

issue commentfirewalld/firewalld

Unable to create new zone using the dbus interface

np, hoping you can take a look at the PR to recommend some sane zone settings

arkodg

comment created time in 3 months

pull request commentmoby/libnetwork

Add docker interfaces to firewalld docker zone

updated the PR to add docker interfaces to a new docker zone. PTAL @yrro @cpuguy83 @thaJeztah

arkodg

comment created time in 3 months

push eventarkodg/libnetwork

Arko Dasgupta

commit sha d0d4036e8af57adae770437aca622eb553c1c03f

Add docker interfaces to firewalld docker zone

If firewalld is running, create a new docker zone and add the docker
interfaces to the docker zone to allow container networking for distros
with Firewalld enabled

Fixes: https://github.com/moby/libnetwork/issues/2496
Signed-off-by: Arko Dasgupta <arko.dasgupta@docker.com>

view details

push time in 3 months

issue closedfirewalld/firewalld

Unable to create new zone using the dbus interface

Attempting to create a new zone called docker for docker networking interfaces for all distros with firewalld enabled

Using https://firewalld.org/documentation/man-pages/firewalld.dbus.html#FirewallD1.config.Methods.addZone as a reference

I'm hitting this error when calling addZone

ERRO[2020-05-05T20:21:02.869141449Z] Traceback (most recent call last):
  File "/usr/lib64/python3.6/site-packages/dbus/service.py", line 654, in _message_cb
    (candidate_method, parent_method) = _method_lookup(self, method_name, interface_name)
  File "/usr/lib64/python3.6/site-packages/dbus/service.py", line 246, in _method_lookup
    raise UnknownMethodException('%s is not a valid method of interface %s' % (method_name, dbus_interface))
dbus.exceptions.UnknownMethodException: org.freedesktop.DBus.Error.UnknownMethod: Unknown method: addZone is not a valid method of interface org.fedoraproject.FirewallD1.config 

On my CentOS VM running firewalld 0.6.3

[vagrant@centos8 ~]$ sudo firewall-cmd -V
0.6.3

Relates to - https://github.com/moby/libnetwork/pull/2548

TIA !

closed time in 3 months

arkodg

issue commentfirewalld/firewalld

Unable to create new zone using the dbus interface

@erig0 thanks for the help, got it to work. I was using the wrong object-path, which you pointed out, and had to pass an interface slice to the addZone API; I reverse engineered the dbus API arg type using getZoneSettings

Also you might want to update https://firewalld.org/documentation/man-pages/firewalld.dbus.html#FirewallD1.config.Methods.addZone to include the 16th argument (boolean)
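To make that reverse-engineering step concrete, here is a rough godbus sketch of the same idea; it is not the PR's code, and the zone values are placeholders. It dumps the settings tuple getZoneSettings returns for an existing zone to learn the expected argument layout, then passes a matching 16-element interface slice (the last element being the boolean missing from the man page) to config.addZone on the config object path.

package main

import (
	"fmt"

	"github.com/godbus/dbus"
)

const fwIface = "org.fedoraproject.FirewallD1"

func main() {
	conn, err := dbus.SystemBus()
	if err != nil {
		panic(err)
	}

	// Inspect the settings tuple of an existing zone to see the exact
	// argument layout addZone expects.
	base := conn.Object(fwIface, "/org/fedoraproject/FirewallD1")
	call := base.Call(fwIface+".getZoneSettings", 0, "public")
	if call.Err != nil {
		panic(call.Err)
	}
	fmt.Printf("zone settings: %#v\n", call.Body[0])

	// addZone lives on the config object path, not the base path.
	cfg := conn.Object(fwIface, "/org/fedoraproject/FirewallD1/config")
	settings := []interface{}{
		"1.0", "docker", "zone for docker bridge network interfaces", false,
		"ACCEPT", []string{}, [][]interface{}{}, []string{}, false,
		[][]interface{}{}, []string{}, []string{}, []string{}, []string{},
		[][]interface{}{}, false,
	}
	if err := cfg.Call(fwIface+".config.addZone", 0, "docker", settings).Err; err != nil {
		panic(err)
	}
}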

arkodg

comment created time in 3 months

push eventarkodg/libnetwork

Arko Dasgupta

commit sha ed91f60eb738c8c57f0d23023d69eb893f9ffdd7

Add docker interfaces to firewalld docker zone

If firewalld is running, create a new docker zone and add the docker
interfaces to the docker zone to allow container networking for distros
with Firewalld enabled

Fixes: https://github.com/moby/libnetwork/issues/2496
Signed-off-by: Arko Dasgupta <arko.dasgupta@docker.com>

view details

push time in 3 months

pull request commentmoby/libnetwork

Add docker interfaces to firewalld docker zone

@yrro thanks for taking a look !

  1. attempted to create a new docker zone but am hitting some issues, following it up with firewalld
  2. libnetwork does handle the reload case - https://github.com/moby/libnetwork/blob/1ea375d2b54d2e914e41970a04553ad55ef39b62/iptables/firewalld.go#L138 (see the sketch after this list)
  3. the notion of zones or just security policy is definitely something that would be nice to have, but I'm not sure what the interface would look like yet, and I'm hoping to keep that out of this PR
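As a rough illustration of the reload handling referred to in point 2, the sketch below watches for firewalld's Reloaded signal over D-Bus and re-runs setup; the onReload callback and package name are hypothetical placeholders, and this is not the libnetwork implementation itself.

package firewalldexample

import (
	"strings"

	"github.com/godbus/dbus"
)

// watchFirewalldReload invokes onReload (a caller-supplied callback, e.g. one
// that re-creates the docker zone and re-adds bridge interfaces) whenever
// firewalld emits its Reloaded signal on the system bus.
func watchFirewalldReload(conn *dbus.Conn, onReload func()) error {
	rule := "type='signal',interface='org.fedoraproject.FirewallD1',member='Reloaded'"
	if err := conn.BusObject().Call("org.freedesktop.DBus.AddMatch", 0, rule).Err; err != nil {
		return err
	}

	ch := make(chan *dbus.Signal, 10)
	conn.Signal(ch)
	go func() {
		for sig := range ch {
			if strings.HasSuffix(sig.Name, ".Reloaded") {
				onReload()
			}
		}
	}()
	return nil
}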
arkodg

comment created time in 3 months

issue openedfirewalld/firewalld

Unable to create new zone using the dbus interface

Attempting to create a new zone called docker for docker networking interfaces for all distros with firewalld enabled

Using https://firewalld.org/documentation/man-pages/firewalld.dbus.html#FirewallD1.config.Methods.addZone as a reference

I'm hitting this error when calling addZone

ERRO[2020-05-05T20:21:02.869141449Z] Traceback (most recent call last):
  File "/usr/lib64/python3.6/site-packages/dbus/service.py", line 654, in _message_cb
    (candidate_method, parent_method) = _method_lookup(self, method_name, interface_name)
  File "/usr/lib64/python3.6/site-packages/dbus/service.py", line 246, in _method_lookup
    raise UnknownMethodException('%s is not a valid method of interface %s' % (method_name, dbus_interface))
dbus.exceptions.UnknownMethodException: org.freedesktop.DBus.Error.UnknownMethod: Unknown method: addZone is not a valid method of interface org.fedoraproject.FirewallD1.config 

On my CentOS VM running firewalld 0.6.3

[vagrant@centos8 ~]$ sudo firewall-cmd -V
0.6.3

Relates to - https://github.com/moby/libnetwork/pull/2548

TIA !

created time in 3 months

push eventarkodg/libnetwork

Arko Dasgupta

commit sha cd39849ddcddc3b611d47884a0e774a2aef7cccd

Add docker interfaces to firewalld docker zone

If firewalld is running, create a new docker zone and add the docker
interfaces to the docker zone to allow container networking for distros
with Firewalld enabled

Fixes: https://github.com/moby/libnetwork/issues/2496
Signed-off-by: Arko Dasgupta <arko.dasgupta@docker.com>

view details

push time in 3 months

issue commentmoby/moby

[feature request] nftables support

@lee-jnk AFAIK podman offloads networking to cni (https://github.com/containernetworking/plugins) which does have an open PR for nftables https://github.com/containernetworking/plugins/pull/462

senden9

comment created time in 3 months
