Akihiro Suda AkihiroSuda NTT Tokyo, Japan https://akihirosuda.github.io/ Moby (former Docker Engine), BuildKit, and containerd maintainer. https://twitter.com/_AkihiroSuda_ ("AkihiroSuda" without underscores is NOT my Twitter)

issue closed GoogleCloudPlatform/google-cloud-powershell

Request: `gcloud components install powershell` on Linux and macOS

PowerShell now officially supports Linux and macOS.

So could you please consider enabling `gcloud components install powershell` on Linux and macOS as well?

closed time in 6 hours

AkihiroSuda

issue closed linuxkit/linuxkit

TODO: use dhcpcd as both onboot and service

From https://github.com/linuxkit/linuxkit/pull/1701#issuecomment-296936114

The reason for invoking dhcp in onboot is so that we can guarantee that we have an IP address for everything starting afterwards. If we just start dhcp in the service section, other services, like for example an nginx container, need to check/wait until the network is up. We had discussed using dhcpcd in oneshot mode in onboot and as a service.

Before duplicating dhcpcd in all the yaml files and keeping them up to date, I think we had better implement moby/tool#5 (inheritance) or moby/tool#4 (include) in the builder.
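
For reference, a minimal sketch of the duplication being discussed, assuming dhcpcd's -1 (oneshot) flag; the image tag and config path are placeholders:

onboot:
  # oneshot: exit after the first lease, guaranteeing an address
  # before anything later in the boot sequence starts
  - name: dhcpcd
    image: linuxkit/dhcpcd:<tag>
    command: ["/sbin/dhcpcd", "--nobackground", "-f", "/dhcpcd.conf", "-1"]
services:
  # long-running: keep renewing the lease after boot
  - name: dhcpcd
    image: linuxkit/dhcpcd:<tag>
    command: ["/sbin/dhcpcd", "--nobackground", "-f", "/dhcpcd.conf"]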

closed time in 6 hours

AkihiroSuda

issue closed mistifyio/go-zfs

implement `zfs promote`

I can open a PR if no one is working on this yet.
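
For reference, a minimal sketch of how this could shell out to the zfs CLI in the same style as the rest of the library; the function name and signature are assumptions, not the library's API:

package zfs

import (
	"fmt"
	"os/exec"
)

// Promote promotes the clone dataset `name` so that it no longer
// depends on its origin snapshot (see zfs(8), "zfs promote").
func Promote(name string) error {
	out, err := exec.Command("zfs", "promote", name).CombinedOutput()
	if err != nil {
		return fmt.Errorf("zfs promote %s: %v: %q", name, err, out)
	}
	return nil
}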

closed time in 6 hours

AkihiroSuda

issue closed rd235/libslirp

man: add do_pty magic numbers

https://github.com/rd235/libslirp/blob/37fd650ad7fba7eb0360b1e1d0abf69cac6eb403/man/libslirpfwd.3#L56

However, IIUC, do_pty = 1 is not functional unless an outdated slirp.telnetd is installed?

https://github.com/rd235/libslirp/blob/37fd650ad7fba7eb0360b1e1d0abf69cac6eb403/src/misc.c#L82

closed time in 6 hours

AkihiroSuda

issue closed go-cmd/cmd

Request: support for Pdeathsig
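
For context, this is the plain os/exec pattern (Linux-only) that the request asks go-cmd/cmd to expose; a sketch, not go-cmd/cmd API:

package main

import (
	"os/exec"
	"syscall"
)

func main() {
	c := exec.Command("sleep", "60")
	// Ask the kernel to deliver SIGKILL to the child when the parent
	// dies, so the child cannot outlive us.
	c.SysProcAttr = &syscall.SysProcAttr{Pdeathsig: syscall.SIGKILL}
	if err := c.Start(); err != nil {
		panic(err)
	}
	_ = c.Wait()
}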

closed time in 6 hours

AkihiroSuda

issue comment moby/moby

Push 19.03 engine branch to this repo

Thanks 🎉

Can we push tags as well?

Not having 17.04+ tags may give people a false sense that the project stopped after 17.03.

cpuguy83

comment created time in 7 hours

issue opened containerd/cri

sync OWNERS with containerd/project/MAINTAINERS

As containerd/cri is a core subproject of containerd, https://github.com/containerd/cri/blob/master/OWNERS should be synced with https://github.com/containerd/project/blob/master/MAINTAINERS

created time in 7 hours

pull request comment containerd/cri

vendor kubernetes 1.17.1

/test pull-cri-containerd-node-e2e

AkihiroSuda

comment created time in 8 hours

issue closed moby/moby

docker fails to start with SIGSEGV

Description

Docker daemon fails to start when restarted with systemctl restart docker

Steps to reproduce the issue:

  1. Restart docker with systemctl
  2. Check syslog

Describe the results you received:

Systemd is restarting docker:

Apr 24 13:47:37 localhost systemd[1]: Stopping Docker Application Container Engine...
Apr 24 13:47:53 localhost systemd[1]: Stopped Docker Application Container Engine.
Apr 24 13:47:53 localhost systemd[1]: Closed Docker Socket for the API.
Apr 24 13:47:53 localhost systemd[1]: Stopping Docker Socket for the API.
Apr 24 13:47:53 localhost systemd[1]: Starting Docker Socket for the API.
Apr 24 13:47:53 localhost systemd[1]: Listening on Docker Socket for the API.
Apr 24 13:47:53 localhost systemd[1]: Starting Docker Application Container Engine...
Apr 24 13:48:05 localhost systemd[1]: docker.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Apr 24 13:48:05 localhost systemd[1]: Failed to start Docker Application Container Engine.

But dockerd fails:

Apr 24 13:59:27 localhost dockerd[17816]: time="2017-04-24T13:59:27.061012296Z" level=warning msg="libcontainerd: client is out of sync, restore was called on a fully synced container (386eb7812980fe22ac7e2f32d3cfee52686ba5aef116b67ea2a1cd10e5a01b5f)."
Apr 24 13:59:27 localhost dockerd[17816]: time="2017-04-24T13:59:27.061623954Z" level=debug msg="libcontainerd: restore container 386eb7812980fe22ac7e2f32d3cfee52686ba5aef116b67ea2a1cd10e5a01b5f state running"
Apr 24 13:59:27 localhost dockerd[17816]: panic: runtime error: invalid memory address or nil pointer dereference
Apr 24 13:59:27 localhost dockerd[17816]: [signal SIGSEGV: segmentation violation code=0x1 addr=0x40 pc=0x59add1]
Apr 24 13:59:27 localhost dockerd[17816]: goroutine 6155 [running]:
Apr 24 13:59:27 localhost dockerd[17816]: panic(0x16dc2a0, 0xc42000c080)
Apr 24 13:59:27 localhost dockerd[17816]: #011/usr/local/go/src/runtime/panic.go:500 +0x1a1
Apr 24 13:59:27 localhost dockerd[17816]: github.com/docker/docker/daemon.(*Daemon).createSpec(0xc4203e2200, 0xc4214c7000, 0x0, 0x0, 0xc4203e2200)
Apr 24 13:59:27 localhost dockerd[17816]: #011/usr/src/docker/.gopath/src/github.com/docker/docker/daemon/oci_linux.go:729 +0xe11
Apr 24 13:59:27 localhost dockerd[17816]: github.com/docker/docker/daemon.(*Daemon).containerStart(0xc4203e2200, 0xc4214c7000, 0x0, 0x0, 0x0, 0x0, 0xc42349b400, 0x0, 0x0)
Apr 24 13:59:27 localhost dockerd[17816]: #011/usr/src/docker/.gopath/src/github.com/docker/docker/daemon/start.go:149 +0x290
Apr 24 13:59:27 localhost dockerd[17816]: github.com/docker/docker/daemon.(*Daemon).StateChanged.func2(0xc42349ade0, 0xc4203e2200, 0xc4214c7000, 0xc421a3b104, 0x4, 0x100000000, 0x0, 0x0, 0x0, 0xc423496680)
Apr 24 13:59:27 localhost dockerd[17816]: #011/usr/src/docker/.gopath/src/github.com/docker/docker/daemon/monitor.go:65 +0x267
Apr 24 13:59:27 localhost dockerd[17816]: created by github.com/docker/docker/daemon.(*Daemon).StateChanged
Apr 24 13:59:27 localhost dockerd[17816]: #011/usr/src/docker/.gopath/src/github.com/docker/docker/daemon/monitor.go:76 +0x6de
Apr 24 13:59:27 localhost systemd[1]: docker.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Apr 24 13:59:27 localhost systemd[1]: Failed to start Docker Application Container Engine.
Apr 24 13:59:27 localhost systemd[1]: Dependency failed for daemon for configuring additional routing and iptables rules for additional IPs.
Apr 24 13:59:27 localhost systemd[1]: docker_routing_rules.service: Job docker_routing_rules.service/start failed with result 'dependency'.
Apr 24 13:59:27 localhost systemd[1]: docker.service: Unit entered failed state.
Apr 24 13:59:27 localhost systemd[1]: docker.service: Failed with result 'exit-code'.

Additional information you deem important (e.g. issue happens only occasionally):

It's not consistently reproducible; eventually some restart will succeed and docker will start.

Output of docker version: Please note: we confirmed this on 17.03.0 EE also

Client:
 Version:      17.03.1-ee-3
 API version:  1.27
 Go version:   go1.7.5
 Git commit:   3fcee33
 Built:        Thu Mar 30 20:06:11 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.03.1-ee-3
 API version:  1.27 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   3fcee33
 Built:        Thu Mar 30 20:06:11 2017
 OS/Arch:      linux/amd64
 Experimental: false

Output of docker info:

Containers: 393
 Running: 316
 Paused: 0
 Stopped: 77
Images: 1886
Server Version: 17.03.1-ee-3
Storage Driver: aufs
 Root Dir: /opt/io1/docker/aufs
 Backing Filesystem: extfs
 Dirs: 4506
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins: 
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 4ab9917febca54791c5f071a9d1f404867857fcc
runc version: 54296cf40ad8143b62dbcaa1d90e520a2136ddfe
init version: 949e6fa
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.4.0-72-generic
Operating System: Ubuntu 16.04.2 LTS
OSType: linux
Architecture: x86_64
CPUs: 64
Total Memory: 960.7 GiB
Name: ip-10-69-11-89
ID: UGZS:UFD3:GB4C:W5MX:JU2L:K7PH:6ZWS:4GPM:27Q5:UNNN:X3DC:YDT7
Docker Root Dir: /opt/io1/docker
Debug Mode (client): false
Debug Mode (server): true
 File Descriptors: 2962
 Goroutines: 1968
 System Time: 2017-04-25T10:12:51.915536168Z
 EventsListeners: 1
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: true

Additional environment details (AWS, VirtualBox, physical, etc.):

  • running on AWS x16.xlarge

closed time in 8 hours

piontec

issue comment moby/moby

docker fails to start with SIGSEGV

Seems already resolved?

piontec

comment created time in 8 hours

issue closed moby/moby

panic in 1.12.5

Reported by @cloutiertyler on Docker for Mac: https://github.com/docker/for-mac/issues/205#issuecomment-276240698

I made a gist which isolates the panic stack trace: https://gist.github.com/samoht/ad87fb019d6e29580145c31bd4ec6488

Please let me know if you need more information (or if you want this kind of issue to be formatted differently / reported in a different place). It happened using the stable channel of Docker for Mac on the 10th of January, running 1.12.5.

closed time in 8 hours

samoht

issue comment moby/moby

panic in 1.12.5

Closable?

samoht

comment created time in 8 hours

issue comment moby/moby

support zstd as archive format

Link: OCI Image Spec added support for zstd recently: https://github.com/opencontainers/image-spec/pull/788

This is already implemented in containerd, but not yet in Docker/Moby.

grexe

comment created time in 8 hours

issue closed moby/moby

Docker swarm from DAB file infinitely tries to pull an image.


BUG REPORT INFORMATION

Description

When you deploy a DAB file to docker swarm with a reference to a docker image that does not exist, it never stops trying to pull the image. Docker logs each of these attempts until the docker.log file fills the server.

Steps to reproduce the issue:

  1. From a team city deployment script
  2. Create a docker DAB file with an image repository that does exist and a tag that does not exist.
  3. Run the command "docker deploy --with-registry-auth --file environment..dab xxxxxx"
  4. Load up the log and watch it grow "less /var/log/docker.log"

Describe the results you received:

The log file grows until the server's disk is full; it never gives up trying to pull this image.

Describe the results you expected:

Possibly try 10 times then give up

Additional information you deem important (e.g. issue happens only occasionally):

Output of docker log:

time="2016-10-12T14:09:55.208732899Z" level=error msg="Not continuing with pull after error: Tag develop not found in repository docker.io/xxxxxx/xxxxxqueueprocessor" time="2016-10-12T14:09:55.208784224Z" level=error msg="pulling image failed" error="Tag develop not found in repository docker.io/xxxxxx/xxxxxqueueprocessor" module=taskmanager task.id=bikwjqdey83gt4gr3h6nmiu8n time="2016-10-12T14:09:55.209408668Z" level=error msg="fatal task error" error="No such image: controlf1/cf1.charles.journeysummaryqueueprocessor:develop" module=taskmanager task.id=bikwjqdey83gt4gr3h6nmiu8n ... Lots more times

Additional environment details (AWS, VirtualBox, physical, etc.):

AWS Docker swarm beta 5 and 6

closed time in 8 hours

PaulKGray

issue comment moby/moby

Docker swarm from DAB file infinitely tries to pull an image.

DAB was removed: https://github.com/docker/cli/pull/2216

PaulKGray

comment created time in 8 hours

issue closed moby/moby

panic when starting dockerd (v1.12.0)

docker version

[root@cpcentos6 docker.service.d]# docker version
Client:
 Version:      1.12.0
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   8eab29e
 Built:        
 OS/Arch:      linux/amd64

docker swarm mode

[root@cpcentos6 docker.service.d]# /usr/bin/dockerd  -H=tcp://0.0.0.0:2376 --registry-mirror=http://mirror.ghostcloud.cn --insecure-registry=192.168.6.2:5001
WARN[0000] [!] DON'T BIND ON ANY IP ADDRESS WITHOUT setting -tlsverify IF YOU DON'T KNOW WHAT YOU'RE DOING [!] 
INFO[0000] libcontainerd: new containerd process, pid: 41122 
WARN[0000] containerd: low RLIMIT_NOFILE changing to max  current=1024 max=4096
WARN[0001] devmapper: Usage of loopback devices is strongly discouraged for production use. Please use `--storage-opt dm.thinpooldev` or use `man docker` to refer to dm.thinpooldev section. 
WARN[0001] devmapper: Base device already exists and has filesystem xfs on it. User specified filesystem  will be ignored. 
INFO[0001] [graphdriver] using prior storage driver "devicemapper" 
INFO[0001] Graph migration to content-addressability took 0.00 seconds 
WARN[0001] mountpoint for pids not found                
INFO[0001] Loading containers: start.                   
INFO[0001] Firewalld running: true                      
INFO[0001] Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address 

INFO[0001] Loading containers: done.                    
INFO[0002] Listening for connections                     addr=[::]:2377 proto=tcp
INFO[0002] Listening for local connections               addr=/var/lib/docker/swarm/control.sock proto=unix
WARN[0002] ignoring request to join cluster, because raft state already exists 
INFO[0002] 63b069b31f4ee3d0 became follower at term 5   
INFO[0002] newRaft 63b069b31f4ee3d0 [peers: [], term: 5, commit: 0, applied: 0, lastindex: 0, lastterm: 0] 
PANI[0003] tocommit(3425) is out of range [lastIndex(0)]. Was the raft log corrupted, truncated, or lost? 
panic: (*logrus.Entry) (0x1cfe940,0xc820ad9a40)

goroutine 87 [running]:
panic(0x1cfe940, 0xc820ad9a40)
    /usr/local/go/src/runtime/panic.go:481 +0x3e6
github.com/Sirupsen/logrus.Entry.log(0xc82000a2c0, 0xc82036c090, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xc820ab0000, ...)
    /root/rpmbuild/BUILD/docker-engine/vendor/src/github.com/Sirupsen/logrus/entry.go:113 +0x62c
github.com/Sirupsen/logrus.(*Entry).Panic(0xc820305a40, 0xc8210b63b8, 0x1, 0x1)
    /root/rpmbuild/BUILD/docker-engine/vendor/src/github.com/Sirupsen/logrus/entry.go:158 +0x99
github.com/Sirupsen/logrus.(*Entry).Panicf(0xc820305a40, 0x20a6700, 0x5d, 0xc820cbb9c0, 0x2, 0x2)
    /root/rpmbuild/BUILD/docker-engine/vendor/src/github.com/Sirupsen/logrus/entry.go:206 +0x139
github.com/coreos/etcd/raft.(*raftLog).commitTo(0xc8203afd50, 0xd61)
    /root/rpmbuild/BUILD/docker-engine/vendor/src/github.com/coreos/etcd/raft/log.go:194 +0x1a6
github.com/coreos/etcd/raft.(*raft).handleHeartbeat(0xc820d36780, 0x8, 0xbcffd7d37c9266f, 0x4e922094287f5a30, 0x5, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
    /root/rpmbuild/BUILD/docker-engine/vendor/src/github.com/coreos/etcd/raft/raft.go:771 +0x44
github.com/coreos/etcd/raft.stepFollower(0xc820d36780, 0x8, 0xbcffd7d37c9266f, 0x4e922094287f5a30, 0x5, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
    /root/rpmbuild/BUILD/docker-engine/vendor/src/github.com/coreos/etcd/raft/raft.go:736 +0x119c
github.com/coreos/etcd/raft.(*raft).Step(0xc820d36780, 0x8, 0xbcffd7d37c9266f, 0x4e922094287f5a30, 0x5, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
    /root/rpmbuild/BUILD/docker-engine/vendor/src/github.com/coreos/etcd/raft/raft.go:564 +0x3e0
github.com/coreos/etcd/raft.(*node).run(0xc820cddbd0, 0xc820d36780)
    /root/rpmbuild/BUILD/docker-engine/vendor/src/github.com/coreos/etcd/raft/node.go:310 +0x90e
created by github.com/coreos/etcd/raft.RestartNode
    /root/rpmbuild/BUILD/docker-engine/vendor/src/github.com/coreos/etcd/raft/node.go:215 +0x2e4

closed time in 8 hours

chenpengdev

issue closed moby/moby

User namespaces on Gentoo

BUG REPORT INFORMATION

Output of docker version:

Docker version 1.11.0, build 4dc5990

Output of docker info:

Containers: 5
 Running: 0
 Paused: 0
 Stopped: 5
Images: 1
Server Version: 1.11.0
Storage Driver: devicemapper
 Pool Name: docker-8:3-4187082-pool
 Pool Blocksize: 65.54 kB
 Base Device Size: 10.74 GB
 Backing Filesystem: ext4
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data Space Used: 309.8 MB
 Data Space Total: 107.4 GB
 Data Space Available: 107.1 GB
 Metadata Space Used: 974.8 kB
 Metadata Space Total: 2.147 GB
 Metadata Space Available: 2.147 GB
 Udev Sync Supported: true
 Deferred Removal Enabled: false
 Deferred Deletion Enabled: false
 Deferred Deleted Device Count: 0
 Data loop file: /var/lib/docker/100000.100000/devicemapper/devicemapper/data
 WARNING: Usage of loopback devices is strongly discouraged for production use. Either use `--storage-opt dm.thinpooldev` or use `--storage-opt dm.no_warn_on_loop_devices=true` to suppress this warning.
 Metadata loop file: /var/lib/docker/100000.100000/devicemapper/devicemapper/metadata
 Library Version: 1.02.93 (2015-01-30)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge null host
Kernel Version: 4.4.8-hardened-r1
Operating System: Gentoo/Linux
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 15.66 GiB
Name: test
ID: YHDF:63TG:76II:I6S6:HRKF:3UUI:V3NX:CDXX:JMQM:ZMP5:6YC7:QB4T
Docker Root Dir: /var/lib/docker/100000.100000
Debug mode (client): false
Debug mode (server): true
 File Descriptors: 18
 Goroutines: 28
 System Time: 2016-05-26T01:33:32.728942903+02:00
 EventsListeners: 0
Registry: https://index.docker.io/v1/

Additional environment details (AWS, VirtualBox, physical, etc.):

Physical server running a current version of Gentoo

Steps to reproduce the issue:

  1. Install docker
  2. Enable user namespaces by adding "--userns-remap=default" to the DOCKER_OPTS in the /etc/conf.d/docker file
  3. docker run hello-world

Describe the results you received:

docker: Error response from daemon: Container command '/hello' not found or does not exist..

Log:

time="2016-05-26T01:36:29+02:00" level=error msg="containerd: start container" error="oci runtime error: could not synchronise with container process: no such file or directory" id=de655db31400d13f85ab2e3880194c1d51199e25d733d079012b76b8ebe6b872
time="2016-05-26T01:36:29.736755548+02:00" level=debug msg="attach: stdout: end"
time="2016-05-26T01:36:29.736906320+02:00" level=debug msg="attach: stderr: end"
time="2016-05-26T01:36:29.737206672+02:00" level=debug msg="Revoking external connectivity on endpoint suspicious_kirch (70bf669317cff345060a56d7d7ae6dd5aee904b8b9fe7f19cb20306bda731e92)"
time="2016-05-26T01:36:29.809364255+02:00" level=debug msg="Releasing addresses for endpoint suspicious_kirch's interface on network bridge"
time="2016-05-26T01:36:29.809455218+02:00" level=debug msg="ReleaseAddress(LocalDefault/172.17.0.0/16, 172.17.0.2)"
time="2016-05-26T01:36:29.836453314+02:00" level=debug msg="devmapper: UnmountDevice(hash=5db7c911a96814c5614cdb437b7add466d5f6f71315eccd6957a87249ef2afa1)"
time="2016-05-26T01:36:29.836503263+02:00" level=debug msg="devmapper: Unmount(/var/lib/docker/100000.100000/devicemapper/mnt/5db7c911a96814c5614cdb437b7add466d5f6f71315eccd6957a87249ef2afa1)"
time="2016-05-26T01:36:29.993813721+02:00" level=debug msg="devmapper: Unmount done"
time="2016-05-26T01:36:29.993878939+02:00" level=debug msg="devmapper: deactivateDevice(5db7c911a96814c5614cdb437b7add466d5f6f71315eccd6957a87249ef2afa1)"
time="2016-05-26T01:36:29.993979541+02:00" level=debug msg="devmapper: removeDevice START(docker-8:3-4187082-5db7c911a96814c5614cdb437b7add466d5f6f71315eccd6957a87249ef2afa1)"
time="2016-05-26T01:36:30.008836193+02:00" level=debug msg="devmapper: removeDevice END(docker-8:3-4187082-5db7c911a96814c5614cdb437b7add466d5f6f71315eccd6957a87249ef2afa1)"
time="2016-05-26T01:36:30.008883688+02:00" level=debug msg="devmapper: deactivateDevice END(5db7c911a96814c5614cdb437b7add466d5f6f71315eccd6957a87249ef2afa1)"
time="2016-05-26T01:36:30.008909393+02:00" level=debug msg="devmapper: UnmountDevice(hash=5db7c911a96814c5614cdb437b7add466d5f6f71315eccd6957a87249ef2afa1) END"
time="2016-05-26T01:36:30.009090425+02:00" level=error msg="Handler for POST /v1.23/containers/de655db31400d13f85ab2e3880194c1d51199e25d733d079012b76b8ebe6b872/start returned error: Container command '/hello' not found or does not exist."

Describe the results you expected:

The normal hello-world output

Additional information you deem important (e.g. issue happens only occasionally):

This works fine when user namespaces are not used. This could be an issue with the Docker build provided by Gentoo. I tried to grant some access rights to the /var/lib/docker directories (chmod -R o+rx) and to the /run/docker* directories, but had no success. If you have other ideas or need more information, please do not hesitate to ask.

closed time in 8 hours

wapolinar

issue comment moby/moby

User namespaces on Gentoo

I'm closing this, but feel free to reopen if this is still an issue with recent releases.

wapolinar

comment created time in 8 hours

issue closed moby/moby

hack/make.sh dynbinary ubuntu fails

Hello,

Cannot build the docker v1.11.1 ubuntu package using hack/make.sh. Tested under 14.04.

Steps to reproduce the issue:

  • Clone docker repo into clean ubuntu:14.04 container
apt-get update
apt-get -y install git
git clone https://github.com/docker/docker
cd docker && git checkout v1.11.1
  • Install build dependencies
apt-get -y install golang-1.6 go-md2man libdevmapper-dev debhelper \
  build-essential libapparmor-dev dh-systemd btrfs-tools
  • Try to build it
export AUTO_GOPATH=1
hack/make.sh dynbinary ubuntu
  • See it fails
root@4cdfe923d1de:/docker# hack/make.sh dynbinary ubuntu
# WARNING! I don't seem to be running in a Docker container.
# The result of this command might be an incorrect build, and will not be
# officially supported.
#
# Try this instead: make all
#

bundles/1.11.1 already exists. Removing.

---> Making bundle: dynbinary (in bundles/1.11.1/dynbinary)
Building: bundles/1.11.1/dynbinary/docker-1.11.1
Created binary: bundles/1.11.1/dynbinary/docker-1.11.1
+++ '[' -x /usr/local/bin/docker-runc ']'

---> Making bundle: ubuntu (in bundles/1.11.1/ubuntu)
cp: cannot stat 'bundles/1.11.1/ubuntu/../binary/docker-1.11.1': No such file or directory

Expecting to see deb package.

closed time in 8 hours

kshcherban

issue comment moby/moby

hack/make.sh dynbinary ubuntu fails

Makefiles have changed significantly; closing

kshcherban

comment created time in 8 hours

issue closed moby/moby

1.8.2 => 1.10 save-load breaks

Output of docker version:

Client:
 Version:      1.10.3
 API version:  1.22
 Go version:   go1.5.3
 Git commit:   20f81dd
 Built:        Thu Mar 10 15:54:52 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.10.3
 API version:  1.22
 Go version:   go1.5.3
 Git commit:   20f81dd
 Built:        Thu Mar 10 15:54:52 2016
 OS/Arch:      linux/amd64

Output of docker info:

Containers: 9
 Running: 1
 Paused: 0
 Stopped: 8
Images: 47
Server Version: 1.10.3
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 82
 Dirperm1 Supported: false
Execution Driver: native-0.2
Logging Driver: json-file
Plugins: 
 Volume: local
 Network: host bridge null
Kernel Version: 3.13.0-77-generic
Operating System: Ubuntu 14.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 490 MiB

Steps to reproduce the issue:

  1. docker save on a daemon running version 1.8.2
  2. docker load on the machine described above

Describe the results you received:

Error response from daemon: open /var/lib/docker/tmp/docker-import-944635007/bin/json: no such file or directory

Describe the results you expected:

No error

Additional information you deem important (e.g. issue happens only occasionally):

closed time in 8 hours

chx

issue comment moby/moby

1.8.2 => 1.10 save-load breaks

These versions are no longer supported. Let me close this issue.

chx

comment created time in 8 hours

issue closed moby/moby

Proposal: Use graph search to find valid chains of relevant docker build cache hits

Summary: As a build tool, it's not enough to sort build cache items by recency and only pick the most recent image. A cache miss should cause the builder to backtrack (via recursive graph traversal) and try the next most recent matching image, until it exhausts all available image options, at which point it should recursively backtrack to the previous Dockerfile step and repeat the exhaustive search.

Otherwise, the docker build tool will not find the right cache hits even though the cached layers and/or images are technically available. This bug causes a significant number of unnecessary rebuilds and cache misses.

Reproducing the bug: Unexpected cache miss after successful build --no-cache

Scenario: a git repository with two branches. One branch has a Dockerfile with N lines whose last line is "RUN echo beep >> /tmp/beep.txt"; the other branch has a Dockerfile with the same N lines, except that the last one is instead "RUN echo bloop >> /tmp/bloop.txt".

Run docker build four times: once on the first branch, once on the second branch, and then once more on each branch, to show that the docker build cache is working correctly. The first and second builds should have some cache misses, first because the Dockerfile is new and then because the changed Dockerfile line is new, as expected. The final two of the four runs should have only cache hits and never any cache misses. This works fine, as expected.

However, after a successful build --no-cache, a subsequent (regular) "docker build" run on the other branch will hit cache misses. This is unexpected, because valid intermediate images are still available in the docker build cache and still satisfy all of the cache hit rules.

Speculative proposal

It's not enough to sort build cache items by recency and only pick the most recent image. A cache miss should cause the builder to backtrack (via recursive graph traversal) and try the next most recent matching image, until it exhausts all available image options, at which point it should recursively backtrack to the previous Dockerfile step and repeat the exhaustive search.

"Use the most recent image from the build cache" will fail after using --no-cache because "last known most recently matching image" will naturally not have any relevant child images in the build cache, despite there being working prior images in the build cache that could conceivably be used to correctly satisfy the build. The "last known most recently matching image" will be whatever image was recently created during --no-cache, and therefore have none of the children from previous cached builds. Meanwhile the build cache is still present and available and has useful things inside.

Generally, the builder should prefer to select any satisfactory set of cached images, as long as the build is still estimated to be the correct build. Until the builder has a complete plan, it should prefer the plans that have the lowest cumulative cache miss score. Since this requires essentially a graph search algorithm, basic heuristics could be used for how long to grind on the graph search problem. Since fresh builds can take lots of time, it seems reasonable to spend entire seconds working on the graph search problem, or even allow the user to configure how long to search by an API parameter when starting the builder.
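
To make the shape of that search concrete, a minimal sketch of the proposed backtracking; the types and the "most recent first" ordering are assumptions, not daemon code:

package main

import "fmt"

// cachedImage is a hypothetical view of one build-cache entry; children
// are keyed by the Dockerfile instruction that produced them and assumed
// to be sorted most recent first.
type cachedImage struct {
	id       string
	children map[string][]*cachedImage
}

// findPlan returns a chain of cached image IDs satisfying steps. On a
// cache miss deeper in the chain it backtracks and tries the next most
// recent candidate instead of giving up.
func findPlan(node *cachedImage, steps []string) ([]string, bool) {
	if len(steps) == 0 {
		return nil, true
	}
	for _, child := range node.children[steps[0]] {
		if rest, ok := findPlan(child, steps[1:]); ok {
			return append([]string{child.id}, rest...), true
		}
	}
	return nil, false // no chain of cache hits; this suffix must be rebuilt
}

func main() {
	leaf := &cachedImage{id: "c"}
	root := &cachedImage{children: map[string][]*cachedImage{
		"RUN echo beep": {leaf},
	}}
	fmt.Println(findPlan(root, []string{"RUN echo beep"})) // [c] true
}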

Graph search should significantly improve cache hit performance on many CI clusters, saving what I am sure would be billions of dollars of compute time for builds, which I am sure you can all agree should instead be sent to my Bitcoin addresses, that's how this works, right? Thanks.

"Use the most recent image from the build cache" was implemented here: https://github.com/docker/docker/blob/415dd8688618ac2b3872c6a5b5af3afcef2c3e10/daemon/daemon.go#L1363 https://github.com/docker/docker/issues/3375 https://github.com/docker/docker/pull/3417

Plans and dry runs

There should be a way to generate a "docker build plan", which is not yet executed but based on a comparison of the current Dockerfile to the DOCKER_HOST's docker build cache image store. Being able to compare multiple alternative plans would be extremely useful for diagnosing docker build cache problems (including cache misses) or even potential optimizations. It would also be useful to be able to "hint" to the builder that I expect the cache to be already-warm for a particular build request, so that I may receive an error instead of experiencing a fresh build further contaminating the cache...

"Cache invalidation"

For some reason there seems to be a claim floating around (on the docker site) that the docker build cache is invalidated when foregoing the use of a previous intermediate image. However, based on my (short) review of the source code, it seems that the old intermediate images stay around in the cache and are not invalidated. If this is not the case, perhaps it would be useful to make this more clear in the documentation or even within the source code......

Distributed cache warming?

Unfortunately, concurrent builds on the same DOCKER_HOST seem to be pretty slow (see #9656). At the same time, sharing a docker build cache would be very helpful for reducing total build time across a CI cluster of multiple DOCKER_HOSTs. Perhaps if this is not likely, then a tool for DOCKER_HOST build assignment would be helpful, based on looking at the image store on each host, and then deciding which host should be assigned the job, based on which host would be most likely to have the most cache hits for the build job?

Alternatively I would also be interested in exploring the concept of pulling relevant cached intermediates from a registry....

One more bug...

I was originally investigating possible cache bugs because I was experiencing unexpected cache misses on my CI cluster. Unfortunately I am not using --no-cache, so none of this seems to explain the unexpected cache misses I have been experiencing...... If you think image graph cache search would be helpful beyond the after-no-cache unexpected behavior (like maybe it would help with ADD commands or something?), I would be eager to hear your thoughts. Thanks!

Related

"output reason for docker cache invalidation" https://github.com/docker/docker/issues/9294

"unexpected intermediate images are selected after using --no-cache" (I believe that this was reported before "select the most recent image" was implemented in #3375 and #3417.) https://github.com/docker/docker/issues/3199

"faster cache miss detection" https://github.com/docker/docker/pull/16821

closed time in 8 hours

kanzure

issue comment moby/moby

Proposal: Use graph search to find valid chains of relevant docker build cache hits

BuildKit was integrated into Docker in 18.06. I'm closing this issue, but enhancement proposals for caching are still highly appreciated in the BuildKit repo: https://github.com/moby/buildkit/issues

kanzure

comment created time in 8 hours

issue comment moby/moby

Image Graph operations are racey

Still an issue?

cpuguy83

comment created time in 8 hours

issue comment moby/moby

enhancement: allow `+` in tag names

This should be discussed in the OCI Distribution Spec repo if there is still demand for it: https://github.com/opencontainers/distribution-spec

nakedible-p

comment created time in 8 hours

issue closed moby/moby

enhancement: allow `+` in tag names

Currently tags must satisfy the regexp ^[\w][\w.-]{0,127}$. That means that tags allow _, . and - apart from letters and numbers.

Semantic Versioning is quite a common versioning system today, used in NPM and in increasing numbers in other projects, including Docker itself. Semantic Versioning 2.0.0 says:

Build metadata MAY be denoted by appending a plus sign and a series of dot separated identifiers immediately following the patch or pre-release version. Identifiers MUST comprise only ASCII alphanumerics and hyphen [0-9A-Za-z-]. Identifiers MUST NOT be empty. Build metadata SHOULD be ignored when determining version precedence. Thus two versions that differ only in the build metadata, have the same precedence. Examples: 1.0.0-alpha+001, 1.0.0+20130313144700, 1.0.0-beta+exp.sha.5114f85.

This means that Semantic Versioning version numbers can be represented in full by Docker tags, with the exception of the plus sign. Adding the plus sign as an allowed character in tags would allow all semantic versions to be used directly as docker tags.

It is obvious that there are a lot of competing versioning schemes and there are probably a lot of other characters that would be required for some of them as well, so the addition of just the plus sign is probably a judgement call.

However, I wanted to suggest this because of the widespread usage of semantic versioning and let the project make the judgement call.
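
To make the restriction concrete, a quick check against the regexp quoted above:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	tagRe := regexp.MustCompile(`^[\w][\w.-]{0,127}$`)
	fmt.Println(tagRe.MatchString("1.0.0-alpha"))          // true: '-' is allowed
	fmt.Println(tagRe.MatchString("1.0.0+20130313144700")) // false: '+' is not
}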

closed time in 8 hours

nakedible-p

issue closed moby/moby

Docker causes system freeze on Ubuntu 14.10

Starting multiple Docker containers hangs the system. I am not sure what the exact steps are, but I have seen this behaviour several times.

Jan 26 15:57:27 oleg kernel: [257250.221647] device vethf7a6cc6 entered promiscuous mode
Jan 26 15:57:27 oleg kernel: [257250.221822] IPv6: ADDRCONF(NETDEV_UP): vethf7a6cc6: link is not ready
Jan 26 15:57:27 oleg kernel: [257250.271640] IPv6: ADDRCONF(NETDEV_CHANGE): vethf7a6cc6: link becomes ready
Jan 26 15:57:27 oleg kernel: [257250.271692] docker0: port 1(vethf7a6cc6) entered forwarding state
Jan 26 15:57:27 oleg kernel: [257250.271705] docker0: port 1(vethf7a6cc6) entered forwarding state
Jan 26 15:57:28 oleg kernel: [257251.014089] docker0: port 1(vethf7a6cc6) entered disabled state
Jan 26 15:57:28 oleg kernel: [257251.015661] device vethf7a6cc6 left promiscuous mode
Jan 26 15:57:28 oleg kernel: [257251.015677] docker0: port 1(vethf7a6cc6) entered disabled state
Jan 26 15:57:30 oleg kernel: [257252.550674] device veth7707973 entered promiscuous mode
Jan 26 15:57:30 oleg kernel: [257252.551075] IPv6: ADDRCONF(NETDEV_UP): veth7707973: link is not ready
Jan 26 15:57:30 oleg kernel: [257252.598878] IPv6: ADDRCONF(NETDEV_CHANGE): veth7707973: link becomes ready
Jan 26 15:57:30 oleg kernel: [257252.598919] docker0: port 1(veth7707973) entered forwarding state
Jan 26 15:57:30 oleg kernel: [257252.598935] docker0: port 1(veth7707973) entered forwarding state
Jan 26 15:57:45 oleg kernel: [257267.637453] docker0: port 1(veth7707973) entered forwarding state

Here it hangs. Only power-cycling with the power button helps. Even the Magic SysRq keys are not working (https://en.wikipedia.org/wiki/Magic_SysRq_key).

Jan 26 15:58:43 oleg kernel: [ 0.000000] Initializing cgroup subsys cpuset
Jan 26 15:58:43 oleg kernel: [ 0.000000] Initializing cgroup subsys cpu
Jan 26 15:58:43 oleg kernel: [ 0.000000] Initializing cgroup subsys cpuacct
Jan 26 15:58:43 oleg kernel: [ 0.000000] Linux version 3.16.0-29-generic (buildd@tipua) (gcc version 4.9.1 (Ubuntu 4.9.1-16ubuntu6) ) #39-Ubuntu SMP Mon Dec 15 22:27:29 UTC 2014 (Ubuntu 3.16.0-29.39-gen
Jan 26 15:58:43 oleg kernel: [ 0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-3.16.0-29-generic.efi.signed root=/dev/mapper/ubuntu--vg-root ro

I'm not a Linux guru, so let me know where else I should look for logs, dumps, etc.

closed time in 8 hours

relgames

issue comment moby/moby

Docker causes system freeze on Ubuntu 14.10

I'm closing this. If somebody is still hitting this, please open a new issue and also consider contacting the distro's kernel maintainers.

Note that system hang-ups may happen for various reasons.

relgames

comment created time in 8 hours

issue closed moby/moby

Gzip performance

This is how the perf report looks for docker push.

(screenshot: perf report)

Looks like compress/gzip in golang is quite slow. To check this, I wrote gzip and gunzip tools that can work with either compress/gzip or code.google.com/p/vitess/go/cgzip.

gzip.go:

package main

import (
    "compress/gzip"
    "code.google.com/p/vitess/go/cgzip"
    "flag"
    "io"
    "log"
    "os"
)

func main() {
    l := flag.Int("l", -1, "compression level")
    c := flag.Bool("c", false, "use cgzip")
    flag.Parse()

    var w io.WriteCloser
    var err error

    if *c {
        w, err = cgzip.NewWriterLevel(os.Stdout, *l)
    } else {
        w, err = gzip.NewWriterLevel(os.Stdout, *l)
    }

    if err != nil {
        log.Fatal(err)
    }

    if _, err := io.Copy(w, os.Stdin); err != nil {
        log.Fatal(err)
    }
    // Close flushes buffered data and writes the gzip footer.
    if err := w.Close(); err != nil {
        log.Fatal(err)
    }
}

gunzip.go:

package main

import (
    "compress/gzip"
    "code.google.com/p/vitess/go/cgzip"
    "os"
    "io"
    "log"
    "flag"
)

func main() {
    c := flag.Bool("c", false, "use cgzip")
    flag.Parse()

    var r io.Reader
    var err error

    if *c {
        r, err = cgzip.NewReader(os.Stdin)
    } else {
        r, err = gzip.NewReader(os.Stdin)
    }

    if err != nil {
        log.Fatal(err)
    }

    if _, err := io.Copy(os.Stdout, r); err != nil {
        log.Fatal(err)
    }
}

I have a test layer that is 521 MB as a tar and 66 MB as a gzipped tar with the default compression level. This is the final layer of a real production image of ours. Right now it takes 90s+ to push 2 layers to the registry, and this is mostly a CPU-bound issue.

Below are my tests on my MacBook Pro (Core i5 with SSD, so it's not I/O-bound). I ran every command twice and picked the best result.

Gunzip:

cmd                                                  time      slowdown
time cat layer.tar.gz | gunzip > /dev/null           0m1.041s  baseline
time cat layer.tar.gz | ./bin/gunzip > /dev/null     0m4.596s  4.41x
time cat layer.tar.gz | ./bin/gunzip -c > /dev/null  0m1.134s  1.08x

Gzip:

cmd                                             time       slowdown
time cat layer.tar | gzip -c > /dev/null        0m12.567s  baseline
time cat layer.tar | ./bin/gzip > /dev/null     0m25.794s  2.05x
time cat layer.tar | ./bin/gzip -c > /dev/null  0m12.854s  1.02x

Clearly current performance is far from perfect.

This issue is related to #7291 where comments are restricted to collaborators. If docker migrates to cgzip then #9060 is going to improve too.

cc @unclejack

closed time in 8 hours

bobrik

issue comment moby/moby

Gzip performance

Optional support for unpigz (parallel gunzip) was introduced in Docker 18.02: https://github.com/moby/moby/commit/871afbb304422877e683cbafc0ebd0b029b85379

bobrik

comment created time in 8 hours

PR opened containerd/cri

vendor kubernetes 1.17.1

Corresponds to https://github.com/kubernetes/kubernetes/blob/v1.17.1/go.mod

note: k8snet.ChooseBindAddress() was renamed to k8snet.ResolveBindAddress() in https://github.com/kubernetes/kubernetes/commit/afa0b808f873b515c9d58a9ead788972ea7d2533
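
The call-site change is mechanical; a sketch of the rename (arguments illustrative):

-	addr, err := k8snet.ChooseBindAddress(bindAddress)
+	addr, err := k8snet.ResolveBindAddress(bindAddress)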

+3472 -1775

0 comment

65 changed files

pr created time in 8 hours

create branch AkihiroSuda/cri-containerd

branch: vendor-kube1.17.1

created branch time in 9 hours

Pull request review comment containerd/cri

Move to go modules

+module github.com/containerd/cri
+
+go 1.13
+
+require (
+	github.com/BurntSushi/toml v0.3.1
+	github.com/Microsoft/go-winio v0.4.15-0.20190919025122-fc70bd9a86b5
+	github.com/Microsoft/hcsshim v0.8.7-0.20191021170233-d2849cbdb9df
+	github.com/cilium/ebpf v0.0.0-20191203103619-60c3aa43f488 // indirect
+	github.com/containerd/cgroups v0.0.0-20191206154412-fada802a7909
+	github.com/containerd/console v0.0.0-20191219165238-8375c3424e4d // indirect
+	github.com/containerd/containerd v1.3.1-0.20200109164906-0a1f2b40642e
+	github.com/containerd/continuity v0.0.0-20190815185530-f2a389ac0a02
+	github.com/containerd/fifo v0.0.0-20190816180239-bda0ff6ed73c
+	github.com/containerd/go-cni v0.0.0-20190822145629-0d360c50b10b
+	github.com/containerd/go-runc v0.0.0-20191206163734-a5c2862aed5e // indirect
+	github.com/containerd/ttrpc v0.0.0-20190828172938-92c8520ef9f8 // indirect
+	github.com/containerd/typeurl v0.0.0-20190228175220-2a93cfde8c20
+	github.com/containernetworking/plugins v0.7.6
+	github.com/davecgh/go-spew v1.1.1
+	github.com/docker/distribution v2.7.1+incompatible

requested shipping v2.8.0 here: https://github.com/docker/distribution/issues/3085

chenrui333

comment created time in 9 hours

issue opened docker/distribution

request: release v2.8.0

Please consider shipping a new release that go mod can pick up.

The latest release v2.7.1 is too old: https://github.com/containerd/cri/pull/1377#discussion_r369117936

created time in 9 hours

Pull request review comment containerd/cri

Move to go modules

+module github.com/containerd/cri
+
+go 1.13
+
+require (
+	github.com/BurntSushi/toml v0.3.1
+	github.com/Microsoft/go-winio v0.4.15-0.20190919025122-fc70bd9a86b5
+	github.com/Microsoft/hcsshim v0.8.7-0.20191021170233-d2849cbdb9df
+	github.com/cilium/ebpf v0.0.0-20191203103619-60c3aa43f488 // indirect
+	github.com/containerd/cgroups v0.0.0-20191206154412-fada802a7909
+	github.com/containerd/console v0.0.0-20191219165238-8375c3424e4d // indirect
+	github.com/containerd/containerd v1.3.1-0.20200109164906-0a1f2b40642e
+	github.com/containerd/continuity v0.0.0-20190815185530-f2a389ac0a02
+	github.com/containerd/fifo v0.0.0-20190816180239-bda0ff6ed73c
+	github.com/containerd/go-cni v0.0.0-20190822145629-0d360c50b10b
+	github.com/containerd/go-runc v0.0.0-20191206163734-a5c2862aed5e // indirect
+	github.com/containerd/ttrpc v0.0.0-20190828172938-92c8520ef9f8 // indirect
+	github.com/containerd/typeurl v0.0.0-20190228175220-2a93cfde8c20
+	github.com/containernetworking/plugins v0.7.6
+	github.com/davecgh/go-spew v1.1.1
+	github.com/docker/distribution v2.7.1+incompatible

vendor master

chenrui333

comment created time in 9 hours

pull request comment containerd/cri

Move to go modules

Thanks, but I think we should begin by updating the vendored pkgs with vndr

chenrui333

comment created time in 11 hours

push event containerd/cri

Boris Popovschi

commit sha 6b8846cdf8b8c98c1d965313d66bc8489166059a

vendor updated + added cgroupv2 metrics

Signed-off-by: Boris Popovschi <zyqsempai@mail.ru>

view details

Akihiro Suda

commit sha 5e5960f2bc03e6ad768acf575edd0cfe1e27d925

Merge pull request #1376 from Zyqsempai/add-cgroups-v2-metrics

Cgroupv2: Added CPU, Memory metrics

view details

push time in 11 hours

PR merged containerd/cri

Cgroupv2: Added CPU, Memory metrics (labels: ok-to-test, size/XXL)

Related to https://github.com/containerd/containerd/issues/3726

Vendor updated to the latest stable versions of containerd and cgroups. Added cgroupv2 metrics.

Signed-off-by: Boris Popovschi zyqsempai@mail.ru

+4118 -1495

9 comments

74 changed files

Zyqsempai

pr closed time in 11 hours

pull request comment containerd/cri

Cgroupv2: Added CPU, Memory metrics

merging, typo is negligible

Zyqsempai

comment created time in 11 hours

pull request comment opencontainers/runc

Fix MAJ:MIN io.stat parsing order

Could any maintainer restart CI?

Zyqsempai

comment created time in 11 hours

issue closed moby/moby

Kernel panic with docker swarm and mesh routing

Description

Running an application with 2 nodes in a swarm is causing a kernel panic.

Steps to reproduce the issue:

  1. Setup a docker swarm with 1 manager/worker (node A) and 1 worker (node B)
  2. Deploy a web server container and observe the node it runs on
  3. Use a browser to visit the site via the IP of the node not running the container

Describe the results you received:

A kernel panic on the node running the container

Describe the results you expected:

The website to load

Additional information you deem important (e.g. issue happens only occasionally):

If the container is running on node A, using node A's IP, the site runs fine. As soon as you use node B's IP, then node A will kernel panic. The servers do have bonded interfaces and policy based routing (2 separate networks with different gateways on the servers)

Output of docker version:

Client: Docker Engine - Community
 Version:           19.03.1
 API version:       1.40
 Go version:        go1.12.5
 Git commit:        74b1e89
 Built:             Thu Jul 25 21:21:07 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.1
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.5
  Git commit:       74b1e89
  Built:            Thu Jul 25 21:19:36 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.6
  GitCommit:        894b81a4b802e4eb2a91d1ce216b8817763c29fb
 runc:
  Version:          1.0.0-rc8
  GitCommit:        425e105d5a03fabd737a126ad93d62a9eeede87f
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

Output of docker info:

Client:
 Debug Mode: false

Server:
 Containers: 7
  Running: 5
  Paused: 0
  Stopped: 2
 Images: 2
 Server Version: 19.03.1
 Storage Driver: overlay2
  Backing Filesystem: xfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: active
  NodeID: oarmbh1gwome6ylznxkp8841z
  Is Manager: true
  ClusterID: rosesdp4xpkf3r4bj6wnt93ys
  Managers: 1
  Nodes: 2
  Default Address Pool: 10.0.0.0/8
  SubnetSize: 24
  Data Path Port: 4789
  Orchestration:
   Task History Retention Limit: 5
  Raft:
   Snapshot Interval: 10000
   Number of Old Snapshots to Retain: 0
   Heartbeat Tick: 1
   Election Tick: 10
  Dispatcher:
   Heartbeat Period: 5 seconds
  CA Configuration:
   Expiry Duration: 3 months
   Force Rotate: 0
  Autolock Managers: false
  Root Rotation In Progress: false
  Node Address: 10.10.0.1
  Manager Addresses:
   10.10.0.1:2377
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 894b81a4b802e4eb2a91d1ce216b8817763c29fb
 runc version: 425e105d5a03fabd737a126ad93d62a9eeede87f
 init version: fec3683
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 3.10.0-957.27.2.el7.x86_64
 Operating System: CentOS Linux 7 (Core)
 OSType: linux
 Architecture: x86_64
 CPUs: 32
 Total Memory: 62.17GiB
 Name: node-a
 ID: GATF:YY37:DO7J:HPDY:7IJB:I6XW:QJEL:MCUO:KZJ3:L3LX:KN2K:YOE7
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

Additional environment details (AWS, VirtualBox, physical, etc.):

The servers do have bonded interfaces and policy based routing (2 separate networks with different gateways on the servers). NetworkManager service is running.

closed time in 17 hours

MJPA

issue comment moby/moby

Kernel panic with docker swarm and mesh routing

I'm closing this because the bug seems to be on the CentOS side.

MJPA

comment created time in 17 hours

issue closed moby/moby

race condition in pkg/archive DecompressStream

Description

There is a race condition in pkg/archive when using cmd.Start for pigz and xz. The *bufio.Reader could be returned to the pool while the command is still writing to it. The command is wrapped in a CommandContext, so the process will be killed when the context is cancelled; however, this is not instantaneous, so there is a brief window during which the command could still be running after the *bufio.Reader has already been returned to the pool.

wrapReadCloser calls cancel(), and then readBuf.Close() which eventually returns the buffer to the pool:

https://github.com/moby/moby/blob/1d19062b640b66daaf3e6f2246a947aaaf909dec/pkg/archive/archive.go#L179-L180

However, because cmdStream runs cmd.Wait in a goroutine that we never wait on, it is not safe to return the reader to the pool yet. We need to ensure we wait for cmd.Wait to finish!
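
A minimal sketch of the shape of a fix, exposing a done channel that the caller blocks on before recycling the reader; names are illustrative, not the actual patch:

package archive

import (
	"io"
	"os/exec"
)

// cmdStream starts cmd copying from in to the returned pipe, and also
// returns a channel that is closed only after cmd.Wait has returned.
// Callers must receive from done before returning buffers to the pool.
func cmdStream(cmd *exec.Cmd, in io.Reader) (io.ReadCloser, <-chan struct{}, error) {
	done := make(chan struct{})
	pipeR, pipeW := io.Pipe()
	cmd.Stdin = in
	cmd.Stdout = pipeW
	if err := cmd.Start(); err != nil {
		return nil, nil, err
	}
	go func() {
		// Propagate the exit status through the pipe, then signal
		// that the process has fully exited.
		if err := cmd.Wait(); err != nil {
			pipeW.CloseWithError(err)
		} else {
			pipeW.Close()
		}
		close(done)
	}()
	return pipeR, done, nil
}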

Steps to reproduce the issue:

I have written a reproducer at https://github.com/stbenjam/docker-race-reproducer, where you can see the behavior. Check out the repo and run go run main.go.

Describe the results you received:

In the worst case, a panic:

Waiting...
Done
Done
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x4b20b0]

goroutine 34 [running]:
bufio.(*Reader).fill(0xc0000fe300)
	/usr/lib/golang/src/bufio/bufio.go:100 +0xe0
bufio.(*Reader).WriteTo(0xc0000fe300, 0x554bc0, 0xc000118068, 0x7fb786381fb0, 0xc0000fe300, 0x4eb001)
	/usr/lib/golang/src/bufio/bufio.go:486 +0x259
io.copyBuffer(0x554bc0, 0xc000118068, 0x554a40, 0xc0000fe300, 0x0, 0x0, 0x0, 0x0, 0x0, 0xc00010c060)
	/usr/lib/golang/src/io/io.go:384 +0x352
io.Copy(0x554bc0, 0xc000118068, 0x554a40, 0xc0000fe300, 0x0, 0xc00003c7b8, 0x4ebb89)
	/usr/lib/golang/src/io/io.go:364 +0x5a
os/exec.(*Cmd).stdin.func1(0x0, 0x0)
	/usr/lib/golang/src/os/exec/exec.go:234 +0x55
os/exec.(*Cmd).Start.func1(0xc000110160, 0xc00011e120)
	/usr/lib/golang/src/os/exec/exec.go:400 +0x27
created by os/exec.(*Cmd).Start
	/usr/lib/golang/src/os/exec/exec.go:399 +0x5af
exit status 2

Describe the results you expected:

No race condition.

Here's the output of go run -race:

=================
WARNING: DATA RACE
Read at 0x00c000076088 by goroutine 16:
  bufio.(*Reader).writeBuf()
      /usr/lib/golang/src/bufio/bufio.go:510 +0x50
  bufio.(*Reader).WriteTo()
      /usr/lib/golang/src/bufio/bufio.go:468 +0x6a
  io.copyBuffer()
      /usr/lib/golang/src/io/io.go:384 +0x46a
  io.Copy()
      /usr/lib/golang/src/io/io.go:364 +0x74
  os/exec.(*Cmd).stdin.func1()
      /usr/lib/golang/src/os/exec/exec.go:234 +0x8a
  os/exec.(*Cmd).Start.func1()
      /usr/lib/golang/src/os/exec/exec.go:400 +0x34

Previous write at 0x00c000076088 by goroutine 6:
  github.com/stbenjam/docker-race-reproducer/vendor/github.com/docker/docker/pkg/pools.(*BufioReaderPool).Put()
      /usr/lib/golang/src/bufio/bufio.go:75 +0xaf
  github.com/stbenjam/docker-race-reproducer/vendor/github.com/docker/docker/pkg/pools.(*BufioReaderPool).NewReadCloserWrapper.func1()
      /home/stbenjam/go/src/github.com/stbenjam/docker-race-reproducer/vendor/github.com/docker/docker/pkg/pools/pools.go:93 +0x98
  github.com/stbenjam/docker-race-reproducer/vendor/github.com/docker/docker/pkg/ioutils.(*ReadCloserWrapper).Close()
      /home/stbenjam/go/src/github.com/stbenjam/docker-race-reproducer/vendor/github.com/docker/docker/pkg/ioutils/readers.go:20 +0x4c
  github.com/stbenjam/docker-race-reproducer/vendor/github.com/docker/docker/pkg/archive.wrapReadCloser.func1()
      /home/stbenjam/go/src/github.com/stbenjam/docker-race-reproducer/vendor/github.com/docker/docker/pkg/archive/archive.go:180 +0x67
  github.com/stbenjam/docker-race-reproducer/vendor/github.com/docker/docker/pkg/ioutils.(*ReadCloserWrapper).Close()
      /home/stbenjam/go/src/github.com/stbenjam/docker-race-reproducer/vendor/github.com/docker/docker/pkg/ioutils/readers.go:20 +0x4c
  main.decompress()
      /home/stbenjam/go/src/github.com/stbenjam/docker-race-reproducer/main.go:33 +0xdb

Goroutine 16 (running) created at:
  os/exec.(*Cmd).Start()
      /usr/lib/golang/src/os/exec/exec.go:399 +0x9bf
  github.com/stbenjam/docker-race-reproducer/vendor/github.com/docker/docker/pkg/archive.cmdStream()
      /home/stbenjam/go/src/github.com/stbenjam/docker-race-reproducer/vendor/github.com/docker/docker/pkg/archive/archive.go:1217 +0x33b
  github.com/stbenjam/docker-race-reproducer/vendor/github.com/docker/docker/pkg/archive.gzDecompress()
      /home/stbenjam/go/src/github.com/stbenjam/docker-race-reproducer/vendor/github.com/docker/docker/pkg/archive/archive.go:174 +0x17a
  github.com/stbenjam/docker-race-reproducer/vendor/github.com/docker/docker/pkg/archive.DecompressStream()
      /home/stbenjam/go/src/github.com/stbenjam/docker-race-reproducer/vendor/github.com/docker/docker/pkg/archive/archive.go:207 +0x572
  main.decompress()
      /home/stbenjam/go/src/github.com/stbenjam/docker-race-reproducer/main.go:29 +0x46

Goroutine 6 (finished) created at:
  main.main()
      /home/stbenjam/go/src/github.com/stbenjam/docker-race-reproducer/main.go:21 +0xb6
==================

closed time in 17 hours

stbenjam

issue closed moby/moby

Kernel panic (3.16) on debian jessie when running docker containers with healthchecks


BUG REPORT INFORMATION

Description

Kernel panic (3.16) on debian jessie when running docker containers with healthchecks.

Steps to reproduce the issue:

  1. Install latest debian jessie 8.7, apt update and install docker following the official instructions.
  2. pull solr:alpine image and build a new one with healthchecks using a Dockerfile.
  3. deploy 20 - 25 containers
  4. server crash / kernel panic in about 1h max

Describe the results you received: kernel panic, server crashes.

Describe the results you expected: no crash

Additional information you deem important (e.g. issue happens only occasionally):

Output of docker version:

$ docker version
Client:
 Version:      1.13.0
 API version:  1.25
 Go version:   go1.7.3
 Git commit:   49bf474
 Built:        Tue Jan 17 09:44:08 2017
 OS/Arch:      linux/amd64

Server:
 Version:      1.13.0
 API version:  1.25 (minimum version 1.12)
 Go version:   go1.7.3
 Git commit:   49bf474
 Built:        Tue Jan 17 09:44:08 2017
 OS/Arch:      linux/amd64
 Experimental: false

Output of docker info:

$ docker info
Containers: 25
 Running: 3
 Paused: 0
 Stopped: 22
Images: 5
Server Version: 1.13.0
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 94
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins: 
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 03e5862ec0d8d3b3f750e19fca3ee367e13c090e
runc version: 2f7393a47307a16f8cee44a37b262e8b81021e3e
init version: 949e6fa
Kernel Version: 3.16.0-4-amd64
Operating System: Debian GNU/Linux 8 (jessie)
OSType: linux
Architecture: x86_64
CPUs: 5
Total Memory: 6.402 GiB
Name: deb00
ID: RE6C:VVHI:KH5X:ANQK:NCAC:APCP:JD47:OUBG:C4LZ:MGUR:AMPD:7FIH
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
 File Descriptors: 33
 Goroutines: 34
 System Time: 2017-01-24T04:56:35.547683852-05:00
 EventsListeners: 0
Registry: https://index.docker.io/v1/
WARNING: No memory limit support
WARNING: No swap limit support
WARNING: No kernel memory limit support
WARNING: No oom kill disable support
WARNING: No cpu cfs quota support
WARNING: No cpu cfs period support
Experimental: false
Insecure Registries:
 my-registry:41238
 127.0.0.0/8
Live Restore Enabled: false

Additional environment details (AWS, VirtualBox, physical, etc.):

I was able to reproduce with both the 1.12.5 and 1.13.0 docker versions on physical servers and VirtualBox VMs, all running jessie with the 3.16 kernel. Kernel panic screenshots are available. I was able to reproduce with 1m healthcheck intervals as well.

(screenshot: kernel panic)

$ uname -a
Linux deb00 3.16.0-4-amd64 #1 SMP Debian 3.16.39-1 (2016-12-30) x86_64 GNU/Linux
$ cat /etc/debian_version 
8.7
$ uname -a
Linux deb00 3.16.0-4-amd64 #1 SMP Debian 3.16.39-1 (2016-12-30) x86_64 GNU/Linux

CPU/RAM

$ free -m
             total       used       free     shared    buffers     cached
Mem:          6555       5045       1509         10         18        390
-/+ buffers/cache:       4636       1919
Swap:            0          0          0
$ nproc
5

systemd config (running in Debug mode with a private registry enabled)

$ cat /lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target docker.socket firewalld.service
Requires=docker.socket

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd -D -H fd:// --insecure-registry=my-registry:41238
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=1048576
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

Dockerfile and image build

$ cat df/Dockerfile
FROM solr:alpine
USER root
RUN apk --no-cache add curl
USER $SOLR_USER
HEALTHCHECK --interval=10s --timeout=30s --retries=3 \
  CMD curl -sb -H "Accept: application/json" "http://localhost:8983/solr/" | grep "solr" || exit 1


$ cd df/
$ docker build -t solr:alpine_foo .
$ cd ~
$ docker images | grep alpine_foo
solr                                        alpine_foo          de916b07eb1f        3 minutes ago       286 MB

deployment of 25 containers

$ for i in `seq 1 25`; do docker run --name=solr$i -d -P solr:alpine_foo solr-create -c mycore; done

$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                             PORTS                     NAMES
0e2d4127b121        solr:alpine_foo     "docker-entrypoint..."   2 seconds ago       Up 1 second (health: starting)     0.0.0.0:32792->8983/tcp   solr25
d6e204e16e1e        solr:alpine_foo     "docker-entrypoint..."   5 seconds ago       Up 4 seconds (health: starting)    0.0.0.0:32791->8983/tcp   solr24
f27d85c6d85e        solr:alpine_foo     "docker-entrypoint..."   7 seconds ago       Up 6 seconds (health: starting)    0.0.0.0:32790->8983/tcp   solr23
edb99c47dce4        solr:alpine_foo     "docker-entrypoint..."   10 seconds ago      Up 10 seconds (health: starting)   0.0.0.0:32789->8983/tcp   solr22
6afe1b850284        solr:alpine_foo     "docker-entrypoint..."   23 seconds ago      Up 22 seconds (healthy)            0.0.0.0:32788->8983/tcp   solr21
c62ad83e267e        solr:alpine_foo     "docker-entrypoint..."   51 minutes ago      Up 51 minutes (healthy)            0.0.0.0:32787->8983/tcp   solr20
785a185a0fcc        solr:alpine_foo     "docker-entrypoint..."   51 minutes ago      Up 51 minutes (healthy)            0.0.0.0:32786->8983/tcp   solr19
6d5f0613b87a        solr:alpine_foo     "docker-entrypoint..."   51 minutes ago      Up 51 minutes (healthy)            0.0.0.0:32785->8983/tcp   solr18
8956d70c7eba        solr:alpine_foo     "docker-entrypoint..."   51 minutes ago      Up 51 minutes (healthy)            0.0.0.0:32784->8983/tcp   solr17
02a98144aa09        solr:alpine_foo     "docker-entrypoint..."   51 minutes ago      Up 51 minutes (healthy)            0.0.0.0:32783->8983/tcp   solr16
16b5de44ba96        solr:alpine_foo     "docker-entrypoint..."   51 minutes ago      Up 51 minutes (healthy)            0.0.0.0:32782->8983/tcp   solr15
d65215e558a5        solr:alpine_foo     "docker-entrypoint..."   51 minutes ago      Up 51 minutes (healthy)            0.0.0.0:32781->8983/tcp   solr14
e625c1371df2        solr:alpine_foo     "docker-entrypoint..."   51 minutes ago      Up 51 minutes (healthy)            0.0.0.0:32780->8983/tcp   solr13
73372f71b447        solr:alpine_foo     "docker-entrypoint..."   51 minutes ago      Up 51 minutes (healthy)            0.0.0.0:32779->8983/tcp   solr12
f5972ccf1e91        solr:alpine_foo     "docker-entrypoint..."   51 minutes ago      Up 51 minutes (healthy)            0.0.0.0:32778->8983/tcp   solr11
4ecc7b1f77dd        solr:alpine_foo     "docker-entrypoint..."   51 minutes ago      Up 51 minutes (healthy)            0.0.0.0:32777->8983/tcp   solr10
d574d528446b        solr:alpine_foo     "docker-entrypoint..."   51 minutes ago      Up 51 minutes (healthy)            0.0.0.0:32776->8983/tcp   solr9
54042f7bbb25        solr:alpine_foo     "docker-entrypoint..."   51 minutes ago      Up 51 minutes (healthy)            0.0.0.0:32775->8983/tcp   solr8
c91e7be83158        solr:alpine_foo     "docker-entrypoint..."   51 minutes ago      Up 51 minutes (healthy)            0.0.0.0:32774->8983/tcp   solr7
5c1dc6ef2984        solr:alpine_foo     "docker-entrypoint..."   51 minutes ago      Up 51 minutes (healthy)            0.0.0.0:32773->8983/tcp   solr6
ba01f507100b        solr:alpine_foo     "docker-entrypoint..."   51 minutes ago      Up 51 minutes (healthy)            0.0.0.0:32772->8983/tcp   solr5
df9dd6f20d85        solr:alpine_foo     "docker-entrypoint..."   51 minutes ago      Up 51 minutes (healthy)            0.0.0.0:32771->8983/tcp   solr4
c6efe430c12c        solr:alpine_foo     "docker-entrypoint..."   51 minutes ago      Up 51 minutes (healthy)            0.0.0.0:32770->8983/tcp   solr3
466711c8bb38        solr:alpine_foo     "docker-entrypoint..."   51 minutes ago      Up 51 minutes (healthy)            0.0.0.0:32769->8983/tcp   solr2
9a72d27349b9        solr:alpine_foo     "docker-entrypoint..."   51 minutes ago      Up 51 minutes (healthy)            0.0.0.0:32768->8983/tcp   solr1

kern.log log

root@deb00:~# tail -f /var/log/kern.log 
Jan 24 05:14:13 deb00 kernel: [61066.239971] docker0: port 3(veth1f50700) entered disabled state
Jan 24 05:14:13 deb00 kernel: [61066.295894] docker0: port 3(veth1f50700) entered disabled state
Jan 24 05:14:13 deb00 kernel: [61066.296537] device veth1f50700 left promiscuous mode
Jan 24 05:14:13 deb00 kernel: [61066.296543] docker0: port 3(veth1f50700) entered disabled state
Jan 24 05:14:13 deb00 kernel: [61066.348712] docker0: port 1(veth7983bb8) entered disabled state
Jan 24 05:14:13 deb00 kernel: [61066.349502] device veth7983bb8 left promiscuous mode
Jan 24 05:14:13 deb00 kernel: [61066.349509] docker0: port 1(veth7983bb8) entered disabled state
Jan 24 05:14:13 deb00 kernel: [61066.451630] docker0: port 2(vethed02b3b) entered disabled state
Jan 24 05:14:13 deb00 kernel: [61066.452480] device vethed02b3b left promiscuous mode
Jan 24 05:14:13 deb00 kernel: [61066.452486] docker0: port 2(vethed02b3b) entered disabled state
 
Jan 24 06:05:06 deb00 kernel: [64119.060363] aufs au_opts_verify:1570:dockerd[7742]: dirperm1 breaks the protection by the permission bits on the lower branch
Jan 24 06:05:06 deb00 kernel: [64119.106266] aufs au_opts_verify:1570:dockerd[7742]: dirperm1 breaks the protection by the permission bits on the lower branch
Jan 24 06:05:06 deb00 kernel: [64119.138685] aufs au_opts_verify:1570:dockerd[5590]: dirperm1 breaks the protection by the permission bits on the lower branch
Jan 24 06:05:06 deb00 kernel: [64119.155874] device vethb111342 entered promiscuous mode
Jan 24 06:05:06 deb00 kernel: [64119.155903] IPv6: ADDRCONF(NETDEV_UP): vethb111342: link is not ready
Jan 24 06:05:06 deb00 kernel: [64119.155905] docker0: port 1(vethb111342) entered forwarding state
Jan 24 06:05:06 deb00 kernel: [64119.155907] docker0: port 1(vethb111342) entered forwarding state
Jan 24 06:05:06 deb00 kernel: [64119.156296] docker0: port 1(vethb111342) entered disabled state
Jan 24 06:05:06 deb00 kernel: [64119.915571] IPv6: ADDRCONF(NETDEV_CHANGE): vethb111342: link becomes ready
Jan 24 06:05:06 deb00 kernel: [64119.915591] docker0: port 1(vethb111342) entered forwarding state
Jan 24 06:05:06 deb00 kernel: [64119.915597] docker0: port 1(vethb111342) entered forwarding state
Jan 24 06:05:19 deb00 kernel: [64132.101267] aufs au_opts_verify:1570:dockerd[22281]: dirperm1 breaks the protection by the permission bits on the lower branch
Jan 24 06:05:19 deb00 kernel: [64132.132114] aufs au_opts_verify:1570:dockerd[22281]: dirperm1 breaks the protection by the permission bits on the lower branch
Jan 24 06:05:19 deb00 kernel: [64132.186436] aufs au_opts_verify:1570:dockerd[13684]: dirperm1 breaks the protection by the permission bits on the lower branch
Jan 24 06:05:19 deb00 kernel: [64132.187419] device vethc70fcda entered promiscuous mode
Jan 24 06:05:19 deb00 kernel: [64132.187448] IPv6: ADDRCONF(NETDEV_UP): vethc70fcda: link is not ready
Jan 24 06:05:19 deb00 kernel: [64132.187450] docker0: port 2(vethc70fcda) entered forwarding state
Jan 24 06:05:19 deb00 kernel: [64132.187453] docker0: port 2(vethc70fcda) entered forwarding state
Jan 24 06:05:19 deb00 kernel: [64132.188356] docker0: port 2(vethc70fcda) entered disabled state
Jan 24 06:05:19 deb00 kernel: [64132.415258] IPv6: ADDRCONF(NETDEV_CHANGE): vethc70fcda: link becomes ready
Jan 24 06:05:19 deb00 kernel: [64132.415282] docker0: port 2(vethc70fcda) entered forwarding state
Jan 24 06:05:19 deb00 kernel: [64132.415289] docker0: port 2(vethc70fcda) entered forwarding state
Jan 24 06:05:21 deb00 kernel: [64134.929534] docker0: port 1(vethb111342) entered forwarding state
Jan 24 06:05:22 deb00 kernel: [64135.107574] aufs au_opts_verify:1570:dockerd[13684]: dirperm1 breaks the protection by the permission bits on the lower branch
Jan 24 06:05:22 deb00 kernel: [64135.154440] aufs au_opts_verify:1570:dockerd[13684]: dirperm1 breaks the protection by the permission bits on the lower branch
Jan 24 06:05:22 deb00 kernel: [64135.214953] aufs au_opts_verify:1570:dockerd[22281]: dirperm1 breaks the protection by the permission bits on the lower branch
Jan 24 06:05:22 deb00 kernel: [64135.227971] device vethec89273 entered promiscuous mode
Jan 24 06:05:22 deb00 kernel: [64135.228005] IPv6: ADDRCONF(NETDEV_UP): vethec89273: link is not ready
Jan 24 06:05:22 deb00 kernel: [64135.228007] docker0: port 3(vethec89273) entered forwarding state
Jan 24 06:05:22 deb00 kernel: [64135.228011] docker0: port 3(vethec89273) entered forwarding state
Jan 24 06:05:22 deb00 kernel: [64135.228691] docker0: port 3(vethec89273) entered disabled state
Jan 24 06:05:22 deb00 kernel: [64135.439325] IPv6: ADDRCONF(NETDEV_CHANGE): vethec89273: link becomes ready
Jan 24 06:05:22 deb00 kernel: [64135.439350] docker0: port 3(vethec89273) entered forwarding state
Jan 24 06:05:22 deb00 kernel: [64135.439356] docker0: port 3(vethec89273) entered forwarding state
Jan 24 06:05:24 deb00 kernel: [64137.875979] aufs au_opts_verify:1570:dockerd[798]: dirperm1 breaks the protection by the permission bits on the lower branch
Jan 24 06:05:24 deb00 kernel: [64137.904122] aufs au_opts_verify:1570:dockerd[798]: dirperm1 breaks the protection by the permission bits on the lower branch
Jan 24 06:05:24 deb00 kernel: [64137.943760] aufs au_opts_verify:1570:dockerd[798]: dirperm1 breaks the protection by the permission bits on the lower branch
Jan 24 06:05:24 deb00 kernel: [64137.944868] device veth0e73444 entered promiscuous mode
Jan 24 06:05:24 deb00 kernel: [64137.944907] IPv6: ADDRCONF(NETDEV_UP): veth0e73444: link is not ready
Jan 24 06:05:24 deb00 kernel: [64137.944910] docker0: port 24(veth0e73444) entered forwarding state
Jan 24 06:05:24 deb00 kernel: [64137.944914] docker0: port 24(veth0e73444) entered forwarding state
Jan 24 06:05:24 deb00 kernel: [64137.945418] docker0: port 24(veth0e73444) entered disabled state
Jan 24 06:05:25 deb00 kernel: [64138.124250] IPv6: ADDRCONF(NETDEV_CHANGE): veth0e73444: link becomes ready
Jan 24 06:05:25 deb00 kernel: [64138.124273] docker0: port 24(veth0e73444) entered forwarding state
Jan 24 06:05:25 deb00 kernel: [64138.124279] docker0: port 24(veth0e73444) entered forwarding state
Jan 24 06:05:27 deb00 kernel: [64140.469756] aufs au_opts_verify:1570:dockerd[31573]: dirperm1 breaks the protection by the permission bits on the lower branch
Jan 24 06:05:27 deb00 kernel: [64140.528564] aufs au_opts_verify:1570:dockerd[31573]: dirperm1 breaks the protection by the permission bits on the lower branch
Jan 24 06:05:27 deb00 kernel: [64140.580007] aufs au_opts_verify:1570:dockerd[14822]: dirperm1 breaks the protection by the permission bits on the lower branch
Jan 24 06:05:27 deb00 kernel: [64140.580961] device veth4e45fad entered promiscuous mode
Jan 24 06:05:27 deb00 kernel: [64140.581001] IPv6: ADDRCONF(NETDEV_UP): veth4e45fad: link is not ready
Jan 24 06:05:27 deb00 kernel: [64140.581004] docker0: port 25(veth4e45fad) entered forwarding state
Jan 24 06:05:27 deb00 kernel: [64140.581008] docker0: port 25(veth4e45fad) entered forwarding state
Jan 24 06:05:27 deb00 kernel: [64140.583610] docker0: port 25(veth4e45fad) entered disabled state
Jan 24 06:05:27 deb00 kernel: [64140.799238] IPv6: ADDRCONF(NETDEV_CHANGE): veth4e45fad: link becomes ready
Jan 24 06:05:27 deb00 kernel: [64140.799281] docker0: port 25(veth4e45fad) entered forwarding state
Jan 24 06:05:27 deb00 kernel: [64140.799288] docker0: port 25(veth4e45fad) entered forwarding state
Jan 24 06:05:34 deb00 kernel: [64147.470937] docker0: port 2(vethc70fcda) entered forwarding state
Jan 24 06:05:37 deb00 kernel: [64150.478787] docker0: port 3(vethec89273) entered forwarding state
Jan 24 06:05:40 deb00 kernel: [64153.166907] docker0: port 24(veth0e73444) entered forwarding state
Jan 24 06:05:42 deb00 kernel: [64155.854926] docker0: port 25(veth4e45fad) entered forwarding state

journalctl docker.service log

Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.687202920-05:00" level=debug msg="Health check for container f27d85c6d85ecb5d4b4da33d2f80db6a0c1e6e25ff47322a9b27010d8286874a done (exitCode=0)"
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.826251206-05:00" level=debug msg="libcontainerd: received containerd event: &types.Event{Type:\"start-process\", Id:\"5c1dc6ef2984b779b2908e65227a7b08340316a8d885c46af3026ce2b33a60fd\", Status:0x0, Pid:\"75cdbe367c5d4a39bcc36abe55d2075bd0e53c584b7ade767b541ac63a91af78\", Timestamp:(*timestamp.Timestamp)(0xc42258b540)}"
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.826306190-05:00" level=debug msg="libcontainerd: event unhandled: type:\"start-process\" id:\"5c1dc6ef2984b779b2908e65227a7b08340316a8d885c46af3026ce2b33a60fd\" pid:\"75cdbe367c5d4a39bcc36abe55d2075bd0e53c584b7ade767b541ac63a91af78\" timestamp:<seconds:1485256066 nanos:825781474 > "
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.923630832-05:00" level=debug msg="containerd: process exited" id=c6efe430c12cb26df844ce89684fdf635094cf1d7753f47ef144e5c80d7a3961 pid=3acabb3e02b3a94208f0ad82ded31e35cf493d900131060242f3aad275b5eb37 status=0 systemPid=10874
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.923768909-05:00" level=debug msg="containerd: process exited" id=466711c8bb38384e2ded29ee65c74745938518bc3d154a36ea68dc810743701f pid=1d3c900a7bf1fb131551eb42df4946fa9aa8f53f9fa56c575bd9cdd34a1c4b2a status=0 systemPid=10899
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.923870866-05:00" level=debug msg="containerd: process exited" id=e625c1371df27b6b584aa6a89246dd317ae7284b63ee7a0f13b221cec9ebdf0e pid=1dcdfc1504b96d067b1709ebb6f5fc7cd0ce20bd9beeb247bbde1c79df3b8d3b status=0 systemPid=10924
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.923955838-05:00" level=debug msg="containerd: process exited" id=6d5f0613b87a0a6b0d9287244d060d465c4e8fc58b1f8e5e939897dfc1a02331 pid=da793799a6f09b2c08746d5a9541150fcd46655bf92e016e0cf1c845d0097a26 status=0 systemPid=10952
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.924061267-05:00" level=debug msg="containerd: process exited" id=f5972ccf1e91e227bca28daaf306a4428a352ff20f22e37917bd478e475c0474 pid=1d4d89df42b2727d75e2806049aaff4511cd05aa9bed9164e2c3c8fb912c9965 status=0 systemPid=10983
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.924145329-05:00" level=debug msg="containerd: process exited" id=d574d528446b370208a1a7bb6feb9c68c3286cbb5e709b5f4bc8ec0c0687224e pid=302e71450682dd715058183d75e921366cac12ef73163a275b579d5210b3217b status=0 systemPid=11009
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.924232799-05:00" level=debug msg="containerd: process exited" id=54042f7bbb25861e2c185fb1f688df3e387f6b3267033b3c7009795e0518ab65 pid=57fd314c8c3ca9014470a5f2fad63218dcd81382953029175f63060edec5b429 status=0 systemPid=11035
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.924313674-05:00" level=debug msg="containerd: process exited" id=4ecc7b1f77dde35cdc3e4f2b66af227a08d7002de55ed969c1fe73a1771e5e13 pid=44d26c7e44b3ba2d35e4e4917e1580279af80b76d770de13c8b52bb36f3cc712 status=0 systemPid=11060
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.92439678-05:00" level=debug msg="containerd: process exited" id=5c1dc6ef2984b779b2908e65227a7b08340316a8d885c46af3026ce2b33a60fd pid=75cdbe367c5d4a39bcc36abe55d2075bd0e53c584b7ade767b541ac63a91af78 status=0 systemPid=11087
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.925035538-05:00" level=debug msg="libcontainerd: received containerd event: &types.Event{Type:\"start-process\", Id:\"02a98144aa096b066ec8208cf20e8b73f97a12af117f8e71b0f009c3eb156803\", Status:0x0, Pid:\"d93650074d3bc447a15035e8b88b325187fad5491941d82eaa5c21daf76f470d\", Timestamp:(*timestamp.Timestamp)(0xc42258b780)}"
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.925082085-05:00" level=debug msg="libcontainerd: event unhandled: type:\"start-process\" id:\"02a98144aa096b066ec8208cf20e8b73f97a12af117f8e71b0f009c3eb156803\" pid:\"d93650074d3bc447a15035e8b88b325187fad5491941d82eaa5c21daf76f470d\" timestamp:<seconds:1485256066 nanos:923592156 > "
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.925134521-05:00" level=debug msg="libcontainerd: received containerd event: &types.Event{Type:\"exit\", Id:\"5c1dc6ef2984b779b2908e65227a7b08340316a8d885c46af3026ce2b33a60fd\", Status:0x0, Pid:\"75cdbe367c5d4a39bcc36abe55d2075bd0e53c584b7ade767b541ac63a91af78\", Timestamp:(*timestamp.Timestamp)(0xc42258b980)}"
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.925198780-05:00" level=debug msg="libcontainerd: received containerd event: &types.Event{Type:\"exit\", Id:\"c6efe430c12cb26df844ce89684fdf635094cf1d7753f47ef144e5c80d7a3961\", Status:0x0, Pid:\"3acabb3e02b3a94208f0ad82ded31e35cf493d900131060242f3aad275b5eb37\", Timestamp:(*timestamp.Timestamp)(0xc42258ba80)}"
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.925245670-05:00" level=debug msg="libcontainerd: received containerd event: &types.Event{Type:\"exit\", Id:\"466711c8bb38384e2ded29ee65c74745938518bc3d154a36ea68dc810743701f\", Status:0x0, Pid:\"1d3c900a7bf1fb131551eb42df4946fa9aa8f53f9fa56c575bd9cdd34a1c4b2a\", Timestamp:(*timestamp.Timestamp)(0xc42258bb80)}"
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.925285513-05:00" level=debug msg="libcontainerd: received containerd event: &types.Event{Type:\"exit\", Id:\"e625c1371df27b6b584aa6a89246dd317ae7284b63ee7a0f13b221cec9ebdf0e\", Status:0x0, Pid:\"1dcdfc1504b96d067b1709ebb6f5fc7cd0ce20bd9beeb247bbde1c79df3b8d3b\", Timestamp:(*timestamp.Timestamp)(0xc42258bc80)}"
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.925336801-05:00" level=debug msg="libcontainerd: received containerd event: &types.Event{Type:\"exit\", Id:\"6d5f0613b87a0a6b0d9287244d060d465c4e8fc58b1f8e5e939897dfc1a02331\", Status:0x0, Pid:\"da793799a6f09b2c08746d5a9541150fcd46655bf92e016e0cf1c845d0097a26\", Timestamp:(*timestamp.Timestamp)(0xc42258bd80)}"
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.925389067-05:00" level=debug msg="libcontainerd: received containerd event: &types.Event{Type:\"exit\", Id:\"f5972ccf1e91e227bca28daaf306a4428a352ff20f22e37917bd478e475c0474\", Status:0x0, Pid:\"1d4d89df42b2727d75e2806049aaff4511cd05aa9bed9164e2c3c8fb912c9965\", Timestamp:(*timestamp.Timestamp)(0xc42258be80)}"
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.925453559-05:00" level=debug msg="libcontainerd: received containerd event: &types.Event{Type:\"exit\", Id:\"d574d528446b370208a1a7bb6feb9c68c3286cbb5e709b5f4bc8ec0c0687224e\", Status:0x0, Pid:\"302e71450682dd715058183d75e921366cac12ef73163a275b579d5210b3217b\", Timestamp:(*timestamp.Timestamp)(0xc42258bf90)}"
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.925493496-05:00" level=debug msg="libcontainerd: received containerd event: &types.Event{Type:\"exit\", Id:\"54042f7bbb25861e2c185fb1f688df3e387f6b3267033b3c7009795e0518ab65\", Status:0x0, Pid:\"57fd314c8c3ca9014470a5f2fad63218dcd81382953029175f63060edec5b429\", Timestamp:(*timestamp.Timestamp)(0xc422530090)}"
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.925560423-05:00" level=debug msg="libcontainerd: received containerd event: &types.Event{Type:\"exit\", Id:\"4ecc7b1f77dde35cdc3e4f2b66af227a08d7002de55ed969c1fe73a1771e5e13\", Status:0x0, Pid:\"44d26c7e44b3ba2d35e4e4917e1580279af80b76d770de13c8b52bb36f3cc712\", Timestamp:(*timestamp.Timestamp)(0xc422530190)}"
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.925628581-05:00" level=debug msg="attach: stderr: end"
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.925651890-05:00" level=debug msg="attach: stderr: end"
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.925672400-05:00" level=debug msg="attach: stderr: end"
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.925688682-05:00" level=debug msg="attach: stderr: end"
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.925702981-05:00" level=debug msg="attach: stderr: end"
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.925714115-05:00" level=debug msg="attach: stderr: end"
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.925725705-05:00" level=debug msg="attach: stderr: end"
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.925741190-05:00" level=debug msg="attach: stderr: end"
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.925760157-05:00" level=debug msg="attach: stderr: end"
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.925767679-05:00" level=debug msg="attach: stdout: end"
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.925783010-05:00" level=debug msg="Health check for container 4ecc7b1f77dde35cdc3e4f2b66af227a08d7002de55ed969c1fe73a1771e5e13 done (exitCode=0)"
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.925798701-05:00" level=debug msg="attach: stdout: end"
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.925813227-05:00" level=debug msg="Health check for container 5c1dc6ef2984b779b2908e65227a7b08340316a8d885c46af3026ce2b33a60fd done (exitCode=0)"
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.925826147-05:00" level=debug msg="attach: stdout: end"
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.925840091-05:00" level=debug msg="Health check for container c6efe430c12cb26df844ce89684fdf635094cf1d7753f47ef144e5c80d7a3961 done (exitCode=0)"
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.925856902-05:00" level=debug msg="attach: stdout: end"
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.925870755-05:00" level=debug msg="Health check for container 466711c8bb38384e2ded29ee65c74745938518bc3d154a36ea68dc810743701f done (exitCode=0)"
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.925883904-05:00" level=debug msg="attach: stdout: end"
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.925895322-05:00" level=debug msg="Health check for container e625c1371df27b6b584aa6a89246dd317ae7284b63ee7a0f13b221cec9ebdf0e done (exitCode=0)"
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.925904500-05:00" level=debug msg="attach: stdout: end"
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.925914345-05:00" level=debug msg="Health check for container 6d5f0613b87a0a6b0d9287244d060d465c4e8fc58b1f8e5e939897dfc1a02331 done (exitCode=0)"
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.925923895-05:00" level=debug msg="attach: stdout: end"
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.925932839-05:00" level=debug msg="Health check for container f5972ccf1e91e227bca28daaf306a4428a352ff20f22e37917bd478e475c0474 done (exitCode=0)"
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.925941144-05:00" level=debug msg="attach: stdout: end"
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.925951870-05:00" level=debug msg="Health check for container d574d528446b370208a1a7bb6feb9c68c3286cbb5e709b5f4bc8ec0c0687224e done (exitCode=0)"
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.925964701-05:00" level=debug msg="attach: stdout: end"
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.925976209-05:00" level=debug msg="Health check for container 54042f7bbb25861e2c185fb1f688df3e387f6b3267033b3c7009795e0518ab65 done (exitCode=0)"
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.931357764-05:00" level=debug msg="containerd: process exited" id=02a98144aa096b066ec8208cf20e8b73f97a12af117f8e71b0f009c3eb156803 pid=d93650074d3bc447a15035e8b88b325187fad5491941d82eaa5c21daf76f470d status=0 systemPid=11113
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.931753083-05:00" level=debug msg="libcontainerd: received containerd event: &types.Event{Type:\"exit\", Id:\"02a98144aa096b066ec8208cf20e8b73f97a12af117f8e71b0f009c3eb156803\", Status:0x0, Pid:\"d93650074d3bc447a15035e8b88b325187fad5491941d82eaa5c21daf76f470d\", Timestamp:(*timestamp.Timestamp)(0xc42278a850)}"
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.931882110-05:00" level=debug msg="attach: stderr: end"
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.931892183-05:00" level=debug msg="attach: stdout: end"
Jan 24 06:07:46 deb00 dockerd[510]: time="2017-01-24T06:07:46.931907482-05:00" level=debug msg="Health check for container 02a98144aa096b066ec8208cf20e8b73f97a12af117f8e71b0f009c3eb156803 done (exitCode=0)"
Jan 24 06:07:49 deb00 dockerd[510]: time="2017-01-24T06:07:49.327408426-05:00" level=debug msg="Running health check for container 9a72d27349b98d703be01c0824cd79651b4ba264a296bf795b76aeb13b2a7819 ..."
Jan 24 06:07:49 deb00 dockerd[510]: time="2017-01-24T06:07:49.327477546-05:00" level=debug msg="starting exec command 839b336433a84b11c21306cf91b8e1106e93322fe760ab409790a0c34d016b13 in container 9a72d27349b98d703be01c0824cd79651b4ba264a296bf795b76aeb13b2a7819"
Jan 24 06:07:49 deb00 dockerd[510]: time="2017-01-24T06:07:49.327964123-05:00" level=debug msg="attach: stdout: begin"
Jan 24 06:07:49 deb00 dockerd[510]: time="2017-01-24T06:07:49.328020158-05:00" level=debug msg="attach: stderr: begin"

closed time in 17 hours

ko-christ

issue commentmoby/moby

Kernel panic (3.16) on debian jessie when running docker containers with healthchecks

Jessie is no longer supported: https://github.com/docker/docker-ce-packaging/pull/253

If somebody is still hitting this, please ask the distro's kernel maintainers.

ko-christ

comment created time in 17 hours

issue commentmoby/moby

Race condition in debian initscript

wheezy is no longer supported: https://github.com/docker/docker-ce-packaging/pull/253

itsafire

comment created time in 17 hours

issue closedmoby/moby

Race condition in debian initscript

For some system startups I get the following error output on rebooting my machine:

Mon Jul  6 11:00:46 2015: [....] Starting Docker: docker ok
Mon Jul  6 11:00:46 2015: [....] Starting OpenBSD Secure Shell server: sshd ok
Mon Jul  6 11:00:46 2015: [....] Starting MTA:Post http:///var/run/docker.sock/v1.17/containers/redis/start: dial unix /var/run/docker.sock: no such file or directory. Are you trying to connect to a TLS-enabled daemon without TLS?
Mon Jul  6 11:00:46 2015: Error: failed to start one or more containers

Docker does start OK, but not at the moment the initscript prints Starting Docker: docker ok. It is most probably not ready then, only a little bit later.

Sometimes, when the filesystem on my VM is not yet cached by the host, I experience this error. On our production server with rotating disks I had to add a sleep 10 to the script that sets up our environment after a reboot; otherwise docker would not be ready.

The start script declares docker as a required dependency and seems to do the right thing:

#!/bin/sh

### BEGIN INIT INFO
# Provides: dockermachines
# Required-Start:       docker
# Required-Stop:    docker
# Default-Start:    2 3 4 5
# Default-Stop:         0 1 6
# Short-Description:    docker web server environment
### END INIT INFO

# do the startup ...

Steps to reproduce:

  1. /etc/init.d/docker stop
  2. rm /var/run/docker.sock
  3. /etc/init.d/docker start && ls /var/run/docker.sock

This should give the following output:

[ ok ] Starting Docker: docker.
ls: cannot access /var/run/docker.sock: No such file or directory

/var/run/docker.sock is created eventually.
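
A minimal sketch of a readiness check that could replace the fixed sleep 10 (illustrative only, not part of the initscript; the 30-second deadline and poll interval are arbitrary assumptions):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Dial the socket instead of just stat-ing it: the socket file can
	// appear before the daemon is actually accepting connections.
	deadline := time.Now().Add(30 * time.Second)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("unix", "/var/run/docker.sock", time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("docker daemon is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for /var/run/docker.sock")
	os.Exit(1)
}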

Setup info:

uname -a :
Linux debian 3.16.0-0.bpo.4-amd64 #1 SMP Debian 3.16.7-ckt4-3~bpo70+1 (2015-02-12) x86_64 GNU/Linux

docker version: 
Client version: 1.5.0
Client API version: 1.17
Go version (client): go1.4.1
Git commit (client): a8a31ef
OS/Arch (client): linux/amd64
Server version: 1.5.0
Server API version: 1.17
Go version (server): go1.4.1
Git commit (server): a8a31ef

docker info:
Containers: 4
Images: 279
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 287
Execution Driver: native-0.2
Kernel Version: 3.16.0-0.bpo.4-amd64
Operating System: Debian GNU/Linux 7 (wheezy)
CPUs: 2
Total Memory: 986.8 MiB
Name: debian
ID: UCC5:UZDR:M3KU:3WGY:73I4:OQAW:DKKF:KJX2:EY5F:QCOX:EZHB:USYF
WARNING: No memory limit support
WARNING: No swap limit support

closed time in 17 hours

itsafire

issue closedmoby/moby

seccomp support on Debian Jessie

The necessary package for seccomp support in Debian Jessie has been backported.

The build process could be updated to use this backport and remove the restriction documented here (quoted below):

Note: seccomp profiles require seccomp 2.2.1 and are only available starting with Debian 9 “Stretch”, Ubuntu 15.10 “Wily”, and Fedora 22. To use this feature on Ubuntu 14.04, Debian Wheezy, or Debian Jessie, you must download the latest static Docker Linux binary. This feature is currently not available on other distributions.

closed time in 17 hours

lblackstone

issue commentmoby/moby

seccomp support on Debian Jessie

Jessie is no longer supported: https://github.com/docker/docker-ce-packaging/pull/253

lblackstone

comment created time in 17 hours

issue closedmoby/moby

Add ability to communicate with daemon over ssh subsystem

It is too complex to set up an authenticated https binding. Using ssh subsystem functionality would mean no additional authentication is needed. You can take stdin and stdout and tunnel them into the existing http calls, but both client and server have to understand this.

closed time in 17 hours

tomasol

issue commentmoby/moby

Add ability to communicate with daemon over ssh subsystem

DOCKER_HOST=ssh://<user>@<host> was implemented in Docker 18.09
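
For illustration, a minimal Go sketch of connecting over ssh:// via the docker CLI's connection helper (assumes the github.com/docker/cli/cli/connhelper package; user@host is a placeholder, and the remote side needs a docker CLI that understands docker system dial-stdio):

package main

import (
	"context"
	"fmt"
	"net/http"

	"github.com/docker/cli/cli/connhelper"
	"github.com/docker/docker/client"
)

func main() {
	// The helper shells out to "ssh -- user@host docker system dial-stdio"
	// under the hood, so plain ssh authentication is reused.
	helper, err := connhelper.GetConnectionHelper("ssh://user@host")
	if err != nil {
		panic(err)
	}
	httpClient := &http.Client{
		Transport: &http.Transport{DialContext: helper.Dialer},
	}
	cli, err := client.NewClientWithOpts(
		client.WithHTTPClient(httpClient),
		client.WithHost(helper.Host),
		client.WithDialContext(helper.Dialer),
	)
	if err != nil {
		panic(err)
	}
	defer cli.Close()
	v, err := cli.ServerVersion(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Println("server version:", v.Version)
}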

tomasol

comment created time in 17 hours

issue commentmoby/moby

Swarm Jobs Proposal

ReplicatedJob and GlobalJob were implemented in https://github.com/moby/moby/pull/40307
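
As a sketch of what the new modes look like on the API side (assuming the swarm service-mode types added in that PR; image and command are placeholders):

package main

import (
	"fmt"

	"github.com/docker/docker/api/types/swarm"
)

func main() {
	maxConcurrent := uint64(2)
	total := uint64(5)
	// A replicated job: run 5 completions in total, at most 2 at a time.
	spec := swarm.ServiceSpec{
		TaskTemplate: swarm.TaskSpec{
			ContainerSpec: &swarm.ContainerSpec{
				Image:   "alpine",
				Command: []string{"echo", "done"},
			},
		},
		Mode: swarm.ServiceMode{
			ReplicatedJob: &swarm.ReplicatedJob{
				MaxConcurrent:    &maxConcurrent,
				TotalCompletions: &total,
			},
		},
	}
	fmt.Printf("%+v\n", spec.Mode.ReplicatedJob)
}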

dperny

comment created time in 17 hours

pull request commentopencontainers/runc

Added conversion for cpu.weight v2

@mrunalp @hqhq @dqminh PTAL (and https://github.com/opencontainers/runc/pull/2212 https://github.com/opencontainers/runc/pull/2192 as well as this one)

Zyqsempai

comment created time in 17 hours

pull request commentopencontainers/runc

Fix MAJ:MIN io.stat parsing order

Could you retry CI?

Zyqsempai

comment created time in 17 hours

pull request commentcontainerd/containerd

feature: support cgroupstat metric in runc shimv2

needs rebase

fuweid

comment created time in 20 hours

Pull request review commentcontainerd/containerd

Unify dialer implementations

 func connect(address string, d func(string, time.Duration) (net.Conn, error)) (n
 }

 func annonDialer(address string, timeout time.Duration) (net.Conn, error) {
-	address = strings.TrimPrefix(address, "unix://")
-	return net.DialTimeout("unix", "\x00"+address, timeout)
+	return dialer.Dialer("\x00"+address, timeout)

Let's keep TrimPrefix here. I don't think we should accept "\x00unix:///path/to/socket".
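
A minimal standalone sketch of the behavior being asked for here (an illustration, not the actual containerd code; annonDialer above is the real function):

package main

import (
	"net"
	"strings"
	"time"
)

// dialAbstract keeps the TrimPrefix step: it accepts both
// "unix:///run/foo.sock" and "/run/foo.sock", and never produces the
// bogus abstract address "\x00unix:///run/foo.sock".
func dialAbstract(address string, timeout time.Duration) (net.Conn, error) {
	address = strings.TrimPrefix(address, "unix://")
	return net.DialTimeout("unix", "\x00"+address, timeout)
}

func main() {
	if _, err := dialAbstract("unix:///run/example.sock", time.Second); err != nil {
		// Expected to fail unless something is listening on the socket.
		println(err.Error())
	}
}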

vladimiroff

comment created time in 20 hours

pull request commentmoby/moby

Remove v2 schema1 push

needs rebase

tiborvass

comment created time in 20 hours

issue closedmoby/moby

v1 registry - error pulling images with docker 1.8 / OEL 7.1

I'm on docker 1.8 (docker-engine.x86_64 / 1.8.0-1.el7). I'm trying to pull images hosted on my internal corporate registry.

Using docker pull registry.mycorp.com/myfavimg results in Could not reach any registry endpoint.

However, I'm able to pull from Docker Hub without any issue.

$ docker info
Containers: 0
Images: 15
Storage Driver: devicemapper
 Pool Name: docker-252:1-104464387-pool
 Pool Blocksize: 65.54 kB
 Backing Filesystem: extfs
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data Space Used: 2.617 GB
 Data Space Total: 107.4 GB
 Data Space Available: 104.8 GB
 Metadata Space Used: 2.404 MB
 Metadata Space Total: 2.147 GB
 Metadata Space Available: 2.145 GB
 Udev Sync Supported: true
 Deferred Removal Enabled: false
 Data loop file: /scratch/docker/devicemapper/devicemapper/data
 Metadata loop file: /scratch/docker/devicemapper/devicemapper/metadata
 Library Version: 1.02.93-RHEL7 (2015-01-28)
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.8.13-98.el7uek.x86_64
Operating System: Oracle Linux Server 7.1
CPUs: 24
Total Memory: 141.4 GiB
ID: GFRH:Y3RH:66XS:XED5:N7KJ:V4RG:O4M6:INTA:DDIL:BNLL:DJ3N:ARCM

$ docker version
Client:
 Version:      1.8.0
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   0d03096
 Built:        Tue Aug 11 16:48:33 UTC 2015
 OS/Arch:      linux/amd64

Server:
 Version:      1.8.0
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   0d03096
 Built:        Tue Aug 11 16:48:33 UTC 2015
 OS/Arch:      linux/amd64

closed time in 20 hours

vshiva

issue commentmoby/moby

v1 registry - error pulling images with docker 1.8 / OEL 7.1

v1 support has been deprecated. Let me close this issue.

vshiva

comment created time in 20 hours

pull request commentmoby/moby

Remove more registry v1 code

needs rebase

tiborvass

comment created time in 20 hours

issue closedmoby/moby

Enable DCT by default options

It isn't immediately clear how to enable DCT on a host. The only way to control it appears to be the command-line option --disable-content-trust or the environment variable DOCKER_CONTENT_TRUST. Is there no way of telling the docker daemon to allow trusted images only, and to use an on-premise notary server on https://some_url?

closed time in 20 hours

gambol99

issue commentmoby/moby

Enable DCT by default options

duplicate: https://github.com/moby/moby/issues/19128

gambol99

comment created time in 20 hours

issue closedmoby/moby

[Proposal] Move 'contrib/syntax' to its own repository

Hi,

I wanted to ask: wouldn't it make sense to move contrib/syntax to its own separate repository? The thing is that the current moby/moby repository is 89 MB, whereas contrib/syntax is just 10.5 KB. For editors like vim, even huge plugins are around 2 MB, which makes this repository an outlier. Plus, most updates to the repository are not related to the syntax files at all, so running :PlugInstall or :PlugUpdate with vim-plug takes more time because it has to process the whole moby/moby repository.

Best, Stanislav

closed time in 20 hours

stolho

issue commentmoby/moby

[Proposal] Move 'contrib/syntax' to its own repository

The vim plugin was removed from this repo: https://github.com/moby/moby/pull/40354

stolho

comment created time in 20 hours

issue closedmoby/moby

SIGSEGV: segmentation violation (containerd)


Description

Today I logged in on the docker machine and typed "docker service ls" because some services were not available, but didn't receive any output; it just hung and waited. Then I looked in the syslog and found a crash.

Steps to reproduce the issue: N/A, it happened overnight.

Describe the results you received:

The stack trace is in the middle of the log below:

Feb  6 08:12:08 docker0104 dockerd[1601]: time="2018-02-06T08:12:08.155021222+01:00" level=warning msg="Health check for container 8eca74c3ef255d0c96bf49a13300c50a51317f0b5717d3e9c612fa9c8dc1eea3 error: context cancelled"
Feb  6 08:12:08 docker0104 dockerd[1601]: time="2018-02-06T08:12:08.155085944+01:00" level=warning msg="Health check for container 59561e71392b4a43f7c7e21e3ac1511431bd76884cad495ec118a95962bde645 error: context cancelled"
Feb  6 08:12:08 docker0104 dockerd[1601]: time="2018-02-06T08:12:08.837900388+01:00" level=warning msg="Health check for container 973cebc9ba19cacceb5496cac892c22fd2582d81b65c92d053fd67b07f7953f9 error: context cancelled"
Feb  6 08:12:09 docker0104 dockerd[1601]: time="2018-02-06T08:12:09.088110210+01:00" level=warning msg="Ignoring Exit Event, no such exec command found" container=8eca74c3ef255d0c96bf49a13300c50a51317f0b5717d3e9c612fa9c8dc1eea3 exec-id=02bf67035e7eec5935ec5f2d38ad6b1ede7a31761e0263f3126502b80d391956 exec-pid=31930
Feb  6 08:12:10 docker0104 dockerd[1601]: time="2018-02-06T08:12:10.135118421+01:00" level=warning msg="Ignoring Exit Event, no such exec command found" container=59561e71392b4a43f7c7e21e3ac1511431bd76884cad495ec118a95962bde645 exec-id=26a565618e4b4b25eb299dd33b6f99f522623cfdcfae1192a7d4e2c901d65ef5 exec-pid=31929
Feb  6 08:12:15 docker0104 dockerd[1601]: time="2018-02-06T08:12:15.565959041+01:00" level=warning msg="Ignoring Exit Event, no such exec command found" container=973cebc9ba19cacceb5496cac892c22fd2582d81b65c92d053fd67b07f7953f9 exec-id=ef2bf3c52cafd7e9bd6ee12303b343de1a71982ad6340d65eb01aba159586fb6 exec-pid=31972
Feb  6 08:14:08 docker0104 dockerd[1601]: time="2018-02-06T08:14:08.436518314+01:00" level=warning msg="Health check for container 973cebc9ba19cacceb5496cac892c22fd2582d81b65c92d053fd67b07f7953f9 error: context cancelled"
Feb  6 08:14:11 docker0104 dockerd[1601]: time="2018-02-06T08:14:11.123815156+01:00" level=error msg="stream copy error: reading from a closed fifo"
Feb  6 08:14:11 docker0104 dockerd[1601]: time="2018-02-06T08:14:11.123808979+01:00" level=error msg="stream copy error: reading from a closed fifo"
Feb  6 08:14:11 docker0104 dockerd[1601]: time="2018-02-06T08:14:11.123809011+01:00" level=error msg="stream copy error: reading from a closed fifo"
Feb  6 08:14:11 docker0104 dockerd[1601]: time="2018-02-06T08:14:11.123809081+01:00" level=error msg="stream copy error: reading from a closed fifo"
Feb  6 08:14:17 docker0104 dockerd[1601]: time="2018-02-06T08:14:16.881082312+01:00" level=warning msg="Health check for container 59561e71392b4a43f7c7e21e3ac1511431bd76884cad495ec118a95962bde645 error: context cancelled"
Feb  6 08:14:17 docker0104 dockerd[1601]: time="2018-02-06T08:14:17.020975028+01:00" level=warning msg="Health check for container 8eca74c3ef255d0c96bf49a13300c50a51317f0b5717d3e9c612fa9c8dc1eea3 error: context cancelled"
Feb  6 08:14:17 docker0104 dockerd[1601]: time="2018-02-06T08:14:17.849623194+01:00" level=warning msg="Health check for container 007ad739e63dbaba722b0921983047b9c1f1f33f5a2e53097f77b1fdbd49419e error: context cancelled"
Feb  6 08:14:23 docker0104 dockerd[1601]: time="2018-02-06T08:14:23.884214594+01:00" level=warning msg="Ignoring Exit Event, no such exec command found" container=59561e71392b4a43f7c7e21e3ac1511431bd76884cad495ec118a95962bde645 exec-id=1ee95a310a0ac25158bb9ff98183e77a780df29e9bd1e996ea965cb1ac968c32 exec-pid=2575
Feb  6 08:14:23 docker0104 dockerd[1601]: time="2018-02-06T08:14:23.884197831+01:00" level=warning msg="Ignoring Exit Event, no such exec command found" container=8eca74c3ef255d0c96bf49a13300c50a51317f0b5717d3e9c612fa9c8dc1eea3 exec-id=8503ecf6d18493cf3c8ed3eb1b394ef786fe396c001f22f6afa9efc5be44a0f1 exec-pid=2576
Feb  6 08:14:24 docker0104 dockerd[1601]: time="2018-02-06T08:14:24.002480451+01:00" level=error msg="stream copy error: reading from a closed fifo"
Feb  6 08:14:24 docker0104 dockerd[1601]: time="2018-02-06T08:14:24.002484701+01:00" level=error msg="stream copy error: reading from a closed fifo"
Feb  6 08:14:34 docker0104 dockerd[1601]: time="2018-02-06T08:14:34.733983207+01:00" level=error msg="stream copy error: reading from a closed fifo"
Feb  6 08:14:35 docker0104 dockerd[1601]: time="2018-02-06T08:14:34.734030773+01:00" level=error msg="stream copy error: reading from a closed fifo"
Feb  6 08:14:35 docker0104 dockerd[1601]: time="2018-02-06T08:14:34.734030226+01:00" level=error msg="stream copy error: reading from a closed fifo"
Feb  6 08:14:35 docker0104 dockerd[1601]: time="2018-02-06T08:14:34.734031828+01:00" level=error msg="stream copy error: reading from a closed fifo"
Feb  6 08:14:36 docker0104 kernel: [59254.874198] CIFS VFS: Server srv-fs002.boeblingen.mcl.local has not responded in 120 seconds. Reconnecting...
Feb  6 08:14:36 docker0104 kernel: [59254.883852] CIFS VFS: Free previous auth_key.response = ffff8a79456d2800
Feb  6 08:14:46 docker0104 dockerd[1601]: time="2018-02-06T08:14:43.922767870+01:00" level=warning msg="Ignoring Exit Event, no such exec command found" container=007ad739e63dbaba722b0921983047b9c1f1f33f5a2e53097f77b1fdbd49419e exec-id=13610d9387c1dd04139b07fba26733d6e67f0c15df36dea214527e866a8b9527 exec-pid=2513
Feb  6 08:14:59 docker0104 dockerd[1601]: time="2018-02-06T08:14:56.361745666+01:00" level=info msg="killing and restarting containerd" module=libcontainerd pid=1874
Feb  6 08:17:16 docker0104 CRON[2750]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Feb  6 08:17:56 docker0104 kernel: [59455.421161] CIFS VFS: Free previous auth_key.response = ffff8a7926615e00
Feb  6 08:17:56 docker0104 kernel: [59455.421741] CIFS VFS: Free previous auth_key.response = ffff8a7f3dfb9300
Feb  6 08:17:56 docker0104 kernel: [59455.422601] CIFS VFS: Free previous auth_key.response = ffff8a7913681800
Feb  6 08:18:14 docker0104 gitlab-runner[1524]: time="2018-02-06T08:17:54+01:00" level=warning msg="Checking for jobs... failed" runner=96809613 status="couldn't execute POST against https://git.mcl.de/api/v4/jobs/request: Post https://git.mcl.de/api/v4/jobs/request: dial tcp: i/o timeout" #012<nil>
Feb  6 08:18:31 docker0104 gitlab-ci-multi-runner[1524]: time="2018-02-06T08:17:54+01:00" level=warning msg="Checking for jobs... failed" runner=96809613 status="couldn't execute POST against https://git.mcl.de/api/v4/jobs/request: Post https://git.mcl.de/api/v4/jobs/request: dial tcp: i/o timeout"
Feb  6 08:19:14 docker0104 kernel: [59532.708869] CIFS VFS: Free previous auth_key.response = ffff8a7af5673000
Feb  6 08:19:14 docker0104 kernel: [59532.709189] CIFS VFS: Free previous auth_key.response = ffff8a7a93b6a900
Feb  6 08:19:40 docker0104 dockerd[1601]: panic: runtime error: invalid memory address or nil pointer dereference
Feb  6 08:19:40 docker0104 dockerd[1601]: [signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0x1c0a0e9]
Feb  6 08:19:40 docker0104 dockerd[1601]: goroutine 2976627 [running]:
Feb  6 08:19:40 docker0104 dockerd[1601]: github.com/docker/docker/vendor/github.com/containerd/containerd/dialer.Dialer.func2(0xc4259fc180)
Feb  6 08:19:40 docker0104 dockerd[1601]: #011/go/src/github.com/docker/docker/vendor/github.com/containerd/containerd/dialer/dialer.go:46 +0x59
Feb  6 08:19:40 docker0104 dockerd[1601]: created by github.com/docker/docker/vendor/github.com/containerd/containerd/dialer.Dialer
Feb  6 08:19:40 docker0104 dockerd[1601]: #011/go/src/github.com/docker/docker/vendor/github.com/containerd/containerd/dialer/dialer.go:43 +0x1c7
Feb  6 08:19:40 docker0104 systemd[1]: docker.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Feb  6 08:19:40 docker0104 systemd[1]: docker.service: Unit entered failed state.
Feb  6 08:19:40 docker0104 systemd[1]: docker.service: Failed with result 'exit-code'.
Feb  6 08:19:41 docker0104 systemd[1]: docker.service: Service hold-off time over, scheduling restart.
Feb  6 08:19:41 docker0104 systemd[1]: Stopped Docker Application Container Engine.
Feb  6 08:19:41 docker0104 systemd[1]: Closed Docker Socket for the API.
Feb  6 08:19:41 docker0104 systemd[1]: Stopping Docker Socket for the API.
Feb  6 08:19:41 docker0104 systemd[1]: Starting Docker Socket for the API.
Feb  6 08:19:41 docker0104 systemd[1]: Listening on Docker Socket for the API.
Feb  6 08:19:41 docker0104 systemd[1]: Starting Docker Application Container Engine...
Feb  6 08:19:46 docker0104 dockerd[2809]: time="2018-02-06T08:19:46.970783969+01:00" level=info msg="libcontainerd: started new docker-containerd process" pid=2926
Feb  6 08:19:46 docker0104 dockerd[2809]: time="2018-02-06T08:19:46+01:00" level=info msg="starting containerd" module=containerd revision=89623f28b87a6004d4b785663257362d1658a729 version=v1.0.0
Feb  6 08:19:47 docker0104 dockerd[2809]: time="2018-02-06T08:19:47+01:00" level=info msg="setting subreaper..." module=containerd
Feb  6 08:19:47 docker0104 dockerd[2809]: time="2018-02-06T08:19:47+01:00" level=info msg="changing OOM score to -500" module=containerd
Feb  6 08:19:47 docker0104 dockerd[2809]: time="2018-02-06T08:19:47+01:00" level=info msg="loading plugin "io.containerd.content.v1.content"..." module=containerd type=io.containerd.content.v1
Feb  6 08:19:47 docker0104 dockerd[2809]: time="2018-02-06T08:19:47+01:00" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." module=containerd type=io.containerd.snapshotter.v1
Feb  6 08:19:47 docker0104 dockerd[2809]: time="2018-02-06T08:19:47+01:00" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter" module=containerd
Feb  6 08:19:47 docker0104 dockerd[2809]: time="2018-02-06T08:19:47+01:00" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." module=containerd type=io.containerd.snapshotter.v1
Feb  6 08:19:47 docker0104 dockerd[2809]: time="2018-02-06T08:19:47+01:00" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." module=containerd type=io.containerd.metadata.v1
Feb  6 08:19:47 docker0104 dockerd[2809]: time="2018-02-06T08:19:47+01:00" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter" module="containerd/io.containerd.metadata.v1.bolt"
Feb  6 08:19:47 docker0104 dockerd[2809]: time="2018-02-06T08:19:47+01:00" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." module=containerd type=io.containerd.differ.v1
Feb  6 08:19:47 docker0104 dockerd[2809]: time="2018-02-06T08:19:47+01:00" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." module=containerd type=io.containerd.gc.v1
Feb  6 08:19:47 docker0104 dockerd[2809]: time="2018-02-06T08:19:47+01:00" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." module=containerd type=io.containerd.grpc.v1
Feb  6 08:19:47 docker0104 dockerd[2809]: time="2018-02-06T08:19:47+01:00" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." module=containerd type=io.containerd.grpc.v1
Feb  6 08:19:47 docker0104 dockerd[2809]: time="2018-02-06T08:19:47+01:00" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." module=containerd type=io.containerd.grpc.v1
Feb  6 08:19:47 docker0104 dockerd[2809]: time="2018-02-06T08:19:47+01:00" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." module=containerd type=io.containerd.grpc.v1
Feb  6 08:19:47 docker0104 dockerd[2809]: time="2018-02-06T08:19:47+01:00" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." module=containerd type=io.containerd.grpc.v1
Feb  6 08:19:47 docker0104 dockerd[2809]: time="2018-02-06T08:19:47+01:00" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." module=containerd type=io.containerd.grpc.v1
Feb  6 08:19:47 docker0104 dockerd[2809]: time="2018-02-06T08:19:47+01:00" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." module=containerd type=io.containerd.grpc.v1
Feb  6 08:19:47 docker0104 dockerd[2809]: time="2018-02-06T08:19:47+01:00" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." module=containerd type=io.containerd.grpc.v1
Feb  6 08:19:47 docker0104 dockerd[2809]: time="2018-02-06T08:19:47+01:00" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." module=containerd type=io.containerd.grpc.v1
Feb  6 08:19:47 docker0104 dockerd[2809]: time="2018-02-06T08:19:47+01:00" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." module=containerd type=io.containerd.monitor.v1
Feb  6 08:19:47 docker0104 dockerd[2809]: time="2018-02-06T08:19:47+01:00" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." module=containerd type=io.containerd.runtime.v1
Feb  6 08:19:48 docker0104 dockerd[2809]: time="2018-02-06T08:19:48+01:00" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." module=containerd type=io.containerd.grpc.v1
Feb  6 08:19:48 docker0104 dockerd[2809]: time="2018-02-06T08:19:48+01:00" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." module=containerd type=io.containerd.grpc.v1
Feb  6 08:19:48 docker0104 dockerd[2809]: time="2018-02-06T08:19:48+01:00" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." module=containerd type=io.containerd.grpc.v1
Feb  6 08:19:48 docker0104 dockerd[2809]: time="2018-02-06T08:19:48+01:00" level=info msg=serving... address="/var/run/docker/containerd/docker-containerd-debug.sock" module="containerd/debug"
Feb  6 08:19:48 docker0104 dockerd[2809]: time="2018-02-06T08:19:48+01:00" level=info msg=serving... address="/var/run/docker/containerd/docker-containerd.sock" module="containerd/grpc"
Feb  6 08:19:48 docker0104 dockerd[2809]: time="2018-02-06T08:19:48+01:00" level=info msg="containerd successfully booted in 1.307976s" module=containerd
Feb  6 08:20:28 docker0104 dockerd[2809]: time="2018-02-06T08:20:28.707192430+01:00" level=info msg="Graph migration to content-addressability took 0.00 seconds"
Feb  6 08:20:28 docker0104 dockerd[2809]: time="2018-02-06T08:20:28.708151181+01:00" level=warning msg="Your kernel does not support swap memory limit"
Feb  6 08:20:28 docker0104 dockerd[2809]: time="2018-02-06T08:20:28.708222439+01:00" level=warning msg="Your kernel does not support cgroup rt period"
Feb  6 08:20:28 docker0104 dockerd[2809]: time="2018-02-06T08:20:28.708238812+01:00" level=warning msg="Your kernel does not support cgroup rt runtime"
Feb  6 08:20:28 docker0104 dockerd[2809]: time="2018-02-06T08:20:28.709013738+01:00" level=info msg="Loading containers: start."
Feb  6 08:20:37 docker0104 dockerd[2809]: time="2018-02-06T08:20:37.454079181+01:00" level=error msg="stream copy error: reading from a closed fifo"
Feb  6 08:20:37 docker0104 dockerd[2809]: time="2018-02-06T08:20:37.454101420+01:00" level=error msg="stream copy error: reading from a closed fifo"
Feb  6 08:20:37 docker0104 dockerd[2809]: time="2018-02-06T08:20:37+01:00" level=warning msg="unable to retrieve cgroup on stop" error="cgroup does not exist: not found" module="containerd/io.containerd.monitor.v1.cgroups"
Feb  6 08:20:37 docker0104 dockerd[2809]: time="2018-02-06T08:20:37.526503122+01:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb  6 08:20:37 docker0104 dockerd[2809]: time="2018-02-06T08:20:37.560030756+01:00" level=error msg="stream copy error: reading from a closed fifo"
Feb  6 08:20:37 docker0104 dockerd[2809]: time="2018-02-06T08:20:37.560026611+01:00" level=error msg="stream copy error: reading from a closed fifo"
Feb  6 08:20:37 docker0104 dockerd[2809]: time="2018-02-06T08:20:37+01:00" level=warning msg="unable to retrieve cgroup on stop" error="cgroup does not exist: not found" module="containerd/io.containerd.monitor.v1.cgroups"
Feb  6 08:20:37 docker0104 dockerd[2809]: time="2018-02-06T08:20:37.616211833+01:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb  6 08:20:37 docker0104 dockerd[2809]: time="2018-02-06T08:20:37.681315029+01:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb  6 08:20:37 docker0104 dockerd[2809]: time="2018-02-06T08:20:37.697471483+01:00" level=error msg="stream copy error: reading from a closed fifo"
Feb  6 08:20:37 docker0104 dockerd[2809]: time="2018-02-06T08:20:37.697532925+01:00" level=error msg="stream copy error: reading from a closed fifo"
Feb  6 08:20:37 docker0104 dockerd[2809]: time="2018-02-06T08:20:37+01:00" level=warning msg="unable to retrieve cgroup on stop" error="cgroup does not exist: not found" module="containerd/io.containerd.monitor.v1.cgroups"

Describe the results you expected:

No crash

Additional information you deem important (e.g. issue happens only occasionally):

Output of docker version:

Client:
 Version:	18.01.0-ce
 API version:	1.35
 Go version:	go1.9.2
 Git commit:	03596f5
 Built:	Wed Jan 10 20:13:21 2018
 OS/Arch:	linux/amd64
 Experimental:	false
 Orchestrator:	swarm

Server:
 Engine:
  Version:	18.01.0-ce
  API version:	1.35 (minimum version 1.12)
  Go version:	go1.9.2
  Git commit:	03596f5
  Built:	Wed Jan 10 20:11:47 2018
  OS/Arch:	linux/amd64
  Experimental:	false

Output of docker info:

Containers: 122
 Running: 17
 Paused: 0
 Stopped: 105
Images: 141
Server Version: 18.01.0-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: active
 NodeID: odfdzn8rp9qcvblnk39p5lpto
 Is Manager: true
 ClusterID: fqffr81nxyg046kjm2uaniz8g
 Managers: 1
 Nodes: 1
 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot Interval: 10000
  Number of Old Snapshots to Retain: 0
  Heartbeat Tick: 1
  Election Tick: 3
 Dispatcher:
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
  Force Rotate: 0
 Autolock Managers: false
 Root Rotation In Progress: false
 Node Address: 192.168.85.104
 Manager Addresses:
  192.168.85.104:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 89623f28b87a6004d4b785663257362d1658a729
runc version: b2567b37d7b75eb4cf325b77297b140ea686ce8f
init version: 949e6fa
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.13.0-32-generic
Operating System: Ubuntu 17.10
OSType: linux
Architecture: x86_64
CPUs: 16
Total Memory: 39.28GiB
Name: docker0104
ID: QRLZ:G6LR:KI4X:JY46:PI4C:3LAU:ES2K:LHQO:FU4I:6RE5:NU4H:VHVQ
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: fank
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

WARNING: No swap limit support

Additional environment details (AWS, VirtualBox, physical, etc.): Physical server running the latest Ubuntu 17.10

closed time in 20 hours

Fank

issue closedmoby/moby

SIGSEGV [libnetwork]

Description

The docker service on one of our Swarm nodes crashed yesterday. The service automatically restarted, but left a number of containers running from before the crash and unable to access the network until they were restarted.

Steps to reproduce the issue: Unknown; the server was running normally with no changes being made.

Describe the results you received:

The docker service unexpectedly crashed with the following log output:

Aug 01 17:44:07 go2-docker-1 dockerd[5875]: time="2018-08-01T17:44:07.457222097-05:00" level=warning msg="Peer operation failed:Unable to find the peerDB for nid:wehmbwcr18tt35ct5uqhf9c3t op:&{3 wehmbwcr18tt35ct5uqhf9c3t  [] [] [] [] fal
Aug 01 17:44:07 go2-docker-1 dockerd[5875]: panic: runtime error: invalid memory address or nil pointer dereference
Aug 01 17:44:07 go2-docker-1 dockerd[5875]: [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x555991b67fc6]
Aug 01 17:44:07 go2-docker-1 dockerd[5875]: goroutine 2869372 [running]:
Aug 01 17:44:07 go2-docker-1 dockerd[5875]: github.com/docker/docker/vendor/github.com/docker/libnetwork.(*network).CopyTo(0xc421922000, 0x555993901f60, 0xc4235f4700, 0x0, 0x0)
Aug 01 17:44:07 go2-docker-1 dockerd[5875]:         /go/src/github.com/docker/docker/vendor/github.com/docker/libnetwork/network.go:500 +0x5d6
Aug 01 17:44:07 go2-docker-1 dockerd[5875]: github.com/docker/docker/vendor/github.com/docker/libnetwork/datastore.(*cache).get(0xc42039d780, 0xc421c89580, 0x36, 0x555993901f60, 0xc4235f4700, 0x0, 0x0)
Aug 01 17:44:07 go2-docker-1 dockerd[5875]:         /go/src/github.com/docker/docker/vendor/github.com/docker/libnetwork/datastore/cache.go:160 +0x1ef
Aug 01 17:44:07 go2-docker-1 dockerd[5875]: github.com/docker/docker/vendor/github.com/docker/libnetwork/datastore.(*datastore).GetObject(0xc42046d5c0, 0xc421c89580, 0x36, 0x555993901f60, 0xc4235f4700, 0x0, 0x0)
Aug 01 17:44:07 go2-docker-1 dockerd[5875]:         /go/src/github.com/docker/docker/vendor/github.com/docker/libnetwork/datastore/datastore.go:481 +0x195
Aug 01 17:44:07 go2-docker-1 dockerd[5875]: github.com/docker/docker/vendor/github.com/docker/libnetwork.(*controller).getNetworkFromStore(0xc4200e6c00, 0xc4222e89a0, 0x19, 0x49, 0xc42302e8c0, 0x49)
Aug 01 17:44:07 go2-docker-1 dockerd[5875]:         /go/src/github.com/docker/docker/vendor/github.com/docker/libnetwork/store.go:85 +0x161
Aug 01 17:44:07 go2-docker-1 dockerd[5875]: github.com/docker/docker/vendor/github.com/docker/libnetwork.(*controller).NetworkByID(0xc4200e6c00, 0xc4222e89a0, 0x19, 0x5559938c4f30, 0xc422cfd450, 0xc422369db8, 0x555990e6d456)
Aug 01 17:44:07 go2-docker-1 dockerd[5875]:         /go/src/github.com/docker/docker/vendor/github.com/docker/libnetwork/controller.go:1042 +0x4e
Aug 01 17:44:07 go2-docker-1 dockerd[5875]: github.com/docker/docker/daemon.(*Daemon).GetNetworkByID(0xc420462380, 0xc4222e89a0, 0x19, 0xc422369e80, 0xc422369e78, 0xc422369e70, 0xc422369e60)
Aug 01 17:44:07 go2-docker-1 dockerd[5875]:         /go/src/github.com/docker/docker/daemon/network.go:86 +0x59
Aug 01 17:44:07 go2-docker-1 dockerd[5875]: github.com/docker/docker/daemon.(*Daemon).DeleteManagedNetwork(0xc420462380, 0xc4222e89a0, 0x19, 0x5559923336c6, 0x12)
Aug 01 17:44:07 go2-docker-1 dockerd[5875]:         /go/src/github.com/docker/docker/daemon/network.go:491 +0x45
Aug 01 17:44:07 go2-docker-1 dockerd[5875]: github.com/docker/docker/daemon/cluster/executor/container.(*containerAdapter).removeNetworks(0xc4216c0680, 0x5559938f8360, 0xc42096e540, 0x0, 0x0)
Aug 01 17:44:07 go2-docker-1 dockerd[5875]:         /go/src/github.com/docker/docker/daemon/cluster/executor/container/adapter.go:173 +0xf5
Aug 01 17:44:07 go2-docker-1 dockerd[5875]: github.com/docker/docker/daemon/cluster/executor/container.(*controller).Remove(0xc42205a280, 0x5559938f8360, 0xc42096e540, 0x5559938f8360, 0xc42096e540)
Aug 01 17:44:07 go2-docker-1 dockerd[5875]:         /go/src/github.com/docker/docker/daemon/cluster/executor/container/controller.go:401 +0xb8
Aug 01 17:44:07 go2-docker-1 dockerd[5875]: github.com/docker/docker/vendor/github.com/docker/swarmkit/agent.reconcileTaskState.func1(0xc421bd19e0)
Aug 01 17:44:07 go2-docker-1 dockerd[5875]:         /go/src/github.com/docker/docker/vendor/github.com/docker/swarmkit/agent/worker.go:270 +0xa9
Aug 01 17:44:07 go2-docker-1 dockerd[5875]: created by github.com/docker/docker/vendor/github.com/docker/swarmkit/agent.reconcileTaskState
Aug 01 17:44:07 go2-docker-1 dockerd[5875]:         /go/src/github.com/docker/docker/vendor/github.com/docker/swarmkit/agent/worker.go:313 +0xf75
Aug 01 17:44:07 go2-docker-1 systemd[1]: docker.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Aug 01 17:44:07 go2-docker-1 systemd[1]: docker.service: Unit entered failed state.
Aug 01 17:44:07 go2-docker-1 systemd[1]: docker.service: Failed with result 'exit-code'.
Aug 01 17:44:07 go2-docker-1 systemd[1]: docker.service: Service hold-off time over, scheduling restart.
Aug 01 17:44:07 go2-docker-1 systemd[1]: Stopped Docker Application Container Engine.
Aug 01 17:44:07 go2-docker-1 systemd[1]: Starting Docker Application Container Engine...
Aug 01 17:44:07 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:07-05:00" level=warning msg="Running experimental build"
Aug 01 17:44:07 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:07.741426488-05:00" level=warning msg="Error while setting daemon root propagation, this is not generally critical but may cause some functionality to not work or fallba
Aug 01 17:44:07 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:07.743305865-05:00" level=info msg="libcontainerd: started new docker-containerd process" pid=64757
Aug 01 17:44:07 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:07-05:00" level=info msg="starting containerd" module=containerd revision=773c489c9c1b21a6d78b5c538cd395416ec50f88 version=v1.0.3
Aug 01 17:44:07 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:07-05:00" level=info msg="loading plugin "io.containerd.content.v1.content"..." module=containerd type=io.containerd.content.v1
Aug 01 17:44:07 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:07-05:00" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." module=containerd type=io.containerd.snapshotter.v1
Aug 01 17:44:07 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:07-05:00" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs m
Aug 01 17:44:07 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:07-05:00" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." module=containerd type=io.containerd.snapshotter.v1
Aug 01 17:44:07 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:07-05:00" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." module=containerd type=io.containerd.metadata.v1
Aug 01 17:44:07 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:07-05:00" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be
Aug 01 17:44:07 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:07-05:00" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." module=containerd type=io.containerd.differ.v1
Aug 01 17:44:07 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:07-05:00" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." module=containerd type=io.containerd.gc.v1
Aug 01 17:44:07 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:07-05:00" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." module=containerd type=io.containerd.grpc.v1
Aug 01 17:44:07 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:07-05:00" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." module=containerd type=io.containerd.grpc.v1
Aug 01 17:44:07 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:07-05:00" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." module=containerd type=io.containerd.grpc.v1
Aug 01 17:44:07 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:07-05:00" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." module=containerd type=io.containerd.grpc.v1
Aug 01 17:44:07 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:07-05:00" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." module=containerd type=io.containerd.grpc.v1
Aug 01 17:44:07 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:07-05:00" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." module=containerd type=io.containerd.grpc.v1
Aug 01 17:44:07 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:07-05:00" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." module=containerd type=io.containerd.grpc.v1
Aug 01 17:44:07 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:07-05:00" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." module=containerd type=io.containerd.grpc.v1
Aug 01 17:44:07 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:07-05:00" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." module=containerd type=io.containerd.grpc.v1
Aug 01 17:44:07 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:07-05:00" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." module=containerd type=io.containerd.monitor.v1
Aug 01 17:44:07 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:07-05:00" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." module=containerd type=io.containerd.runtime.v1
Aug 01 17:44:07 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:07-05:00" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." module=containerd type=io.containerd.grpc.v1
Aug 01 17:44:07 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:07-05:00" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." module=containerd type=io.containerd.grpc.v1
Aug 01 17:44:07 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:07-05:00" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." module=containerd type=io.containerd.grpc.v1
Aug 01 17:44:07 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:07-05:00" level=info msg=serving... address="/var/run/docker/containerd/docker-containerd-debug.sock" module="containerd/debug"
Aug 01 17:44:07 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:07-05:00" level=info msg=serving... address="/var/run/docker/containerd/docker-containerd.sock" module="containerd/grpc"
Aug 01 17:44:07 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:07-05:00" level=info msg="containerd successfully booted in 0.065684s" module=containerd
Aug 01 17:44:07 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:07.836595293-05:00" level=info msg="[graphdriver] using prior storage driver: overlay2"
Aug 01 17:44:08 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:08.118872455-05:00" level=info msg="Graph migration to content-addressability took 0.00 seconds"
Aug 01 17:44:08 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:08.119221058-05:00" level=warning msg="Your kernel does not support cgroup rt period"
Aug 01 17:44:08 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:08.119461560-05:00" level=warning msg="Your kernel does not support cgroup rt runtime"
Aug 01 17:44:08 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:08.120475379-05:00" level=info msg="Loading containers: start."
Aug 01 17:44:08 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:08.773112838-05:00" level=error msg="stream copy error: reading from a closed fifo"
Aug 01 17:44:08 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:08.773649369-05:00" level=error msg="stream copy error: reading from a closed fifo"
Aug 01 17:44:08 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:08-05:00" level=warning msg="unable to retrieve cgroup on stop" error="cgroup does not exist: not found" module="containerd/io.containerd.monitor.v1.cgroups"
Aug 01 17:44:09 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:09-05:00" level=error msg="connecting to shim" error=<nil> id=1de999e625646e04a99312776e57e38a16784d91bf0e9a1787bb134da183e31e module="containerd/io.containerd.runtime.v
Aug 01 17:44:09 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:09.017100512-05:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 17:44:09 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:09-05:00" level=warning msg="unmount task rootfs" error="no such file or directory" id=1de999e625646e04a99312776e57e38a16784d91bf0e9a1787bb134da183e31e module="container
Aug 01 17:44:09 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:09.102973311-05:00" level=warning msg="unknown container" container=1de999e625646e04a99312776e57e38a16784d91bf0e9a1787bb134da183e31e module=libcontainerd namespace=moby
Aug 01 17:44:09 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:09.105375153-05:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 17:44:09 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:09-05:00" level=error msg="connecting to shim" error=<nil> id=1f3bb4916f17aa98702f3ef0132d95256efd92439e02d6ee4d167964b79ee302 module="containerd/io.containerd.runtime.v
Aug 01 17:44:09 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:09.375524645-05:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 17:44:09 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:09-05:00" level=error msg="connecting to shim" error=<nil> id=d0b997b91de4634d4820d563981938e0d5937dbce69c33138e1093f159db871f module="containerd/io.containerd.runtime.v
Aug 01 17:44:09 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:09.437314203-05:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 17:44:09 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:09-05:00" level=warning msg="unmount task rootfs" error="no such file or directory" id=1f3bb4916f17aa98702f3ef0132d95256efd92439e02d6ee4d167964b79ee302 module="container
Aug 01 17:44:09 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:09-05:00" level=warning msg="unmount task rootfs" error="no such file or directory" id=d0b997b91de4634d4820d563981938e0d5937dbce69c33138e1093f159db871f module="container
Aug 01 17:44:09 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:09.485646220-05:00" level=warning msg="unknown container" container=1f3bb4916f17aa98702f3ef0132d95256efd92439e02d6ee4d167964b79ee302 module=libcontainerd namespace=moby
Aug 01 17:44:09 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:09.485718161-05:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 17:44:09 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:09.485918104-05:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 17:44:09 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:09-05:00" level=error msg="connecting to shim" error=<nil> id=4b4484f0a99c676364abfd53d49cf7559f841a1a078f9b70024af287d42789e1 module="containerd/io.containerd.runtime.v
Aug 01 17:44:09 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:09.524834783-05:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 17:44:09 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:09-05:00" level=error msg="connecting to shim" error=<nil> id=448dc73adbb211196163dece41e676b17b932eb9ffd5e650ff16868bce64e2d0 module="containerd/io.containerd.runtime.v
Aug 01 17:44:09 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:09-05:00" level=error msg="connecting to shim" error=<nil> id=d680a7fdc2884e8ad7ba3b516d755017b0ea545cffc90d4879a29d099d02a746 module="containerd/io.containerd.runtime.v
Aug 01 17:44:09 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:09.559608105-05:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 17:44:09 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:09.559658537-05:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 17:44:09 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:09-05:00" level=warning msg="unmount task rootfs" error="no such file or directory" id=4b4484f0a99c676364abfd53d49cf7559f841a1a078f9b70024af287d42789e1 module="container
Aug 01 17:44:09 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:09.597922690-05:00" level=warning msg="Ignoring Exit Event, no such exec command found" container=d0b997b91de4634d4820d563981938e0d5937dbce69c33138e1093f159db871f exec-i
Aug 01 17:44:09 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:09.600914419-05:00" level=warning msg="unknown container" container=4b4484f0a99c676364abfd53d49cf7559f841a1a078f9b70024af287d42789e1 module=libcontainerd namespace=moby
Aug 01 17:44:09 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:09.605348129-05:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 17:44:09 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:09-05:00" level=error msg="connecting to shim" error=<nil> id=9a1ba9da95aea9ed275f9ced39dfe72ebd980dd8241075eab407c1ad05cf1c99 module="containerd/io.containerd.runtime.v
Aug 01 17:44:09 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:09-05:00" level=warning msg="unmount task rootfs" error="no such file or directory" id=448dc73adbb211196163dece41e676b17b932eb9ffd5e650ff16868bce64e2d0 module="container
Aug 01 17:44:09 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:09.616778707-05:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 17:44:09 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:09.626711833-05:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 17:44:09 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:09.639629312-05:00" level=warning msg="Ignoring Exit Event, no such exec command found" container=448dc73adbb211196163dece41e676b17b932eb9ffd5e650ff16868bce64e2d0 exec-i
Aug 01 17:44:09 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:09-05:00" level=warning msg="unmount task rootfs" error="no such file or directory" id=d680a7fdc2884e8ad7ba3b516d755017b0ea545cffc90d4879a29d099d02a746 module="container
Aug 01 17:44:09 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:09.646629860-05:00" level=warning msg="unknown container" container=d680a7fdc2884e8ad7ba3b516d755017b0ea545cffc90d4879a29d099d02a746 module=libcontainerd namespace=moby
Aug 01 17:44:09 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:09.646985619-05:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 17:44:09 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:09-05:00" level=warning msg="unmount task rootfs" error="no such file or directory" id=9a1ba9da95aea9ed275f9ced39dfe72ebd980dd8241075eab407c1ad05cf1c99 module="container
Aug 01 17:44:09 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:09.669119062-05:00" level=warning msg="unknown container" container=9a1ba9da95aea9ed275f9ced39dfe72ebd980dd8241075eab407c1ad05cf1c99 module=libcontainerd namespace=moby
Aug 01 17:44:09 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:09.669473501-05:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 17:44:10 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:10.237492518-05:00" level=info msg="Removing stale sandbox ae6239dd73bd457ec7e4f086e96250181493ddd4083f1595b12d8b24aa07566b (1de999e625646e04a99312776e57e38a16784d91bf0e
Aug 01 17:44:10 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:10.250663289-05:00" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 641f93109d11535673164e58a2494d5b0533cf210ce286
Aug 01 17:44:10 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:10.525597629-05:00" level=info msg="Removing stale sandbox 6a6b3bcc64e2c697710567ee61eb14bb354e57325b3076ca620b5456e0a115a3 (4b4484f0a99c676364abfd53d49cf7559f841a1a078f
Aug 01 17:44:10 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:10.533442563-05:00" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 641f93109d11535673164e58a2494d5b0533cf210ce286
Aug 01 17:44:10 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:10.762178819-05:00" level=info msg="Removing stale sandbox 74ed696661281b01521ca1d0c4fb08b172cb29d679762c1b2458f47b249f4d07 (cdf5d50c537c5e249c6c55c820027ba1761538b3c699
Aug 01 17:44:10 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:10.770406140-05:00" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 641f93109d11535673164e58a2494d5b0533cf210ce286
Aug 01 17:44:10 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:10.948145423-05:00" level=info msg="Removing stale sandbox b518c252511cab6ee86b9030a4d37f9ecc2d1beac859fb53c033f906d21537fc (9a1ba9da95aea9ed275f9ced39dfe72ebd980dd82410
Aug 01 17:44:10 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:10.956452145-05:00" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 641f93109d11535673164e58a2494d5b0533cf210ce286
Aug 01 17:44:11 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:11.209800534-05:00" level=info msg="Removing stale sandbox df4e4cb30a99ba166358bf4e1d095a679aec5c89e66f55ae59efd19b7261ad3c (d798ef02c1ecc9f76011450c6437a02b3af13f45d3e4
Aug 01 17:44:11 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:11.222853029-05:00" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 641f93109d11535673164e58a2494d5b0533cf210ce286
Aug 01 17:44:11 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:11.390821449-05:00" level=info msg="Removing stale sandbox ingress_sbox (ingress-sbox)"
Aug 01 17:44:11 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:11.397671109-05:00" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 641f93109d11535673164e58a2494d5b0533cf210ce286
Aug 01 17:44:11 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:11.895338396-05:00" level=info msg="Removing stale sandbox 1ca94b06e97e857d37a17cc31511c47bb109b1e979cc585df32952c7be08c4de (31775ce935618256b3b6ca77dbef64faf2fa06cff295
Aug 01 17:44:11 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:11.909157620-05:00" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 641f93109d11535673164e58a2494d5b0533cf210ce286
Aug 01 17:44:12 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:12.150586326-05:00" level=info msg="Removing stale sandbox b30df00a4da4201814e263e8ea3c605bae2ba83304ac77b26f932b64189d96d9 (82e202ba88ef6e388d2d3d915a7f290a7988e1d972d3
Aug 01 17:44:12 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:12.161152438-05:00" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 641f93109d11535673164e58a2494d5b0533cf210ce286
Aug 01 17:44:12 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:12.452133183-05:00" level=info msg="Removing stale sandbox 323d524c00e01dc8c2cc4e1b32ce4301ad4690a1524491ef4329229d7b283602 (448dc73adbb211196163dece41e676b17b932eb9ffd5
Aug 01 17:44:12 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:12.458955770-05:00" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 641f93109d11535673164e58a2494d5b0533cf210ce286
Aug 01 17:44:12 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:12.623321982-05:00" level=info msg="Removing stale sandbox 486f9535408e93a108b1424555b208748d04eaa0fb68c08805c5d8c4cc7351a6 (c2632467335acfc0ff5a50d6f913db82e27161777d9c
Aug 01 17:44:12 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:12.631461544-05:00" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 641f93109d11535673164e58a2494d5b0533cf210ce286
Aug 01 17:44:12 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:12.814569863-05:00" level=info msg="Removing stale sandbox f5858b47f1ddd4b6824b68f0bd903f642694e63d8504dc09d27595755e300979 (10cb5306e12cfc14d60e2c0d5a320793155896a55f70
Aug 01 17:44:12 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:12.824593255-05:00" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 641f93109d11535673164e58a2494d5b0533cf210ce286
Aug 01 17:44:13 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:13.032343568-05:00" level=info msg="Removing stale sandbox 06b51cc2d0a0cb6f2698539d97eaefe2fe4b07fd706408bb53f8350d447aea92 (d0b997b91de4634d4820d563981938e0d5937dbce69c
Aug 01 17:44:13 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:13.043237002-05:00" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 641f93109d11535673164e58a2494d5b0533cf210ce286
Aug 01 17:44:13 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:13.303070392-05:00" level=info msg="Removing stale sandbox 258090d4bc40d624035d493f14a61a39078915ca2d2fa6ef21354a53447fc1d6 (4b203d24df6c118c6bf8e03f3fd73f4efc6e8f23ba38
Aug 01 17:44:13 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:13.310205554-05:00" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 641f93109d11535673164e58a2494d5b0533cf210ce286
Aug 01 17:44:13 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:13.633221224-05:00" level=info msg="Removing stale sandbox a04bef17ab2c87776cd06247e943c563d57e276d38cbd273c7b37059ec58f8ef (88dd522ae29d7205d1cfcb039b5d258b0f8fe6bad704
Aug 01 17:44:13 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:13.643644955-05:00" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 641f93109d11535673164e58a2494d5b0533cf210ce286
Aug 01 17:44:13 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:13.926823464-05:00" level=info msg="Removing stale sandbox cad43e59b27d121638dc76527ceec213ec894747b2a43417e0eef5913b393252 (4de7e09a53111f96febe3d0321f346f3046ac6fddd3c
Aug 01 17:44:13 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:13.973267012-05:00" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 641f93109d11535673164e58a2494d5b0533cf210ce286
Aug 01 17:44:14 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:14.264704651-05:00" level=info msg="Removing stale sandbox d679cc2b0c5fe15980ae1297831fee604abb55a6ad912050ff13d84f42183a9a (d680a7fdc2884e8ad7ba3b516d755017b0ea545cffc9
Aug 01 17:44:14 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:14.280136615-05:00" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 641f93109d11535673164e58a2494d5b0533cf210ce286
Aug 01 17:44:14 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:14.455235662-05:00" level=info msg="Removing stale sandbox d99e6f44a150944bc63b6a54b45f834d8f0dd6e1d3b0d26eeeb4938390fd4b07 (1aa3f12fab19563b5556b47d94783074b82406847908
Aug 01 17:44:14 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:14.461713964-05:00" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 641f93109d11535673164e58a2494d5b0533cf210ce286
Aug 01 17:44:14 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:14.657283927-05:00" level=info msg="Removing stale sandbox f6a69018ffb484ec1c9a767ac6ba7375211079bc683b422ccfc70adb952d1567 (ac8edbd08f787b0b8e3b2e3e47f7e36156bbfdf5f8ca
Aug 01 17:44:14 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:14.667552234-05:00" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 641f93109d11535673164e58a2494d5b0533cf210ce286
Aug 01 17:44:14 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:14.967359397-05:00" level=info msg="Removing stale sandbox 0244f089f24dcc64993e0f8d6a6e601bed153495f5afd1ae1b0a0a77bfb899b4 (776cc5a40d67221e21dc2cf1995e1f7156a6933294b1
Aug 01 17:44:14 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:14.973987485-05:00" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 641f93109d11535673164e58a2494d5b0533cf210ce286
Aug 01 17:44:15 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:15.253618020-05:00" level=info msg="Removing stale sandbox 8ee8692b30a3936dd7025c679d19149cfa0ef4a1cec536b397556f3b3dca4355 (1f3bb4916f17aa98702f3ef0132d95256efd92439e02
Aug 01 17:44:15 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:15.271102925-05:00" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 641f93109d11535673164e58a2494d5b0533cf210ce286
Aug 01 17:44:15 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:15.300502135-05:00" level=info msg="There are old running containers, the network config will not take affect"
Aug 01 17:44:15 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:15.363395655-05:00" level=info msg="Loading containers: done."
Aug 01 17:44:15 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:15.396345211-05:00" level=info msg="Docker daemon" commit=f150324 graphdriver(s)=overlay2 version=18.05.0-ce
Aug 01 17:44:15 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:15.444514916-05:00" level=info msg="manager selected by agent for new session: {t23dp006ulwzw85oxhoqpm99u 192.168.4.58:2377}" module=node/agent node.id=qb1qx4jr4mpi74mc5
Aug 01 17:44:15 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:15.449277204-05:00" level=info msg="waiting 0s before registering session" module=node/agent node.id=qb1qx4jr4mpi74mc5bhtk1krj
Aug 01 17:44:16 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:16.135513306-05:00" level=error msg="fatal task error" error="task: non-zero exit (2)" module=node/agent/worker/taskmanager node.id=qb1qx4jr4mpi74mc5bhtk1krj service.id=
Aug 01 17:44:16 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:16.996384652-05:00" level=error msg="fatal task error" error="task: non-zero exit (143)" module=node/agent/worker/taskmanager node.id=qb1qx4jr4mpi74mc5bhtk1krj service.i
Aug 01 17:44:17 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:17.070870612-05:00" level=error msg="fatal task error" error="task: non-zero exit (2)" module=node/agent/worker/taskmanager node.id=qb1qx4jr4mpi74mc5bhtk1krj service.id=
Aug 01 17:44:17 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:17.518944703-05:00" level=error msg="fatal task error" error="task: non-zero exit (2)" module=node/agent/worker/taskmanager node.id=qb1qx4jr4mpi74mc5bhtk1krj service.id=
Aug 01 17:44:17 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:17.582416569-05:00" level=error msg="fatal task error" error="task: non-zero exit (1)" module=node/agent/worker/taskmanager node.id=qb1qx4jr4mpi74mc5bhtk1krj service.id=
Aug 01 17:44:17 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:17.950801858-05:00" level=info msg="Initializing Libnetwork Agent Listen-Addr=0.0.0.0 Local-addr=192.168.4.84 Adv-addr=192.168.4.84 Data-addr= Remote-addr-list=[192.168.
Aug 01 17:44:17 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:17.951221082-05:00" level=info msg="New memberlist node - Node:go2-docker-1 will use memberlist nodeID:22c7d6df5398 with config:&{NodeID:22c7d6df5398 Hostname:go2-docker
Aug 01 17:44:17 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:17.951942837-05:00" level=info msg="Node 22c7d6df5398/192.168.4.84, joined gossip cluster"
Aug 01 17:44:17 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:17.952065853-05:00" level=info msg="Daemon has completed initialization"
Aug 01 17:44:17 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:17.978205208-05:00" level=info msg="Node 22c7d6df5398/192.168.4.84, added to nodes list"
Aug 01 17:44:17 go2-docker-1 systemd[1]: Started Docker Application Container Engine.
Aug 01 17:44:17 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:17.992847593-05:00" level=info msg="API listen on /var/run/docker.sock"
Aug 01 17:44:17 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:17.994058634-05:00" level=info msg="Node 5ac4e6f4b6a5/192.168.4.83, joined gossip cluster"
Aug 01 17:44:17 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:17.994410999-05:00" level=info msg="Node 5ac4e6f4b6a5/192.168.4.83, added to nodes list"
Aug 01 17:44:17 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:17.994806803-05:00" level=info msg="Node c822629d2b9f/192.168.4.85, joined gossip cluster"
Aug 01 17:44:17 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:17.995120194-05:00" level=info msg="Node c822629d2b9f/192.168.4.85, added to nodes list"
Aug 01 17:44:17 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:17.995467412-05:00" level=info msg="Node 1f6fc2af1bce/192.168.4.86, joined gossip cluster"
Aug 01 17:44:17 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:17.995768888-05:00" level=info msg="Node 1f6fc2af1bce/192.168.4.86, added to nodes list"
Aug 01 17:44:17 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:17.996079801-05:00" level=info msg="Node c25cfb97778a/192.168.4.81, joined gossip cluster"
Aug 01 17:44:17 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:17.996385391-05:00" level=info msg="Node c25cfb97778a/192.168.4.81, added to nodes list"
Aug 01 17:44:17 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:17.996690297-05:00" level=info msg="Node 9a7a683df739/192.168.4.82, joined gossip cluster"
Aug 01 17:44:17 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:17.997086006-05:00" level=info msg="Node 9a7a683df739/192.168.4.82, added to nodes list"
Aug 01 17:44:17 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:17.997429565-05:00" level=info msg="Node 9a7fe77e02f7/192.168.4.58, joined gossip cluster"
Aug 01 17:44:17 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:17.997763403-05:00" level=info msg="Node 9a7fe77e02f7/192.168.4.58, added to nodes list"
Aug 01 17:44:18 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:18.024644404-05:00" level=warning msg="failed to deactivate service binding for container swmid-474820-production_app.1.zzk8o4rwjpm3fm10fkjirpify" error="No such contain
Aug 01 17:44:18 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:18.025487337-05:00" level=warning msg="failed to deactivate service binding for container swmid-474820-production_db.1.lu3xn5wc2cd7ilv5bssv1v8w6" error="No such containe
Aug 01 17:44:18 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:18.025824811-05:00" level=warning msg="failed to deactivate service binding for container swmid-474820-production_app.1.sijmftyxevzopbfxp66yknhqo" error="No such contain
Aug 01 17:44:18 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:18.026183316-05:00" level=warning msg="failed to deactivate service binding for container swmid-474820-production_app.1.jk6a6ys81mjsw0ofn7ci7arqn" error="No such contain
Aug 01 17:44:18 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:18.026588497-05:00" level=warning msg="failed to deactivate service binding for container swmid-474820-production_db.1.r86pdmz1k87dwdowic5begmf6" error="No such containe
Aug 01 17:44:18 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:18.027026295-05:00" level=warning msg="failed to deactivate service binding for container swmid-474820-production_db.1.zz23ykd6lg6fiiwd4xizowp6w" error="No such containe
Aug 01 17:44:20 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:20.026040491-05:00" level=warning msg="Peer operation failed:Unable to find the peerDB for nid:wehmbwcr18tt35ct5uqhf9c3t op:&{3 wehmbwcr18tt35ct5uqhf9c3t  [] [] [] [] fa
Aug 01 17:44:24 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:24-05:00" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/a2020c61d738d07c5c5731dedaa5d5852c5a8bee341f054323b40ea11f41820a/shim.sock"
Aug 01 17:44:24 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:24-05:00" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/9fc5bb4d9bbcf2f06c377dc159e5ae81755ac41615223b79d821bc8e11663b14/shim.sock"
Aug 01 17:44:25 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:25-05:00" level=error msg="connecting to shim" error=<nil> id=c2632467335acfc0ff5a50d6f913db82e27161777d9cc96cca1394ec09c7999c module="containerd/io.containerd.runtime.v
Aug 01 17:44:25 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:25-05:00" level=warning msg="unmount task rootfs" error="no such file or directory" id=c2632467335acfc0ff5a50d6f913db82e27161777d9cc96cca1394ec09c7999c module="container
Aug 01 17:44:25 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:25.901791633-05:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 17:44:25 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:25.901866959-05:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 17:44:25 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:25-05:00" level=error msg="connecting to shim" error=<nil> id=10cb5306e12cfc14d60e2c0d5a320793155896a55f708bcdeff024c937db29b0 module="containerd/io.containerd.runtime.v
Aug 01 17:44:25 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:25-05:00" level=warning msg="unmount task rootfs" error="no such file or directory" id=10cb5306e12cfc14d60e2c0d5a320793155896a55f708bcdeff024c937db29b0 module="container
Aug 01 17:44:25 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:25.997883250-05:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 17:44:25 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:25.998422350-05:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 17:44:26 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:26.019899746-05:00" level=warning msg="error locating sandbox id 486f9535408e93a108b1424555b208748d04eaa0fb68c08805c5d8c4cc7351a6: sandbox 486f9535408e93a108b1424555b208
Aug 01 17:44:26 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:26.035937027-05:00" level=warning msg="error locating sandbox id f5858b47f1ddd4b6824b68f0bd903f642694e63d8504dc09d27595755e300979: sandbox f5858b47f1ddd4b6824b68f0bd903f
Aug 01 17:44:26 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:26.140691898-05:00" level=warning msg="Ignoring Exit Event, no such exec command found" container=10cb5306e12cfc14d60e2c0d5a320793155896a55f708bcdeff024c937db29b0 exec-i
Aug 01 17:44:26 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:26.182219098-05:00" level=warning msg="Ignoring Exit Event, no such exec command found" container=c2632467335acfc0ff5a50d6f913db82e27161777d9cc96cca1394ec09c7999c exec-i
Aug 01 17:44:26 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:26.184665866-05:00" level=error msg="fatal task error" error="task: non-zero exit (143)" module=node/agent/worker/taskmanager node.id=qb1qx4jr4mpi74mc5bhtk1krj service.i
Aug 01 17:44:48 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:48.101222709-05:00" level=error msg="Bulk sync to node 9a7a683df739 timed out"
Aug 01 17:44:48 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:48.104152676-05:00" level=error msg="Bulk sync to node 9a7a683df739 timed out"
Aug 01 17:44:48 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:48.238532056-05:00" level=error msg="fatal task error" error="invalid mount config for type \"bind\": bind mount source path does not exist: /docker/swmid-474820-product
Aug 01 17:44:48 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:48.574899646-05:00" level=error msg="Bulk sync to node 1f6fc2af1bce timed out"
Aug 01 17:44:48 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:48.585103401-05:00" level=error msg="fatal task error" error="invalid mount config for type \"bind\": bind mount source path does not exist: /docker/swmid-474820-product
Aug 01 17:44:49 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:49-05:00" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/fc009ef054f22ce065d8d82c083723d0124a5639c43694c30a451734d8e77ed3/shim.sock"
Aug 01 17:44:49 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:49-05:00" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/04be62ebc14ec314ebb7e961a98563d1808b94f2a55e8a39f16da283b222997a/shim.sock"
Aug 01 17:44:49 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:49.161885935-05:00" level=warning msg="failed to deactivate service binding for container swmid-474820-production_app.1.sb2rl51rxlz5nz6fu2gg4c4r6" error="No such contain
Aug 01 17:44:49 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:49.250045088-05:00" level=error msg="fatal task error" error="invalid mount config for type \"bind\": bind mount source path does not exist: /docker/swmid-474820-product
Aug 01 17:44:49 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:49.415574184-05:00" level=warning msg="Falling back to default propagation for bind source in daemon root" container=6b6190ddc8e21f329de9b262ef066ec232ff1ae3d478680739af
Aug 01 17:44:49 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:49.417618699-05:00" level=warning msg="Falling back to default propagation for bind source in daemon root" container=6b6190ddc8e21f329de9b262ef066ec232ff1ae3d478680739af
Aug 01 17:44:49 go2-docker-1 dockerd[64752]: time="2018-08-01T17:44:49.430457622-05:00" level=warning msg="Falling back to default propagation for bind source in daemon root" container=8ae0acf744f10e0dd6832c1e2f9225eee8ec05e1a0550e75c52c

We had a number of these entries in the log at the time, as the service had been deployed but the directories had not yet been set up:

Aug 01 17:45:08 go2-docker-1 dockerd[64752]: time="2018-08-01T17:45:08.012079531-05:00" level=error msg="fatal task error" error="invalid mount config for type \"bind\": bind mount source path does not exist: /docker/swmid-474820-product
Aug 01 17:45:08 go2-docker-1 dockerd[64752]: time="2018-08-01T17:45:08.247791932-05:00" level=error msg="fatal task error" error="invalid mount config for type \"bind\": bind mount source path does not exist: /docker/swmid-474820-product
Aug 01 17:45:08 go2-docker-1 dockerd[64752]: time="2018-08-01T17:45:08.456897351-05:00" level=warning msg="Peer operation failed:Unable to find the peerDB for nid:wehmbwcr18tt35ct5uqhf9c3t op:&{3 wehmbwcr18tt35ct5uqhf9c3t  [] [] [] [] fa
Aug 01 17:45:08 go2-docker-1 dockerd[64752]: time="2018-08-01T17:45:08.683185541-05:00" level=warning msg="failed to deactivate service binding for container swmid-474820-production_app.1.mbpj3tn1iwx5q1pjw5uvgs6xl" error="No such contain
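
As a side note, this particular error is expected whenever a service task's bind-mount source path is missing on the node the task is scheduled to: unlike docker run -v, swarm services do not auto-create the source directory. A minimal workaround sketch (hypothetical path, since the real one is truncated above):

mkdir -p /docker/example-stack/data   # hypothetical bind source; must exist on every node before the stack deploys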

Describe the results you expected: The docker service should not have crashed, or should have automatically fixed the network issues when restarting.

Additional information you deem important (e.g. issue happens only occasionally): Issue has only happened once so far.

Output of docker version:

Client:
 Version:	18.05.0-ce
 API version:	1.37
 Go version:	go1.9.5
 Git commit:	f150324
 Built:	Wed May 9 22:16:25 2018
 OS/Arch:	linux/amd64
 Experimental:	false
 Orchestrator:	swarm

Server:
 Engine:
  Version:	18.05.0-ce
  API version:	1.37 (minimum version 1.12)
  Go version:	go1.9.5
  Git commit:	f150324
  Built:	Wed May 9 22:14:32 2018
  OS/Arch:	linux/amd64
  Experimental:	true

Output of docker info:

Containers: 21
 Running: 21
 Paused: 0
 Stopped: 0
Images: 19
Server Version: 18.05.0-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host ipvlan macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: active
 NodeID: qb1qx4jr4mpi74mc5bhtk1krj
 Is Manager: false
 Node Address: 192.168.4.84
 Manager Addresses:
  192.168.4.58:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 773c489c9c1b21a6d78b5c538cd395416ec50f88
runc version: 4fc53a81fb7c994640722ac585fa9ca548971871
init version: 949e6fa
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.4.0-127-generic
Operating System: Ubuntu 16.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 7.779GiB
Name: go2-docker-1
ID: O5Y3:FFU6:7MJQ:CPH2:UAPC:KG5R:E7LQ:H2HH:UPYX:MQSB:UG56:C2QF
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: true
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

Additional environment details (AWS, VirtualBox, physical, etc.):

ESXi

closed time in 20 hours

akanix42

issue commentmoby/moby

SIGSEGV [libnetwork]

I'm closing, but feel free to open a new issue if this wasn't fixed.

akanix42

comment created time in 20 hours

issue closedmoby/moby

Multiple Inheritance of Docker Images

Feature Request to support multiple inheritance in Docker.

I have a scenario with several projects, each with their own docker image, plus larger images containing more than one project. Because I can't inherit from multiple images, each larger image has to be rebuilt from scratch even though its end-state contents are the same as the single-project images. This is very wasteful of space, bandwidth and builds.

It would be a massively useful feature to be able to inherit from multiple existing docker images using FROM: re-using the existing image layers would save build time, bandwidth and storage.

Since docker images are just filesystem layers, conflict resolution could be as simple as last-write-wins by default (i.e. the last FROM line wins), with configurability for first-write-wins or a specific image layer winning, or possibly an optional numeric priority suffix on the FROM line to maintain backward compatibility.

closed time in 21 hours

HariSekhon

issue commentmoby/moby

Multiple Inheritance of Docker Images

A multi-stage Dockerfile (introduced in 17.05) should cover these use cases.
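
For example, a single multi-stage Dockerfile can pull the already-built contents of several images into one final image without rebuilding any of them. A minimal sketch, with hypothetical image names and artifact paths:

FROM debian:buster-slim
# Copy the build outputs straight out of the existing project images.
COPY --from=example/project-a:1.0 /opt/project-a /opt/project-a
COPY --from=example/project-b:1.0 /opt/project-b /opt/project-b

COPY --from= accepts any image reference, so each project's output is copied out of its published image instead of being rebuilt.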

HariSekhon

comment created time in 21 hours

issue closedmoby/moby

Kernel Panic, OS X 10.11.5 (15F34), Docker Version 1.12.0-rc3-beta18 (build: 9996)


Output of docker version:

Client:
 Version:      1.12.0-rc3
 API version:  1.24
 Go version:   go1.6.2
 Git commit:   91e29e8
 Built:        Sat Jul  2 00:09:24 2016
 OS/Arch:      darwin/amd64
 Experimental: true

Server:
 Version:      1.12.0-rc3
 API version:  1.24
 Go version:   go1.6.2
 Git commit:   876f3a7
 Built:        Tue Jul  5 02:20:13 2016
 OS/Arch:      linux/amd64
 Experimental: true

Output of docker info:

This hangs in my version of the beta and produces no output.

Additional environment details (AWS, VirtualBox, physical, etc.):

OS X Crash Log:

Anonymous UUID:       FB5840D3-D36A-55DF-46DD-AA6A54205DC7

Fri Jul  8 15:43:18 2016

*** Panic Report ***
panic(cpu 3 caller 0xffffff801f6f37dc): "Invalid queue element linkage for 0xffffff81fe6a5000: next 0xffffff81fe66e000 next->prev 0xffffff81fe6a5000 prev 0xffffff801fee0b78 prev->next 0xffffff81fe686000"@/Library/Caches/com.apple.xbs/Sources/xnu/xnu-3248.50.21/osfmk/kern/queue.h:245
Backtrace (CPU 3), Frame : Return Address
0xffffff921e6ebd30 : 0xffffff801f6dab12 
0xffffff921e6ebdb0 : 0xffffff801f6f37dc 
0xffffff921e6ebe10 : 0xffffff801f6f0dd6 
0xffffff921e6ebe50 : 0xffffff7fa0736275 
0xffffff921e6ebeb0 : 0xffffff7fa0736fcb 
0xffffff921e6ebf20 : 0xffffff801fc18c8c 
0xffffff921e6ebf60 : 0xffffff801fc286a1 
0xffffff921e6ebfb0 : 0xffffff801f7ecc66 
      Kernel Extensions in backtrace:
         com.apple.kec.pthread(1.0)[39D0B4EB-B7F4-3891-96C2-F8B886656C8A]@0xffffff7fa0730000->0xffffff7fa073cfff

BSD process name corresponding to current thread: com.docker.osxfs

Mac OS version:
15F34

Kernel version:
Darwin Kernel Version 15.5.0: Tue Apr 19 18:36:36 PDT 2016; root:xnu-3248.50.21~8/RELEASE_X86_64
Kernel UUID: 7E7B0822-D2DE-3B39-A7A5-77B40A668BC6
Kernel slide:     0x000000001f400000
Kernel text base: 0xffffff801f600000
__HIB  text base: 0xffffff801f500000
System model name: MacBookPro12,1 (Mac-E43C1C25D4880AD6)

System uptime in nanoseconds: 259969609360727
last loaded kext at 255790208351886: com.apple.driver.AppleXsanScheme   3 (addr 0xffffff7fa23c9000, size 32768)
last unloaded kext at 256200373204970: com.apple.driver.AppleXsanScheme 3 (addr 0xffffff7fa23c9000, size 32768)
loaded kexts:
org.virtualbox.kext.VBoxNetAdp  5.0.18
org.virtualbox.kext.VBoxNetFlt  5.0.18
org.virtualbox.kext.VBoxUSB 5.0.18
org.virtualbox.kext.VBoxDrv 5.0.18
com.intel.kext.intelhaxm    6.0.1
expressvpn.tun  1.0
com.globaldelight.driver.Boom2Device    1.1
at.obdev.nke.LittleSnitch   4356
com.logitech.manager.kernel.driver  5.40.1
com.apple.filesystems.smbfs 3.0.1
com.apple.driver.AGPM   110.22.0
com.apple.driver.X86PlatformShim    1.0.0
com.apple.driver.ApplePlatformEnabler   2.6.0d0
com.apple.filesystems.autofs    3.0
com.apple.driver.AppleOSXWatchdog   1
com.apple.driver.AppleGraphicsDevicePolicy  3.12.7
com.apple.driver.AppleUpstreamUserClient    3.6.1
com.apple.driver.AppleHDA   274.9
com.apple.driver.AudioAUUC  1.70
com.apple.driver.pmtelemetry    1
com.apple.iokit.IOUserEthernet  1.0.1
com.apple.driver.AppleIntelBDWGraphics  10.1.4
com.apple.iokit.BroadcomBluetoothHostControllerUSBTransport 4.4.5f3
com.apple.driver.AppleBacklight 170.8.9
com.apple.iokit.IOBluetoothSerialManager    4.4.5f3
com.apple.Dont_Steal_Mac_OS_X   7.0.0
com.apple.driver.AppleHV    1
com.apple.driver.AppleMCCSControl   1.2.13
com.apple.driver.AppleIntelBDWGraphicsFramebuffer   10.1.4
com.apple.driver.AppleLPC   3.1
com.apple.driver.AppleSMCLMU    208
com.apple.driver.AppleCameraInterface   5.46.0
com.apple.driver.AppleIntelSlowAdaptiveClocking 4.0.0
com.apple.driver.AppleThunderboltIP 3.0.8
com.apple.driver.AppleUSBCardReader 3.7.1
com.apple.AppleFSCompression.AppleFSCompressionTypeDataless 1.0.0d1
com.apple.AppleFSCompression.AppleFSCompressionTypeZlib 1.0.0
com.apple.BootCache 38
com.apple.iokit.IOAHCIBlockStorage  2.8.5
com.apple.driver.AppleAHCIPort  3.1.8
com.apple.driver.AppleTopCaseHIDEventDriver 86
com.apple.driver.AirPort.Brcm4360   1040.1.1a6
com.apple.driver.AppleSmartBatteryManager   161.0.0
com.apple.driver.AppleRTC   2.0
com.apple.driver.AppleACPIButtons   4.0
com.apple.driver.AppleHPET  1.8
com.apple.driver.AppleSMBIOS    2.1
com.apple.driver.AppleACPIEC    4.0
com.apple.driver.AppleAPIC  1.7
com.apple.nke.applicationfirewall   163
com.apple.security.quarantine   3
com.apple.security.TMSafetyNet  8
com.apple.driver.usb.IOUSBHostHIDDevice 1.0.1
com.apple.driver.usb.cdc    5.0.0
com.apple.driver.usb.AppleUSBHostCompositeDevice    1.0.1
com.apple.iokit.IOUSBUserClient 900.4.1
com.apple.kext.triggers 1.0
com.apple.driver.DspFuncLib 274.9
com.apple.kext.OSvKernDSPLib    525
com.apple.driver.AppleGraphicsControl   3.12.8
com.apple.iokit.IOSurface   108.2.1
com.apple.iokit.IOBluetoothHostControllerUSBTransport   4.4.5f3
com.apple.driver.AppleBacklightExpert   1.1.0
com.apple.driver.CoreCaptureResponder   1
com.apple.driver.AppleHDAController 274.9
com.apple.iokit.IOHDAFamily 274.9
com.apple.iokit.IOAudioFamily   204.4
com.apple.vecLib.kext   1.2.0
com.apple.driver.AppleSMBusController   1.0.14d1
com.apple.iokit.IONDRVSupport   2.4.1
com.apple.AppleGraphicsDeviceControl    3.12.8
com.apple.iokit.IOAcceleratorFamily2    205.10
com.apple.driver.X86PlatformPlugin  1.0.0
com.apple.driver.IOPlatformPluginFamily 6.0.0d7
com.apple.driver.AppleIntelLpssUARTCommon   2.0.60
com.apple.driver.AppleSMC   3.1.9
com.apple.iokit.IOGraphicsFamily    2.4.1
com.apple.iokit.IOSerialFamily  11
com.apple.iokit.IOSlowAdaptiveClockingFamily    1.0.0
com.apple.iokit.IOSCSIBlockCommandsDevice   3.7.7
com.apple.iokit.IOUSBMassStorageDriver  1.0.0
com.apple.iokit.IOSCSIArchitectureModelFamily   3.7.7
com.apple.driver.usb.networking 5.0.0
com.apple.driver.CoreStorage    517.50.1
com.apple.driver.AppleThunderboltDPInAdapter    4.1.3
com.apple.driver.AppleThunderboltDPAdapterFamily    4.1.3
com.apple.driver.AppleThunderboltPCIDownAdapter 2.0.2
com.apple.iokit.IOAHCIFamily    2.8.1
com.apple.driver.AppleHIDKeyboard   181
com.apple.driver.AppleMultitouchDriver  304.12
com.apple.driver.AppleHIDTransport  5
com.apple.driver.AppleHSSPIHIDDriver    43
com.apple.driver.AppleThunderboltNHI    4.0.4
com.apple.iokit.IOThunderboltFamily 6.0.2
com.apple.iokit.IO80211Family   1110.26
com.apple.driver.mDNSOffloadUserClient  1.0.1b8
com.apple.iokit.IONetworkingFamily  3.2
com.apple.driver.corecapture    1.0.4
com.apple.driver.AppleHSSPISupport  43
com.apple.driver.AppleIntelLpssSpiController    2.0.60
com.apple.driver.AppleIntelLpssGspi 2.0.60
com.apple.driver.AppleIntelLpssDmac 2.0.60
com.apple.driver.usb.AppleUSBXHCIPCI    1.0.1
com.apple.driver.usb.AppleUSBXHCI   1.0.1
com.apple.driver.AppleEFINVRAM  2.0
com.apple.driver.AppleEFIRuntime    2.0
com.apple.iokit.IOSMBusFamily   1.1
com.apple.security.sandbox  300.0
com.apple.kext.AppleMatch   1.0.0d1
com.apple.driver.AppleKeyStore  2
com.apple.driver.AppleMobileFileIntegrity   1.0.5
com.apple.driver.AppleCredentialManager 1.0
com.apple.driver.DiskImages 417.4
com.apple.iokit.IOStorageFamily 2.1
com.apple.driver.IOBluetoothHIDDriver   4.4.5f3
com.apple.iokit.IOBluetoothFamily   4.4.5f3
com.apple.iokit.IOReportFamily  31
com.apple.iokit.IOUSBHIDDriver  900.4.1
com.apple.iokit.IOHIDFamily 2.0.0
com.apple.driver.AppleFDEKeyStore   28.30
com.apple.iokit.IOUSBFamily 900.4.1
com.apple.iokit.IOUSBHostFamily 1.0.1
com.apple.driver.AppleUSBHostMergeProperties    1.0.1
com.apple.driver.AppleACPIPlatform  4.0
com.apple.iokit.IOPCIFamily 2.9
com.apple.iokit.IOACPIFamily    1.4
com.apple.kec.pthread   1
com.apple.kec.Libm  1
com.apple.kec.corecrypto    1.0
Model: MacBookPro12,1, BootROM MBP121.0167.B16, 2 processors, Intel Core i5, 2.9 GHz, 16 GB, SMC 2.28f7
Graphics: Intel Iris Graphics 6100, Intel Iris Graphics 6100, Built-In
Memory Module: BANK 0/DIMM0, 8 GB, DDR3, 1867 MHz, 0x80AD, 0x483943434E4E4E434C544D4C41522D4E5544
Memory Module: BANK 1/DIMM0, 8 GB, DDR3, 1867 MHz, 0x80AD, 0x483943434E4E4E434C544D4C41522D4E5544
AirPort: spairport_wireless_card_type_airport_extreme (0x14E4, 0x133), Broadcom BCM43xx 1.0 (7.21.95.175.1a6)
Bluetooth: Version 4.4.5f3 17904, 3 services, 27 devices, 1 incoming serial ports
Network Service: Wi-Fi, AirPort, en0
Serial ATA Device: APPLE SSD SM0512G, 500.28 GB
USB Device: USB 3.0 Bus
USB Device: Bluetooth USB Host Controller
USB Device: USB Receiver
USB Device: USB Optical Mouse
Thunderbolt Bus: MacBook Pro, Apple Inc., 27.1

Steps to reproduce the issue: Unable to reproduce

Describe the results you received: Kernel panic, black screen, machine reboot

Describe the results you expected: N/A

Additional information you deem important (e.g. issue happens only occasionally): Only one occurrence

closed time in 21 hours

cguess

issue commentmoby/moby

Kernel Panic, OS X 10.11.5 (15F34), Docker Version 1.12.0-rc3-beta18 (build: 9996)

I assume this was already resolved.

If not please open an issue on https://github.com/docker/for-mac/issues

cguess

comment created time in 21 hours

issue closedmoby/moby

Support for Debian Wheezy

I followed the installation steps at https://docs.docker.com/installation/debian/ for Debian Wheezy. However, I was unable to install the package docker-engine because of this error:

The following packages have unmet dependencies: docker-engine : Depends: init-system-helpers (>= 1.18~) but it is not installable

In issue https://github.com/docker/machine/issues/1607 I read that if support for Wheezy is requested, an issue should be opened, which I'm doing with this one.

We have a large cluster currently running many Debian Wheezy machines. They won't be upgraded to Debian 8 in the foreseeable future, so support for Debian 7 would be highly appreciated.

closed time in 21 hours

bjoernjacobs

issue commentmoby/moby

Support for Debian Wheezy

Support for Wheezy was dropped after the release of 18.03: https://download.docker.com/linux/debian/dists/wheezy/pool/stable/amd64/

bjoernjacobs

comment created time in 21 hours

issue closedmoby/moby

Proposal: Improvements for creating images without Dockerfile

Hello,

Currently there are two ways of creating images with Docker:

  1. The popular Dockerfile
  2. Creating a new container, making some changes, and committing this container to an image

For various reasons, I really don't like using a Dockerfile (the reasons are out of scope here, but I can comment more on this subject if you want) and I prefer to use the commit method.

However, using this workflow long term and in a team has serious downsides: the history of an image is too sparse (see #9785 for example) and there is no way to diff two images (only two containers at the moment).

Here are a few proposals to improve this workflow:

Commit message should be mandatory

On a new commit you would have to set a message describing what you changed; today the message can be empty. Making it mandatory will force people to set something, and we can hope it will be something useful. (Imagine the mess in a repository if git commit did not require a message.)

Add mandatory author information for a commit

A commit message specifies the what and/or how of the change. For collaboration we also need the who (mainly to make the author wear a dunce cap when their commit is bad); for that purpose a commit in docker should require the name and email of the commit author.

To avoid entering too much information at each commit, the docker client could use environment variables to set the default author name and email.
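For reference, docker commit already accepts both fields as optional flags; the proposal would only make them mandatory. A typical invocation looks like this (the container and image names are illustrative):

docker commit \
  -m "Install nginx and tweak its config" \
  -a "Jane Doe <jane@example.com>" \
  mycontainer myimage:v2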

Add information to history output

Each line in docker history should show the commit's message, author name, and author email (see also #9785).

Diff between 2 images

History is great but not enough; when reviewing a commit we also need to see which files have changed and how. For a start this diff could work exactly like a diff between two containers, but it can also be improved:

Binary (actual) diff

A diff between two binary files is not possible, so only the status of the file (added/modified/deleted) should be shown (like the current diff on containers), as illustrated below.
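For comparison, the existing container-level diff already reports exactly this status information (the paths here are illustrative):

$ docker diff mycontainer
C /etc
C /etc/nginx
A /etc/nginx/nginx.conf
D /tmp/build.log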

Textual diff

When the file is text (like a configuration file), it would be nice if the diff could show more than the binary one: which lines have changed, the before and after content, and so on, like a git diff.

Docker hub and repository integration

With all this information, Docker Hub or any other image storage service would be able to show better insight into an image and gain trust from users of images created without a Dockerfile.

closed time in 21 hours

joelwurtz

issue commentmoby/moby

Proposal: Improvements for creating images without Dockerfile

Custom syntax can now be implemented with the # syntax = <BuildKit frontend image> directive. Closing.
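For example, a minimal Dockerfile opting into an alternative frontend might look like this (the frontend tag and the RUN --mount feature are shown only as an illustration of what the directive enables):

# syntax = docker/dockerfile:1.1-experimental
FROM alpine:3.10
# RUN --mount is provided by the experimental frontend named above,
# not by the classic builder
RUN --mount=type=cache,target=/var/cache/apk apk add git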

joelwurtz

comment created time in 21 hours

pull request commentcontainerd/cri

Add correct paths for cri's systemd config files in CentOS.

Please sign the commit (git commit -a -s --amend)

georgegoh

comment created time in 21 hours

Pull request review commentcontainerd/cri

Add correct paths for cri's systemd config files in CentOS.

         name: net.ipv4.ip_forward
         value: 1

-    - name: "Check kubelet args in kubelet config"
+    - name: "Check kubelet args in kubelet config (Ubuntu)"
       shell: grep "^Environment=\"KUBELET_EXTRA_ARGS=" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf || true
       register: check_args
+      when: ansible_distribution == "Ubuntu"

-    - name: "Add runtime args in kubelet conf"
+    - name: "Add runtime args in kubelet conf (Ubuntu)"
       lineinfile:
         dest: "/etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
         line: "Environment=\"KUBELET_EXTRA_ARGS= --runtime-cgroups=/system.slice/containerd.service --container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock\""
         insertafter: '\[Service\]'
-      when: check_args.stdout == ""
+      when: andisble_distribution == "Ubuntu" and check_args.stdout == ""

typo

georgegoh

comment created time in 21 hours

Pull request review commentcontainerd/aufs

Fix `skipping test that requires root` issue

 before_script:
 script:
     - DCO_VERBOSITY=-q ../project/script/validate/dco
     - ../project/script/validate/fileheader ../project/
-    - go test -v -race -covermode=atomic -coverprofile=coverage.txt ./...
+    - sudo $GOROOT/bin/go test -test.root -test.v -race -covermode=atomic -coverprofile=coverage.txt ./...

So the previous version didn't have that issue, because sudo go isn't needed for go test -c.

chenrui333

comment created time in 21 hours

Pull request review commentcontainerd/aufs

Fix `skipping test that requires root` issue

 before_script:
 script:
     - DCO_VERBOSITY=-q ../project/script/validate/dco
     - ../project/script/validate/fileheader ../project/
-    - go test -v -race -covermode=atomic -coverprofile=coverage.txt ./...
+    - sudo $GOROOT/bin/go test -test.root -test.v -race -covermode=atomic -coverprofile=coverage.txt ./...

Why not simply use go test -c as in former revisions?

chenrui333

comment created time in a day

Pull request review commentcontainerd/zfs

Improve travis CI and fix go test issue

-dist: xenial
-sudo: required
+dist: bionic
 language: go
 go:
-  - "1.12.x"
   - "1.13.x"

-go_import_path: github.com/containerd/zfs
-
 install:
   - sudo apt-get install -y zfsutils-linux && sudo modprobe zfs
-  - cd $GOPATH/src/github.com/containerd/zfs
-  - GO111MODULE="on" go mod vendor
-  - go get -u github.com/vbatts/git-validation
-  - go get -u github.com/kunalkushwaha/ltag
+  - go mod vendor
+  - pushd ..; go get -u github.com/vbatts/git-validation; popd
+  - pushd ..; go get -u github.com/kunalkushwaha/ltag; popd

 before_script:
   - pushd ..; git clone https://github.com/containerd/project; popd

 script:
   - DCO_VERBOSITY=-q ../project/script/validate/dco
   - ../project/script/validate/fileheader ../project/
-  - go test -race -covermode=atomic -c .
-  - sudo ./zfs.test -test.root -test.v -test.coverprofile=coverage.txt
+  - sudo $GOROOT/bin/go test -test.root -test.v -race -covermode=atomic -coverprofile=coverage.txt ./...

What is the advantage of this change?

chenrui333

comment created time in a day

issue closedmoby/moby

Add option to docker build to output build statistics

It would be really nice to have the option to show build statistics for a build when it completes. For example:

  • Build time
  • CPU usage
  • Memory used (Max)
  • Network usage (in/out)

I know most of this information isn't available at the moment, but it would be nice to show the information we do have now. The one I want/need the most is the memory used for a build.

We can get the memory used from the build container's stats before it is destroyed (in /cgroup/<container>/). The list of available values is at https://www.kernel.org/doc/Documentation/cgroups/memory.txt
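To illustrate the kind of read involved, here is a minimal Go sketch, assuming cgroup v1 with the memory controller mounted at /sys/fs/cgroup/memory and containers grouped under a "docker" parent; both assumptions vary by distribution and cgroup driver:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
	"strings"
)

// maxMemoryBytes reads the peak memory usage recorded by a container's
// memory cgroup. This must happen before the container (and its cgroup)
// is destroyed. The mount point and "docker" parent group are assumptions.
func maxMemoryBytes(containerID string) (uint64, error) {
	p := filepath.Join("/sys/fs/cgroup/memory/docker", containerID,
		"memory.max_usage_in_bytes")
	data, err := os.ReadFile(p)
	if err != nil {
		return 0, err
	}
	return strconv.ParseUint(strings.TrimSpace(string(data)), 10, 64)
}

func main() {
	max, err := maxMemoryBytes(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("peak memory: %d bytes\n", max)
}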

The hard part about this is that the build process actually involves many containers, so we would need to combine the different results into one result; I'm not sure how to do that.

This is just a rough idea, and we need to figure out a lot of details. If you have some ideas on how to make this better, please let me know.

closed time in a day

kencochrane

issue commentmoby/moby

Add option to docker build to output build statistics

This should be discussed in the BuildKit repo.

If somebody has a detailed proposal, feel free to open an issue at https://github.com/moby/buildkit/issues

kencochrane

comment created time in a day

issue closedmoby/moby

docker panics encoding JSON

Describe the results you received:

Oct 13 01:45:11 service-prd96-15 dockerd: panic: runtime error: invalid memory address or nil pointer dereference [recovered]
Oct 13 01:45:11 service-prd96-15 dockerd: panic: runtime error: invalid memory address or nil pointer dereference
Oct 13 01:45:11 service-prd96-15 dockerd: [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x7c13ad]
Oct 13 01:45:11 service-prd96-15 dockerd: goroutine 68523535 [running]:
Oct 13 01:45:11 service-prd96-15 dockerd: panic(0x16d5a00, 0xc42000c060)
Oct 13 01:45:11 service-prd96-15 dockerd: /usr/local/go/src/runtime/panic.go:500 +0x1a1
Oct 13 01:45:11 service-prd96-15 dockerd: encoding/json.(*encodeState).marshal.func1(0xc4257ef550)
Oct 13 01:45:11 service-prd96-15 dockerd: /usr/local/go/src/encoding/json/encode.go:272 +0x1b9
Oct 13 01:45:11 service-prd96-15 dockerd: panic(0x16d5a00, 0xc42000c060)
Oct 13 01:45:11 service-prd96-15 dockerd: /usr/local/go/src/runtime/panic.go:458 +0x243
Oct 13 01:45:11 service-prd96-15 dockerd: encoding/json.(*encodeState).string(0xc421a4e160, 0x0, 0x1, 0x1, 0x1)
Oct 13 01:45:11 service-prd96-15 dockerd: /usr/local/go/src/encoding/json/encode.go:852 +0x6d
Oct 13 01:45:11 service-prd96-15 dockerd: encoding/json.stringEncoder(0xc421a4e160, 0x1630320, 0xc42d6bf2c0, 0x198, 0x100)
Oct 13 01:45:11 service-prd96-15 dockerd: /usr/local/go/src/encoding/json/encode.go:564 +0x228
Oct 13 01:45:11 service-prd96-15 dockerd: encoding/json.(*structEncoder).encode(0xc4209fc4b0, 0xc421a4e160, 0x18abb20, 0xc42d6bf2c0, 0x199, 0x100)
Oct 13 01:45:11 service-prd96-15 dockerd: /usr/local/go/src/encoding/json/encode.go:601 +0x253
Oct 13 01:45:11 service-prd96-15 dockerd: encoding/json.(*structEncoder).(encoding/json.encode)-fm(0xc421a4e160, 0x18abb20, 0xc42d6bf2c0, 0x199, 0xc42d6b0100)
Oct 13 01:45:11 service-prd96-15 dockerd: /usr/local/go/src/encoding/json/encode.go:615 +0x64
Oct 13 01:45:11 service-prd96-15 dockerd: encoding/json.(*ptrEncoder).encode(0xc420d8ab40, 0xc421a4e160, 0x1740c40, 0xc42d6bf2c0, 0x16, 0x100)
Oct 13 01:45:11 service-prd96-15 dockerd: /usr/local/go/src/encoding/json/encode.go:742 +0xe3
Oct 13 01:45:11 service-prd96-15 dockerd: encoding/json.(*ptrEncoder).(encoding/json.encode)-fm(0xc421a4e160, 0x1740c40, 0xc42d6bf2c0, 0x16, 0xc44b270100)
Oct 13 01:45:11 service-prd96-15 dockerd: /usr/local/go/src/encoding/json/encode.go:747 +0x64
Oct 13 01:45:11 service-prd96-15 dockerd: encoding/json.(*mapEncoder).encode(0xc420d8ab48, 0xc421a4e160, 0x16c0560, 0xc424751918, 0x195, 0x100)
Oct 13 01:45:11 service-prd96-15 dockerd: /usr/local/go/src/encoding/json/encode.go:646 +0x542
Oct 13 01:45:11 service-prd96-15 dockerd: encoding/json.(*mapEncoder).(encoding/json.encode)-fm(0xc421a4e160, 0x16c0560, 0xc424751918, 0x195, 0x100)
Oct 13 01:45:11 service-prd96-15 dockerd: /usr/local/go/src/encoding/json/encode.go:662 +0x64
Oct 13 01:45:11 service-prd96-15 dockerd: encoding/json.(*structEncoder).encode(0xc420931740, 0xc421a4e160, 0x18c6ba0, 0xc424751800, 0x199, 0x100)
Oct 13 01:45:11 service-prd96-15 dockerd: /usr/local/go/src/encoding/json/encode.go:601 +0x253
Oct 13 01:45:11 service-prd96-15 dockerd: encoding/json.(*structEncoder).(encoding/json.encode)-fm(0xc421a4e160, 0x18c6ba0, 0xc424751800, 0x199, 0xc424750100)
Oct 13 01:45:11 service-prd96-15 dockerd: /usr/local/go/src/encoding/json/encode.go:615 +0x64
Oct 13 01:45:11 service-prd96-15 dockerd: encoding/json.(*ptrEncoder).encode(0xc420d8ac20, 0xc421a4e160, 0x18db200, 0xc424751800, 0x16, 0x18d0100)
Oct 13 01:45:11 service-prd96-15 dockerd: /usr/local/go/src/encoding/json/encode.go:742 +0xe3
Oct 13 01:45:11 service-prd96-15 dockerd: encoding/json.(*ptrEncoder).(encoding/json.encode)-fm(0xc421a4e160, 0x18db200, 0xc424751800, 0x16, 0xc424750100)
Oct 13 01:45:11 service-prd96-15 dockerd: /usr/local/go/src/encoding/json/encode.go:747 +0x64
Oct 13 01:45:11 service-prd96-15 dockerd: encoding/json.(*encodeState).reflectValue(0xc421a4e160, 0x18db200, 0xc424751800, 0x16, 0x100)
Oct 13 01:45:11 service-prd96-15 dockerd: /usr/local/go/src/encoding/json/encode.go:307 +0x82
Oct 13 01:45:11 service-prd96-15 dockerd: encoding/json.(*encodeState).marshal(0xc421a4e160, 0x18db200, 0xc424751800, 0x100, 0x0, 0x0)
Oct 13 01:45:11 service-prd96-15 dockerd: /usr/local/go/src/encoding/json/encode.go:280 +0xb8
Oct 13 01:45:11 service-prd96-15 dockerd: encoding/json.(*Encoder).Encode(0xc4257ef608, 0x18db200, 0xc424751800, 0x7fc48061ef30, 0xc42a63b410)
Oct 13 01:45:11 service-prd96-15 dockerd: /usr/local/go/src/encoding/json/stream.go:193 +0x8e
Oct 13 01:45:11 service-prd96-15 dockerd: github.com/docker/docker/container.(*Container).ToDisk(0xc424751800, 0x0, 0x0)
Oct 13 01:45:11 service-prd96-15 dockerd: /root/rpmbuild/BUILD/docker-engine/.gopath/src/github.com/docker/docker/container/container.go:164 +0x17b
Oct 13 01:45:11 service-prd96-15 dockerd: github.com/docker/docker/daemon.(*Daemon).setHostConfig(0xc420472800, 0xc424751800, 0xc425b21c00, 0x0, 0x0)
Oct 13 01:45:11 service-prd96-15 dockerd: /root/rpmbuild/BUILD/docker-engine/.gopath/src/github.com/docker/docker/daemon/container.go:211 +0x118
Oct 13 01:45:11 service-prd96-15 dockerd: github.com/docker/docker/daemon.(*Daemon).create(0xc420472800, 0xc43fd0c0a0, 0x46, 0xc42a5437c0, 0xc425b21c00, 0xc420fc6650, 0x0, 0x1, 0x0, 0x0, ...)
Oct 13 01:45:11 service-prd96-15 dockerd: /root/rpmbuild/BUILD/docker-engine/.gopath/src/github.com/docker/docker/daemon/create.go:132 +0x4e3
Oct 13 01:45:11 service-prd96-15 dockerd: github.com/docker/docker/daemon.(*Daemon).containerCreate(0xc420472800, 0xc43fd0c0a0, 0x46, 0xc42a5437c0, 0xc425b21c00, 0xc420fc6650, 0x0, 0xc420fc6601, 0x0, 0x0, ...)
Oct 13 01:45:11 service-prd96-15 dockerd: /root/rpmbuild/BUILD/docker-engine/.gopath/src/github.com/docker/docker/daemon/create.go:61 +0x1bf
Oct 13 01:45:11 service-prd96-15 dockerd: github.com/docker/docker/daemon.(*Daemon).CreateManagedContainer(0xc420472800, 0xc43fd0c0a0, 0x46, 0xc42a5437c0, 0xc425b21c00, 0xc420fc6650, 0x0, 0x0, 0x0, 0x0, ...)
Oct 13 01:45:11 service-prd96-15 dockerd: /root/rpmbuild/BUILD/docker-engine/.gopath/src/github.com/docker/docker/daemon/create.go:29 +0xa1
Oct 13 01:45:11 service-prd96-15 dockerd: github.com/docker/docker/daemon/cluster/executor/container.(*containerAdapter).create(0xc42a157ad0, 0x2444520, 0xc4484f1280, 0x2444501, 0xc42c17c040)
Oct 13 01:45:11 service-prd96-15 dockerd: /root/rpmbuild/BUILD/docker-engine/.gopath/src/github.com/docker/docker/daemon/cluster/executor/container/adapter.go:224 +0x198
Oct 13 01:45:11 service-prd96-15 dockerd: github.com/docker/docker/daemon/cluster/executor/container.(*controller).Prepare(0xc42f5d1ef0, 0x2444520, 0xc4484f1280, 0xc4203a4490, 0x2449820)
Oct 13 01:45:11 service-prd96-15 dockerd: /root/rpmbuild/BUILD/docker-engine/.gopath/src/github.com/docker/docker/daemon/cluster/executor/container/controller.go:152 +0x286
Oct 13 01:45:11 service-prd96-15 dockerd: github.com/docker/docker/vendor/github.com/docker/swarmkit/agent/exec.Do(0x2444520, 0xc4484f1280, 0xc4285e5a20, 0x2449820, 0xc42f5d1ef0, 0x0, 0x0, 0x0)
Oct 13 01:45:11 service-prd96-15 dockerd: /root/rpmbuild/BUILD/docker-engine/.gopath/src/github.com/docker/docker/vendor/github.com/docker/swarmkit/agent/exec/controller.go:308 +0x995
Oct 13 01:45:11 service-prd96-15 dockerd: github.com/docker/docker/vendor/github.com/docker/swarmkit/agent.(*taskManager).run.func2(0x24445e0, 0xc429219380, 0x0, 0x0)
Oct 13 01:45:11 service-prd96-15 dockerd: /root/rpmbuild/BUILD/docker-engine/.gopath/src/github.com/docker/docker/vendor/github.com/docker/swarmkit/agent/task.go:134 +0x142
Oct 13 01:45:11 service-prd96-15 dockerd: github.com/docker/docker/vendor/github.com/docker/swarmkit/agent.runctx(0x24445e0, 0xc429219380, 0xc4312fd3e0, 0xc43e6c82a0, 0xc4484f1500)
Oct 13 01:45:11 service-prd96-15 dockerd: /root/rpmbuild/BUILD/docker-engine/.gopath/src/github.com/docker/docker/vendor/github.com/docker/swarmkit/agent/helpers.go:9 +0x55
Oct 13 01:45:11 service-prd96-15 dockerd: created by github.com/docker/docker/vendor/github.com/docker/swarmkit/agent.(*taskManager).run
Oct 13 01:45:11 service-prd96-15 dockerd: /root/rpmbuild/BUILD/docker-engine/.gopath/src/github.com/docker/docker/vendor/github.com/docker/swarmkit/agent/task.go:150 +0x5e4
Oct 13 01:45:12 service-prd96-15 systemd: docker.service: main process exited, code=exited, status=2/INVALIDARGUMENT

Output of docker version:

Client:
 Version:      17.03.1-ce
 API version:  1.27
 Go version:   go1.7.5
 Git commit:   c6d412e
 Built:        Mon Mar 27 17:05:44 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.03.1-ce
 API version:  1.27 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   c6d412e
 Built:        Mon Mar 27 17:05:44 2017
 OS/Arch:      linux/amd64
 Experimental: false

closed time in a day

carlory

issue commentmoby/moby

docker panics encoding JSON

Should have been fixed?

carlory

comment created time in a day

issue closedmoby/moby

[Proposal] Use accelerator in docker container

Hi, all

#23917 introduced nvidia-docker, which provides a convenient way to use Nvidia GPU accelerators in containers. To support more hardware accelerators (e.g. AMD GPU, FPGA, QAT) in containers, we propose a common way to provide this feature with docker plugins.

Abstract

Accelerators can be considered a special kind of resource like cpu and memory, though they may be backed by other basic resources (e.g., volumes and devices in nvidia-docker). The implementation of an accelerator should be transparent to containers, which means a container should not need to know which devices or runtime libs are required, or whether they match. It is the accelerator driver's responsibility to handle all these things.

An accelerator request is made up of accelerator runtime descriptions, and descriptions only. A Docker image does not need to hold accelerator runtime libraries, so it is not bound to any specific device. Accelerator requests can be image labels or docker run arguments.

Accelerator support should be extensible through the docker plugin mechanism. Vendors can build their own accelerator plugins to express:

  • which accelerator runtimes (e.g. cuda, opencl, rsa) it supports
  • how to allocate/release accelerator resources
  • how to prepare the required environment, e.g. devices, libs, etc.
  • how to reset/reuse accelerator resources
  • how to collect or preprocess status data
  • ...

When the docker engine receives a request to run a container with an accelerator, it will:

  • select the right accelerator plugin
  • allocate accelerator resources from the plugin
  • update the container configuration according to information provided by the plugin.

The lifecycle of an accelerator is hard to determine: some may die with the container while others may survive (to be used by another container), and accelerator sharing may be needed, so a docker subcommand may be necessary to maintain accelerator status.

How to request accelerators in a container

Apps may request accelerators through runtime descriptions. Apps are implemented against specific runtimes like 'cuda:8.0' or 'opencl', so this description is the accelerator request. It can be supplied in one of the following ways:

  • docker image label

    runtime requests are determined when the docker image is built, so users can store this information in images through a label:

    LABEL runtime="gpu0=cuda:8.0;gpu1=opencl:2.2" // means nothing ,just for example

    LABEL runtime="fpga0=com.company/fpga/sha256:1.0"

    In the first example, the runtime label requests two accelerators: one with cuda:8.0 and the other with opencl:2.2.

    The second example will allocate one FPGA accelerator that supports sha256:1.0 from com.company.

  • docker run arguments

    Users can also specify accelerators when creating/running a container, like this:

    # docker create --accel cuda:8.0 nvidia/digits:4.0

    # docker create --accel fpga0=com.company/fpga/compress:1.0 --accel fpga1=com.company/fpga/decomp:1.0 XXXX

    or overwrite an accelerator runtime LABEL defined in the image:

    # docker create --accel gpu0=cuda:8.0 nvidia/digits:4.0

    Accelerators explicitly allocated by --accel options are persistent, which means they are reserved for the container until the container is removed. Implicit accelerators defined by an image LABEL are non-persistent, and are only reserved for the container while it is running. This distinction is made to prevent wasting resources on unused, stopped containers. If accelerators should be reserved even when the container is stopped, they should be declared as persistent using the --accel option.

How to parse requests into accelerators

The docker engine can query all the accelerator driver plugins with the requested runtime description. Each driver scans its capabilities (all runtimes it supports) and checks whether it can provide the specific runtimes. The engine then asks the driver to allocate accelerators.

Accelerator abstraction in docker

An accelerator in a container has three attributes:

  • volume (to provide runtime libraries),
  • device,
  • env (maybe not necessary, but it provides flexibility)

Drivers provide this information about an accelerator to the docker engine, and the container configuration for volumes, devices, and env is updated accordingly.

Interfaces provided by accelerator (driver) plugin

The accelerator drivers need to support the basic accelerator operations, including:

  • Query // Is this runtime supported by this driver?
  • AllocateAccel // Create a new accelerator; corresponds to docker create.
  • PrepareAccel // Get the accelerator ready (e.g. program a bitstream into an FPGA) and return volume, device, and env; corresponds to docker start.
  • ResetAccel // Reset the accelerator; corresponds to docker stop.
  • ReleaseAccel // Remove the accelerator; corresponds to docker rm.

Accelerator drivers should implement all these functions to cover accelerator management across the container lifecycle. The docker engine will call them at the right points to use accelerators in a container.

These functions are easy to expose through docker plugins. Accelerator vendors can provide support for their devices by implementing all the accelerator plugin interfaces, roughly as sketched below.
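A rough Go sketch of what such a plugin interface could look like; all names and signatures here are illustrative, not an agreed API:

// Package accelerator sketches the driver plugin interface proposed above.
package accelerator

// Runtime is a requested accelerator runtime description,
// e.g. "cuda:8.0" or "com.company/fpga/sha256:1.0".
type Runtime string

// Resources lists what the engine must inject into a container
// so the accelerator is usable from inside it.
type Resources struct {
	Volumes []string          // volumes providing runtime libraries
	Devices []string          // device nodes, e.g. /dev/nvidia0
	Env     map[string]string // optional environment variables
}

// Driver is what each vendor's accelerator plugin would implement.
type Driver interface {
	// Query reports whether this driver supports the given runtime.
	Query(r Runtime) (bool, error)
	// AllocateAccel reserves an accelerator (docker create).
	AllocateAccel(r Runtime) (id string, err error)
	// PrepareAccel gets the accelerator ready, e.g. programs an FPGA
	// bitstream, and returns the resources to inject (docker start).
	PrepareAccel(id string) (Resources, error)
	// ResetAccel returns the accelerator to a clean state (docker stop).
	ResetAccel(id string) error
	// ReleaseAccel frees the accelerator (docker rm).
	ReleaseAccel(id string) error
}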

Accelerator management in docker

As mentioned above, we may need an accelerator subcommand to maintain accelerator status and to create/list/remove accelerators, just like network and volume (not sure if this is really necessary); a hypothetical sketch follows.
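If such a subcommand existed, it might look like this, mirroring docker network / docker volume (entirely hypothetical; nothing here is implemented):

# docker accel ls
# docker accel create --driver com.company/fpga sha256:1.0
# docker accel inspect <accel-id>
# docker accel rm <accel-id>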

ping @justincormack @flx42 @3XX0 , please take a look, thks :smiley:

cc @forever043

closed time in a day

x1022as

issue commentmoby/moby

[Proposal] Use accelerator in docker container

Docker 19.03 has docker run --gpus
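Typical usage looks like this (the CUDA image tag is only an example):

docker run --gpus all nvidia/cuda:9.0-base nvidia-smi
docker run --gpus '"device=0,1"' nvidia/cuda:9.0-base nvidia-smi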

x1022as

comment created time in a day

issue closedmoby/moby

docker run has no output in dind via exposed port

Steps to reproduce the issue:

  1. Start dind: docker run --privileged --name dind -p 4000:4000 -d docker:1.12.0-rc3-dind --host tcp://0.0.0.0:4000
  2. Run: docker -H :4000 run busybox echo 1

Describe the results you received:

No output

Describe the results you expected:

1

Additional information you deem important (e.g. issue happens only occasionally):

These work:

docker -H :4000 run -i busybox echo 1
docker -H $(docker inspect -f '{{.NetworkSettings.Networks.bridge.IPAddress}}' dind):4000 run busybox echo 1

closed time in a day

tonistiigi

issue commentmoby/moby

docker run has no output in dind via exposed port

Should have been resolved?

tonistiigi

comment created time in a day

issue closedmoby/moby

AppArmor prevents java process from starting

The attached Dockerfile installs and tries to start ambari-server inside a Kubuntu 14.04-based container. With AppArmor in enforce mode I get the following error:

$ docker build -t apparmortest .
(...)
$ docker run apparmortest
(...)
Waiting for server start.........
ERROR: Exiting with exit code -1. 
REASON: Ambari Server java process died with exitcode -1. Check /var/log/ambari-server/ambari-server.out for more information.

I cannot see anything useful in the mentioned logfile. The problem seems to be AppArmor-related: after switching it to complain mode with sudo aa-complain /etc/apparmor.d/docker (before that, one specific line in /etc/apparmor.d/docker needs to be commented out since it confuses aa-complain), ambari-server starts properly:

$ docker run apparmortest
(...)
Waiting for server start....................
Ambari Server 'start' completed successfully.

I'm using devicemapper in order to avoid any aufs+java related problems. Could you have a look at this? Should the docker AppArmor rules be altered, or is this somehow expected (i.e. ambari-server does something extra that it shouldn't)?

Docker file: dockerfile.zip

Standard info:

$ uname -a
Linux  3.19.0-43-generic #49~14.04.1-Ubuntu SMP Thu Dec 31 15:44:49 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux


$ docker info
Containers: 28
Images: 48
Server Version: 1.9.1
Storage Driver: devicemapper
 Pool Name: docker-8:4-148510495-pool
 Pool Blocksize: 65.54 kB
 Base Device Size: 107.4 GB
 Backing Filesystem: ext4
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data Space Used: 6.812 GB
 Data Space Total: 107.4 GB
 Data Space Available: 100.6 GB
 Metadata Space Used: 6.697 MB
 Metadata Space Total: 2.147 GB
 Metadata Space Available: 2.141 GB
 Udev Sync Supported: true
 Deferred Removal Enabled: false
 Deferred Deletion Enabled: false
 Deferred Deleted Device Count: 0
 Data loop file: /home/docker/devicemapper/devicemapper/data
 Metadata loop file: /home/docker/devicemapper/devicemapper/metadata
 Library Version: 1.02.77 (2012-10-15)
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.19.0-43-generic
Operating System: Ubuntu 14.04.3 LTS
CPUs: 8
Total Memory: 15.54 GiB
Name: ----
ID: 5NCJ:KUJT:U2QP:S6WW:4S63:3JUX:EE3N:JHSI:JIEW:GVLX:ZSWS:5GI7
WARNING: No swap limit support

$ docker version
Client:
 Version:      1.9.1
 API version:  1.21
 Go version:   go1.4.2
 Git commit:   a34a1d5
 Built:        Fri Nov 20 13:12:04 UTC 2015
 OS/Arch:      linux/amd64

Server:
 Version:      1.9.1
 API version:  1.21
 Go version:   go1.4.2
 Git commit:   a34a1d5
 Built:        Fri Nov 20 13:12:04 UTC 2015
 OS/Arch:      linux/amd64

closed time in a day

fruboes

issue commentmoby/moby

AppArmor prevents java process from starting

I assume this was resolved in recent releases

fruboes

comment created time in a day

issue closedmoby/moby

Docker run of debian:wheezy fails with Invalid configuration: lstat /mnt/sda1/var/lib/docker/aufs/diff/10…c3: no such file or directory

I'm using a Mac and just getting started. I tried Kitematic; it didn't seem to start right the first time (it would not install the commands nor any image), but I found it made a docker-machine named dev for me, and the second run managed to install the docker commands.

Docker version

$ docker version
Client version: 1.7.1
Client API version: 1.19
Go version (client): go1.4.2
Git commit (client): 786b29d
OS/Arch (client): darwin/amd64
Server version: 1.7.1
Server API version: 1.19
Go version (server): go1.4.2
Git commit (server): 786b29d
OS/Arch (server): linux/amd64

Uname -a

$ uname -a
Darwin Daniels-Mini.home 14.4.0 Darwin Kernel Version 14.4.0: Thu May 28 11:35:04 PDT 2015; root:xnu-2782.30.5~1/RELEASE_X86_64 x86_64

Docker info

Note that I actually did docker-machine rm dev and created a new one after getting the issues described below, so the info is about a newly created dev boot2docker machine... and on that new machine the docker run -i -t debian:wheezy bash command works as expected.

$ docker info
Containers: 0
Images: 0
Storage Driver: aufs
 Root Dir: /mnt/sda1/var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 0
 Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 4.0.7-boot2docker
Operating System: Boot2Docker 1.7.1 (TCL 6.3); master : c202798 - Wed Jul 15 00:16:02 UTC 2015
CPUs: 1
Total Memory: 996.2 MiB
Name: dev
ID: PAM6:MFVH:3VMO:6B77:XJFW:CDWP:W6WA:KDRO:3W4F:YEGQ:EYFA:5RJO
Debug mode (server): true
File Descriptors: 9
Goroutines: 15
System Time: 2015-07-19T09:47:23.851899931Z
EventsListeners: 0
Init SHA1:
Init Path: /usr/local/bin/docker
Docker Root Dir: /mnt/sda1/var/lib/docker
Username: dlamblin
Registry: https://index.docker.io/v1/
Labels:
 provider=virtualbox

In a command line I did:

eval "$(docker-machine env dev)"
docker run -i -t debian:wheezy /bin/bash
Unable to find image 'debian:wheezy' locally
wheezy: Pulling from debian
4c8cbfd2973e: Already exists
60c52dbe9d91: Already exists
debian:wheezy: The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security.
Digest: sha256:305366f2c21122c14d8c688b6d07449d7aeba95e230ce3c7b6752197c47767e2
Status: Downloaded newer image for debian:wheezy
Error response from daemon: open /mnt/sda1/var/lib/docker/aufs/layers/60c52dbe9d9121f0baf4d44fece2d447a0a48f4da84522f0eb7082a0ca6b465e: no such file or directory
Daniels-Mini:~ dlamblin$ docker ps -a
CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS                          PORTS               NAMES
4ea60ad81f75        debian:wheezy       "/bin/bash"            10 minutes ago                                                          adoring_colden

I tried this with just debian and it worked, but when I tried again with debian:wheezy and debian:wheezy-backports it did not work. I did this quite a few times, also removing containers and images.

So I got annoyed and thought I'd reboot the boot2docker host or something.

$ docker-machine active
dev
$ docker-machine stop dev
$ docker-machine upgrade dev
$ docker-machine start dev
Starting VM...
Started machines may have new IP addresses. You may need to re-run the `docker-machine env` command.
$ docker-machine upgrade dev
Stopping machine to do the upgrade... [Yous geniuses]
Upgrading machine dev...
Downloading https://github.com/boot2docker/boot2docker/releases/download/v1.7.1/boot2docker.iso to /Users/dlamblin/.docker/machine/cache/boot2docker.iso...
Starting machine back up...
Starting VM...
$ docker-machine env dev
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/Users/dlamblin/.docker/machine/machines/dev"
export DOCKER_MACHINE_NAME="dev"
# Run this command to configure your shell:
# eval "$(docker-machine env dev)"
$ eval "$(docker-machine env dev)"
$ docker ps -a
CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS                      PORTS               NAMES
4ea60ad81f75        debian:wheezy       "/bin/bash"            22 minutes ago                                                      adoring_colden
830c33df50a5        debian              "bash"                 About an hour ago   Exited (0) 47 minutes ago                       debian
f91f0f8328c9        registry:2          "registry cmd/regist   5 hours ago         Exited (2) 13 minutes ago                       stupefied_cori
$ docker-machine ssh dev
docker@dev:~$ ls /mnt/sda1/var/lib/docker/aufs/diff/4ea60ad81f75e68784f97455fb8e1635f559278351433a0d249263592815567e
ls: /mnt/sda1/var/lib/docker/aufs/diff/4ea60ad81f75e68784f97455fb8e1635f559278351433a0d249263592815567e: No such file or directory
docker@dev:~$ ls lstat /mnt/sda1/var/lib/docker/aufs/diff/
ls: lstat: No such file or directory
/mnt/sda1/var/lib/docker/aufs/diff/:
0c22bb9e0906a2c3feeeb922f4bf253d813d3e780a3f4f93138eee7b09fc47ad/      830c33df50a529ac6e225c5f85d07dfa23422125f71da186168cd646f2b9a9c8-init/
0f5121dd42a613481a72899ea1ac4620a909427cf81bc9edff59a2b85c8fc73f/      8d38711ccc0dfe0af69a22969209f6b4002d5bfc87961508411ea777a2c5a276/
124e2127157f398735e5888601e8b02cf832e037ef951317bc0a4f6256723d7b/      8ddc08289e1a9e4413633bcc969369954a4d4605f56801428986e3b0b358446f/
141b650c3281e7559e1bddacb7ae4db1af79c269cdc69545cf9f78bc7f322fae/      8fb45e60e014d834ddb7aac0adf9985a35359b47f84aedc48b272b8662bdd528/
1ff9f26f09fb1bc7b5955c269b1042429e86d7891c653f52f3e48f1e0365d7df/      902b87aaaec929e80541486828959f14fa061f529ad7f37ab300d4ef9f3a0dbf/
2e8a6169e70d5ccfa63d450b4253a9a1cab4082ec03501fc88107e910204c744/      9a61b6b1315e6b457c31a03346ab94486a2f5397f4a82219bee01eead1c34c2e/
4c8cbfd2973e1b92137486922e1af5677fced4f6902e6513b81413e22f33ceab/      aeb43bf230e40b8d4d23c96dadd9b8467bb2f7b6b884da7dafe060c776450131/
4ea60ad81f75e68784f97455fb8e1635f559278351433a0d249263592815567e-init/ b279b4aae82666b6c5bfc01e18efe5a8ba93627b20334b8c45d874d8dfe3a979/
607e965985c11e6a23270feec487908aeaa9af763d24a2986866a41537770c8c/      b4ad0b763f116fa6ab967020acceaddcece76a67866bfebf50d624f42f310160/
63e9d2557cd7fb600e8739cb930c6db63af6f54bc6b25778bb7848b1facb5d6b/      b4fcc08f08cb89d09ab4ff932588863e9459256918d5dc17cede16b39fff1a49/
66780839eff4ce83a47409f9e78053ec29934c6e47e3dfc56bfe6de9779c8674/      c5b806fe261f04add1deb65a519846876aa9b1c5dbbccbb68a823c275193ecce/
68aaeb079725c1fcce78413809d72f39639bd4c46ffe7b8b03dd9b048ef4e74a/      d86979befb725994b1c58c8340b3313f7b3118bf60c40972d520a43b2ea35f7f/
69c177f0c117c1ea8c4593b4fbfa7affb4096f7abc751c9d818721bfdea087bb/      f91f0f8328c9fc0c63daf703593ec361694217b9e345a632af8e09445a97ae50/
6a192b88c36f8646cc64829e509b7716292f950d9536e5b9cfd568a483c2fba4/      f91f0f8328c9fc0c63daf703593ec361694217b9e345a632af8e09445a97ae50-init/
830c33df50a529ac6e225c5f85d07dfa23422125f71da186168cd646f2b9a9c8/
docker@dev:~$

So basically what I'm seeing here is that there are a few directories that end in *-init, and the 4ea… directory that could not be found was missing because the one that exists actually ends in -init. But it's not clear to me why the tools didn't do whatever they expected themselves to do.

closed time in a day

dlamblin

issue commentmoby/moby

Docker run of debian:wheezy fails with Invalid configuration: lstat /mnt/sda1/var/lib/docker/aufs/diff/10…c3: no such file or directory

AUFS was deprecated in 19.03: https://github.com/docker/cli/pull/1484/files

dlamblin

comment created time in a day

issue closedmoby/moby

Error: driver aufs is returning inconsistent paths for container

Hello, guys! Yesterday my docker container refused to start with this error:

    "Error": "Error: driver aufs is returning inconsistent paths for container 5b176ee68c9a256eaf9fcfdc6e5b582ea2cb9df21546165e739f2229f7986d30 ('/var/lib/docker/aufs/diff/5b176ee68c9a256eaf9fcfdc6e5b582ea2cb9df21546165e739f2229f7986d30' then '/var/lib/docker/aufs/mnt/5b176ee68c9a256eaf9fcfdc6e5b582ea2cb9df21546165e739f2229f7986d30')",
    "ExitCode": -1

$ sudo ls -lha /var/lib/docker/aufs/diff/5b176ee68c9a256eaf9fcfdc6e5b582ea2cb9df21546165e739f2229f7986d30

total 3.3M
drwxr-xr-x    13 root root 4.0K Jul  7 19:11 .
drwxr-xr-x 24058 root root 3.3M Jul  8 09:44 ..
drwxr-xr-x     2 root root 4.0K Jul  7 19:11 dev
drwxr-xr-x     3 root root 4.0K Jul  7 19:11 etc
drwxr-xr-x     2 root root 4.0K Jul  7 19:11 hexlet-ide
drwxr-xr-x     3 root root 4.0K Jul  7 19:11 home
drwxr-xr-x     2 root root 4.0K Jul  7 19:11 proc
drwxr-xr-x     2 root root 4.0K Jul  7 19:11 sys
drwxr-xr-x     2 root root 4.0K Jul  7 19:11 tmp
drwxr-xr-x     4 root root 4.0K Jul  7 19:11 usr
drwxr-xr-x     3 root root 4.0K Jul  7 19:11 var
-r--r--r--     1 root root    0 Jul  7 19:11 .wh..wh.aufs
drwx------     2 root root 4.0K Jul  7 19:11 .wh..wh.orph
drwx------     2 root root 4.0K Jul  7 19:11 .wh..wh.plnk

sudo ls -lha /var/lib/docker/aufs/mnt/5b176ee68c9a256eaf9fcfdc6e5b582ea2cb9df21546165e739f2229f7986d30

total 3.3M
drwxr-xr-x     2 root root 4.0K Jul  7 19:11 .
drwxr-xr-x 24061 root root 3.3M Jul  8 09:47 ..

What happened?

UPD: $ docker version

 Client version: 1.5.0
 Client API version: 1.17
 Go version (client): go1.4.1
 Git commit (client): a8a31ef
 OS/Arch (client): linux/amd64
 Server version: 1.5.0
 Server API version: 1.17
 Go version (server): go1.4.1
 Git commit (server): a8a31ef

$ docker info

 Containers: 11023
 Images: 2021
 Storage Driver: aufs
  Root Dir: /var/lib/docker/aufs
  Backing Filesystem: extfs
  Dirs: 24067
 Execution Driver: native-0.2
 Kernel Version: 3.13.0-44-generic
 Operating System: Ubuntu 14.04.1 LTS
 CPUs: 4
 Total Memory: 7.305 GiB
 Name: ip-10-0-1-224
 ID: YN3T:56E2:VDEV:3W7E:M47C:LNM7:O6GX:G3NG:AALK:TD34:5E4F:IMLN
 WARNING: No swap limit support

uname -a Linux ip-10-0-1-224 3.13.0-44-generic #73-Ubuntu SMP Tue Dec 16 00:22:43 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

closed time in a day

PlugIN73

issue commentmoby/moby

Error: driver aufs is returning inconsistent paths for container

AUFS was deprecated in 19.03: https://github.com/docker/cli/pull/1484/files

PlugIN73

comment created time in a day

issue closedmoby/moby

ntpd from ubuntu container is blocked from the host (ubuntu) apparmor policy (surely an aufs+apparmor bug)

reopening of unfixed https://github.com/docker/docker/issues/2800

related to #2276

closed time in a day

kiorky

issue commentmoby/moby

ntpd from ubuntu container is blocked from the host (ubuntu) apparmor policy (surely an aufs+apparmor bug)

AUFS was deprecated in 19.03: https://github.com/docker/cli/pull/1484/files

kiorky

comment created time in a day

issue closedmoby/moby

/var/lib/docker/aufs shares partition with other directory

When I installed Debian on my computer, I gave a separate partition to the /var directory. After I installed docker, a "/var/lib/docker/aufs" mount appeared, but it shares the same device as /var:

mount | grep sdb7
/dev/sdb7 on /var type ext4 (rw,relatime,data=ordered)
/dev/sdb7 on /var/lib/docker/aufs type ext4 (rw,relatime,data=ordered)

I guess this is the reason why the /var directory does not appear in the list of available media in file managers and gnome-disk-utility, while /var/lib/docker/aufs does (i.e. it overlaps the former).

The directory itself is still available via the file system hierarchy.

Also, during shutdown I see a message something like "Failed to unmount /var".

If I uninstall docker-engine, /var appears as it should. If I then install docker-engine again, /var/lib/docker/aufs overlaps /var again.

closed time in a day

FreeSlave

issue commentmoby/moby

/var/lib/docker/aufs shares partition with other directory

AUFS was deprecated in 19.03: https://github.com/docker/cli/pull/1484/files

FreeSlave

comment created time in a day

issue closedmoby/moby

sha calculation of docker image layers

I used the following Dockerfile:

FROM ubuntuBaseImage
MAINTAINER Abhinav "abhinav.10888@gmail.com"
RUN mkdir /root/imageLayer

and issued the following commands in the directory of the Dockerfile:

docker build -t build1 .
docker build --no-cache -t build2 .

to create the build1 and build2 docker images. As expected, all the layers in build1 and build2 have different SHAs, so I want to understand how the SHA is calculated for a command in a Dockerfile. What criteria other than the command itself generate a different SHA for the same Dockerfile command?

Thanks Abhinav

closed time in a day

kuriousengineer