moby/moby (https://mobyproject.org/): An open framework to assemble specialized container systems without reinventing the wheel.

PR opened moby/buildkit

CONTRIBUTING.md: fix broken link

Signed-off-by: Akihiro Suda akihiro.suda.cz@hco.ntt.co.jp

+2 -2

0 comment

1 changed file

pr created time in a few seconds

issue opened moby/moby

How to restrict a specific host user's access to Docker containers on Linux

For example, when I use the test user to run a docker container, can docker ps show only the containers that this user has run?

created time in 15 minutes

issue comment moby/moby

Docker build should compute image digests

I've long since moved on to building images with bazel (and deploying them on kubernetes).

phs

comment created time in an hour

fork dydongyangli/vpnkit

A toolkit for embedding VPN capabilities in your application

fork in an hour

PR opened moby/mobywebsite

Fix the link to Moby project on Github

The Moby project on Github does not have a moby branch. Suggest using the master branch instead. See: https://github.com/moby/moby/blob/master/README.md

+1 -1

0 comment

1 changed file

pr created time in 2 hours

issue opened moby/mobywebsite

The link to README is dead

https://github.com/moby/mobywebsite/blob/3105de0b2cfb0bc8ba4011aea553823b78b72baa/docs/_includes/links.html#L3

Current value: https://github.com/moby/moby/blob/moby/README.md
Expected value: https://github.com/moby/moby/blob/master/README.md

Because there is no moby branch.

created time in 2 hours

fork dydongyangli/mobywebsite

website for the moby project

fork in 2 hours

pull request comment moby/moby

Fix misspellings of "successfully" in error msgs

Ok, thanks anyway.

Closing since we can't merge this as is.

dnnr

comment created time in 2 hours

PR closed moby/moby

Fix misspellings of "successfully" in error msgs [label: status/needs-vendoring]

<!-- Please make sure you've read and understood our contributing guidelines; https://github.com/moby/moby/blob/master/CONTRIBUTING.md

** Make sure all your commits include a signature generated with git commit -s **

For additional information on our contributing process, read our contributing guide https://docs.docker.com/opensource/code/

If this is a bug fix, make sure your description includes "fixes #xxxx", or "closes #xxxx"

Please provide the following information: -->

- What I did

- How I did it

- How to verify it

- Description for the changelog <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: -->

- A picture of a cute animal (not mandatory but encouraged)

+6 -6

3 comments

1 changed file

dnnr

pr closed time in 2 hours

pull request comment moby/hyperkit

Fix issue 260

<!-- AUTOMATED:POULE:DCO-EXPLANATION --> Please sign your commits following these rules: https://github.com/moby/moby/blob/master/CONTRIBUTING.md#sign-your-work The easiest way to do this is to amend the last commit:

$ git clone -b "fix-issue-260" git@github.com:amaumene/hyperkit.git somewhere
$ cd somewhere
$ git commit --amend -s --no-edit
$ git push -f

Amending updates the existing PR. You DO NOT need to open a new one.

amaumene

comment created time in 2 hours

PR opened moby/hyperkit

Fix issue 260

Fix the "implicit conversion changes signedness" and "higher order bits are zeroes after implicit conversion" issues, most probably introduced by the upgrade to Catalina.

+19 -19

0 comment

6 changed files

pr created time in 2 hours

issue closed moby/buildkit

Docker build fails with volume mount error on Windows host when buildKit is enabled.

Description

The docker build command fails with a volume mount error when BuildKit is enabled.

Steps to reproduce the issue:

  1. Make sure to have Docker version >= 19.03
  2. Enable BuildKit by setting environment variable - DOCKER_BUILDKIT=1
  3. Create an ASP.NET project with docker support through Visual Studio (or download the sample repro project)
  4. Run command docker build -f "D:\source\repos\WebApplication4\WebApplication4\Dockerfile" --force-rm -t webapplication4:dev --target base "D:\source\repos\WebApplication4".
  5. Check the output of the build command.

Describe the results you received:

PS C:\Users\prsangli\source\repos\WebApplication4> docker build -f "C:\Users\prsangli\source\repos\WebApplication4\WebApplication4\Dockerfile" --force-rm -t webapplication4:dev --target base  "C:\Users\prsangli\source\repos\WebApplication4"                                                                                                                                                                                                            [+] Building 0.0s (2/2) FINISHED
 => [internal] load build definition from Dockerfile                                                                                                                                                                     0.0s
 => => transferring dockerfile: 32B                                                                                                                                                                                      0.0s
 => [internal] load .dockerignore                                                                                                                                                                                        0.0s
 => => transferring context: 35B                                                                                                                                                                                         0.0s
failed to solve with frontend dockerfile.v0: failed to read dockerfile: failed to mount C:\ProgramData\Docker\tmp\buildkit-mount414087051: [{Type:bind Source:C:\ProgramData\Docker\windowsfilter\lsbbs5t6dnt8eqc4ehyv38igy Options:[rbind ro]}]: invalid windows mount type: 'bind'

Describe the results you expected: The project builds successfully.

Additional information you deem important (e.g. issue happens only occasionally):

Output of docker version:

Client: Docker Engine - Community
 Version:           19.03.2
 API version:       1.40
 Go version:        go1.12.8
 Git commit:        6a30dfc
 Built:             Thu Aug 29 05:26:49 2019
 OS/Arch:           windows/amd64
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          19.03.2
  API version:      1.40 (minimum version 1.24)
  Go version:       go1.12.8
  Git commit:       6a30dfc
  Built:            Thu Aug 29 05:39:49 2019
  OS/Arch:          windows/amd64
  Experimental:     true

Output of docker info:

Client:
 Debug Mode: false
 Plugins:
  buildx: Build with BuildKit (Docker Inc., v0.3.0-5-g5b97415-tp-docker)
  app: Docker Application (Docker Inc., v0.8.0)

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 116
 Server Version: 19.03.2
 Storage Driver: windowsfilter (windows) lcow (linux)
  Windows:
  LCOW:
 Logging Driver: json-file
 Plugins:
  Volume: local
  Network: ics l2bridge l2tunnel nat null overlay transparent
  Log: awslogs etwlogs fluentd gcplogs gelf json-file local logentries splunk syslog
 Swarm: inactive
 Default Isolation: hyperv
 Kernel Version: 10.0 18362 (18362.1.amd64fre.19h1_release.190318-1202)
 Operating System: Windows 10 Enterprise Version 1903 (OS Build 18362.418)
 OSType: windows
 Architecture: x86_64
 CPUs: 12
 Total Memory: 31.85GiB
 Name: PRSANGLI-D1
 ID: N5KU:KGSR:E2K5:YXJ5:PXT4:CJ4D:PGLN:UWEE:7EPU:ONVV:VXDY:QT4M
 Docker Root Dir: C:\ProgramData\Docker
 Debug Mode: true
  File Descriptors: -1
  Goroutines: 72
  System Time: 2019-11-12T16:21:17.2515085-08:00
  EventsListeners: 3
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: true
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
 Product License: Community Engine

Additional environment details (AWS, VirtualBox, physical, etc.): Physical machine. Running docker commands through Containers Tools for Visual Studio.

closed time in 3 hours

pratiksanglikar

issue comment moby/buildkit

Docker build fails with volume mount error on Windows host when buildKit is enabled.

Windows is not supported currently

pratiksanglikar

comment created time in 3 hours

pull request comment moby/buildkit

cache: fix possible concurrent maps write on parent release

LGTM

tonistiigi

comment created time in 3 hours

fork amaumene/hyperkit

A toolkit for embedding hypervisor capabilities in your application

fork in 3 hours

issue closed moby/buildkit

how to share host system's composer cache docker build through dockerfile?

I've been searching and trying for over an hour and I'm about to give up, so I thought I would ask here.

I'm willing to use the latest experimental build and to enable BuildKit via DOCKER_BUILDKIT=1.

I want to do something that should be simple yet seems impossible.

This is my Dockerfile:

FROM composer:1.8 as vendor

COPY database/ database/

COPY composer.json composer.json
COPY composer.lock composer.lock

RUN composer install \
    --ignore-platform-reqs \
    --no-interaction \
    --no-plugins \
    --no-scripts \
    --prefer-dist

This is a rather large composer.json, so every time I build, it re-downloads every package and takes close to a minute just for this part. When I run the same command on my host, everything is already cached, so it takes around 5 seconds.

I just want to share my host's ~/.composer/cache folder with the /tmp/cache folder in the image so that my builds run a lot faster and Composer can use a cache.

I've tried using VOLUME but then found out this is not the intended use of VOLUME. I tried using ADD/COPY but found out that you can't ADD/COPY files that are outside the relative path of the folder housing the Dockerfile.

Finally I found a thread on SO that claims that BuildKit is the solution, using the --mount switch with the RUN command. But I still can't figure out how to do it; it feels like this only allows sharing a cache between build stages but doesn't help with sharing a cache from the host system.

If I can't share the cache from the host system, I'd be happy if at least the cache persisted between builds (I don't mean between build stages inside the Dockerfile; I mean that if I run docker build multiple times, it won't re-download half the internet every single time).

Hoping someone can clear this up for me... many thanks.
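A minimal sketch of the RUN --mount cache approach mentioned above, assuming the image keeps Composer's cache in /tmp/cache as described; the cache then persists across docker build runs on the same daemon, although it remains separate from the host's ~/.composer/cache:

# syntax = docker/dockerfile:experimental
FROM composer:1.8 as vendor

COPY database/ database/
COPY composer.json composer.json
COPY composer.lock composer.lock

# BuildKit keeps the contents of /tmp/cache between builds,
# so Composer does not re-download every package each time.
RUN --mount=type=cache,target=/tmp/cache \
    composer install \
    --ignore-platform-reqs \
    --no-interaction \
    --no-plugins \
    --no-scripts \
    --prefer-dist

Build with BuildKit enabled, e.g. DOCKER_BUILDKIT=1 docker build .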

closed time in 4 hours

vesper8

issue opened moby/moby

docker login to an HTTPS registry: should it use the root certificate or the user certificate?

1. Generate the CA root certificate: openssl req -newkey rsa:4096 -nodes -sha256 -keyout ca.key -x509 -days 365 -out ca.crt

2. Generate the user certificate request: openssl req -newkey rsa:4096 -nodes -sha256 -keyout reg.mydomain.com.key -out reg.mydomain.com.csr

3. Generate the user certificate: openssl x509 -req -days 365 -in reg.mydomain.com.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out reg.mydomain.com.crt

Test

Example one:

curl -iL -X GET https://reg.mydomain.com:8443/v2 --cacert ca.crt  
HTTP/1.1 301 Moved Permanently
Server: nginx
Date: Wed, 13 Nov 2019 02:09:11 GMT
Content-Type: text/html
Content-Length: 162
Location: https://reg.mydomain.com:8443/v2/
Connection: keep-alive
Strict-Transport-Security: max-age=31536000; includeSubdomains; preload
X-Frame-Options: DENY
Content-Security-Policy: frame-ancestors 'none'

HTTP/1.1 401 Unauthorized
Server: nginx
Date: Wed, 13 Nov 2019 02:09:11 GMT
Content-Type: application/json; charset=utf-8
Content-Length: 87
Connection: keep-alive
Docker-Distribution-Api-Version: registry/2.0
Set-Cookie: sid=f0b0bd69d0f6118c671c9edcd944db20; Path=/; HttpOnly
Set-Cookie: _xsrf=TTFqQ25FcE83a3pyMllvaFE3Qmp6UEN4SVI3cTZFeFA=|1573610951261431894|2df0572025f003813683486524ab3757acc2c24c; Expires=Wed, 13 Nov 2019 03:09:11 UTC; Max-Age=3600; Path=/
Www-Authenticate: Bearer realm="https://reg.mydomain.com:8443/service/token",service="harbor-registry"

{"errors":[{"code":"UNAUTHORIZED","message":"authentication required","detail":null}]}

The result is a success because curl --cacert was given the CA root certificate.

curl -iL -X GET https://reg.mydomain.com:8443/v2 --cacert reg.mydomain.com.crt 
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html

curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.

The result is an error because curl --cacert was given the user certificate, not the CA root certificate.

Example two: copy ca.crt to /etc/docker/certs.d/reg.mydomain.com:8443

root@dmzy-node-01:/etc/docker/certs.d/reg.mydomain.com:8443# ls
ca.crt
docker login reg.mydomain.com:8443
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

Example three:

sudo cp reg.mydomain.com.crt /etc/docker/certs.d/reg.mydomain.com:8443/
root@dmzy-node-01:/etc/docker/certs.d/reg.mydomain.com:8443# ls
reg.mydomain.com.crt
docker login reg.mydomain.com:8443
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

Output of docker version:

Client: Docker Engine - Community
 Version:           19.03.4
 API version:       1.40
 Go version:        go1.12.10
 Git commit:        9013bf583a
 Built:             Fri Oct 18 15:54:09 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.4
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.10
  Git commit:       9013bf583a
  Built:            Fri Oct 18 15:52:40 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.10
  GitCommit:        b34a5c8af56e510852c35414db4c1f4fa6172339
 runc:
  Version:          1.0.0-rc8+dev
  GitCommit:        3e425f80a8c931f88e6d94a8c831b9d5aa481657
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

Output of docker info:

Client:
 Debug Mode: false

Server:
 Containers: 29
  Running: 9
  Paused: 0
  Stopped: 20
 Images: 77
 Server Version: 19.03.4
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: b34a5c8af56e510852c35414db4c1f4fa6172339
 runc version: 3e425f80a8c931f88e6d94a8c831b9d5aa481657
 init version: fec3683
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 4.15.0-66-generic
 Operating System: Ubuntu 18.04.3 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 8.148GiB
 Name: dmzy-node-01
 ID: AHDO:JSCB:SS6I:K6MH:CRNJ:3AN2:2JBN:Q2DP:IXVH:3QIX:IXP4:5UYR
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No swap limit support

Additional environment details (AWS, VirtualBox, physical, etc.):

created time in 4 hours

pull request comment moby/buildkit

cache: fix possible concurrent maps write on parent release

PTAL @AkihiroSuda. We needed this for the patch release of docker 19.03, but would love it if you could double-check that there's nothing weird with this fix. Thanks!

tonistiigi

comment created time in 4 hours

push event moby/buildkit

Tonis Tiigi

commit sha a393a767f8d114e0044c7efb3a9168ca28950286

cache: fix possible concurrent maps write on parent release Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

view details

Tibor Vass

commit sha 928f3b480d7460aacb401f68610058ffdb549aca

Merge pull request #1257 from tonistiigi/1903-fix-parent-release [19.03] cache: fix possible concurrent maps write on parent release

view details

push time in 4 hours

PR merged moby/buildkit

[19.03] cache: fix possible concurrent maps write on parent release

19.03 version of https://github.com/moby/buildkit/pull/1256

@tiborvass @andrewhsu

Signed-off-by: Tonis Tiigi tonistiigi@gmail.com

+4 -6

0 comment

1 changed file

tonistiigi

pr closed time in 4 hours

push event moby/buildkit

Tonis Tiigi

commit sha 19558904457962c46fc9ae0fc066f903222e112d

cache: fix possible concurrent maps write on parent release Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

view details

Tibor Vass

commit sha c4f5086b6a73b305c42fd814be0948114909fea6

Merge pull request #1256 from tonistiigi/fix-parent-release cache: fix possible concurrent maps write on parent release

view details

push time in 4 hours

PR merged moby/buildkit

cache: fix possible concurrent maps write on parent release

fixes https://github.com/moby/buildkit/issues/1250

Signed-off-by: Tonis Tiigi tonistiigi@gmail.com

@bpaquet @tiborvass

+4 -1

0 comment

1 changed file

tonistiigi

pr closed time in 4 hours

issue closed moby/buildkit

fatal error: concurrent map writes

Hello,

I have a lot of crashes with BuildKit. This is not systematic, but very common. I get it with a raw 0.6.2 install, or through buildx.

The log is below.

How can I help to fix that?

Thx

fatal error: concurrent map writes

goroutine 20822 [running]: runtime.throw(0x114b515, 0x15) /usr/local/go/src/runtime/panic.go:617 +0x72 fp=0xc000275cf8 sp=0xc000275cc8 pc=0x42ead2 runtime.mapassign(0xfbf840, 0xc002192ab0, 0xc000275db8, 0x1b22ec0) /usr/local/go/src/runtime/map.go:590 +0x5e3 fp=0xc000275d80 sp=0xc000275cf8 pc=0x40f7a3 github.com/moby/buildkit/cache.(*cacheRecord).ref(...) /src/cache/refs.go:73 github.com/moby/buildkit/cache.(*cacheRecord).parentRef(0xc006cd7800, 0x42d801, 0x0) /src/cache/refs.go:143 +0xf7 fp=0xc000275dd8 sp=0xc000275d80 pc=0xb470f7 github.com/moby/buildkit/cache.(*cacheRecord).Parent(0xc006cd7800, 0x0, 0x0) /src/cache/refs.go:130 +0x30 fp=0xc000275e00 sp=0xc000275dd8 pc=0xb46fc0 github.com/moby/buildkit/cache/blobs.isTypeWindows(0x12dd6e0, 0xc008633700, 0xc006ee8900) /src/cache/blobs/blobs.go:151 +0x9d fp=0xc000275e58 sp=0xc000275e00 pc=0xd6580d github.com/moby/buildkit/cache/blobs.GetDiffPairs(0x12cb6c0, 0xc006ee8980, 0x12dd800, 0xc00010a7d0, 0x12e5040, 0xc00012a340, 0x12a9ce0, 0xc0001286a0, 0x12dd6e0, 0xc008633700, ...) /src/cache/blobs/blobs.go:40 +0xa2 fp=0xc000275ee8 sp=0xc000275e58 pc=0xd64ef2 github.com/moby/buildkit/exporter/containerimage.(*ImageWriter).exportLayers.func1.1(0x8, 0x1187000) /src/exporter/containerimage/writer.go:160 +0xac fp=0xc000275f88 sp=0xc000275ee8 pc=0xd71b2c golang.org/x/sync/errgroup.(*Group).Go.func1(0xc0017e6510, 0xc0094441e0) /src/vendor/golang.org/x/sync/errgroup/errgroup.go:58 +0x57 fp=0xc000275fd0 sp=0xc000275f88 pc=0x8b7117 runtime.goexit() /usr/local/go/src/runtime/asm_amd64.s:1337 +0x1 fp=0xc000275fd8 sp=0xc000275fd0 pc=0x45dcc1 created by golang.org/x/sync/errgroup.(*Group).Go /src/vendor/golang.org/x/sync/errgroup/errgroup.go:55 +0x66

goroutine 1 [select, 38 minutes]: main.main.func3(0xc0001a14a0, 0x0, 0x0) /src/cmd/buildkitd/main.go:263 +0x97c github.com/urfave/cli.HandleAction(0xf9d720, 0x1186628, 0xc0001a14a0, 0xc0001a14a0, 0xc00027d788) /src/vendor/github.com/urfave/cli/app.go:502 +0xc8 github.com/urfave/cli.(*App).Run(0xc0001d8540, 0xc0000321e0, 0x1, 0x1, 0x0, 0x0) /src/vendor/github.com/urfave/cli/app.go:268 +0x5aa main.main() /src/cmd/buildkitd/main.go:290 +0xd5e

goroutine 34 [chan receive, 38 minutes]: github.com/moby/buildkit/util/appcontext.Context.func1.1(0xc0004489c0, 0xc000116020, 0xc00013e008) /src/util/appcontext/appcontext.go:30 +0x38 created by github.com/moby/buildkit/util/appcontext.Context.func1 /src/util/appcontext/appcontext.go:28 +0xff

goroutine 18 [syscall, 38 minutes]: os/signal.signal_recv(0x0) /usr/local/go/src/runtime/sigqueue.go:139 +0x9c os/signal.loop() /usr/local/go/src/os/signal/signal_unix.go:23 +0x22 created by os/signal.init.0 /usr/local/go/src/os/signal/signal_unix.go:29 +0x41

goroutine 20 [chan receive]: github.com/moby/buildkit/util/pull.newResolverCache.func1(0xc0003be6c0, 0xc000326a50) /src/util/pull/resolver.go:203 +0x49 created by github.com/moby/buildkit/util/pull.newResolverCache /src/util/pull/resolver.go:201 +0x95

goroutine 23 [sync.Cond.Wait]: runtime.goparkunlock(...) /usr/local/go/src/runtime/proc.go:307 sync.runtime_notifyListWait(0xc00012a890, 0x85) /usr/local/go/src/runtime/sema.go:510 +0xf9 sync.(*Cond).Wait(0xc00012a880) /usr/local/go/src/sync/cond.go:56 +0x9e github.com/moby/buildkit/util/cond.(*StatefulCond).Wait(0xc0001e93e0) /src/util/cond/cond.go:28 +0x98 github.com/moby/buildkit/solver.(*scheduler).loop(0xc0001522a0) /src/solver/scheduler.go:101 +0x168 created by github.com/moby/buildkit/solver.newScheduler /src/solver/scheduler.go:35 +0x1ad

goroutine 24 [IO wait, 6 minutes]: internal/poll.runtime_pollWait(0x7f1d46785f90, 0x72, 0x0) /usr/local/go/src/runtime/netpoll.go:182 +0x56 internal/poll.(*pollDesc).wait(0xc000110418, 0x72, 0x0, 0x0, 0x113d221) /usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0x9b internal/poll.(*pollDesc).waitRead(...) /usr/local/go/src/internal/poll/fd_poll_runtime.go:92 internal/poll.(*FD).Accept(0xc000110400, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0) /usr/local/go/src/internal/poll/fd_unix.go:384 +0x1ba net.(*netFD).accept(0xc000110400, 0xc0014ca000, 0x0, 0x0) /usr/local/go/src/net/fd_unix.go:238 +0x42 net.(*UnixListener).accept(0xc0001e95c0, 0xc0003b5e20, 0xc0003b5e28, 0x18) /usr/local/go/src/net/unixsock_posix.go:162 +0x32 net.(*UnixListener).Accept(0xc0001e95c0, 0x1186158, 0xc000084a80, 0x12db220, 0xc0014ca000) /usr/local/go/src/net/unixsock.go:260 +0x48 google.golang.org/grpc.(*Server).Serve(0xc000084a80, 0x12c5b40, 0xc0001e95c0, 0x0, 0x0) /src/vendor/google.golang.org/grpc/server.go:561 +0x1e9 main.serveGRPC.func1.1(0x0, 0x0) /src/cmd/buildkitd/main.go:323 +0x10e golang.org/x/sync/errgroup.(*Group).Go.func1(0xc0001e9530, 0xc000128f00) /src/vendor/golang.org/x/sync/errgroup/errgroup.go:58 +0x57 created by golang.org/x/sync/errgroup.(*Group).Go /src/vendor/golang.org/x/sync/errgroup/errgroup.go:55 +0x66

goroutine 25 [semacquire, 38 minutes]: sync.runtime_Semacquire(0xc0001e9540) /usr/local/go/src/runtime/sema.go:56 +0x39 sync.(*WaitGroup).Wait(0xc0001e9538) /usr/local/go/src/sync/waitgroup.go:130 +0x65 golang.org/x/sync/errgroup.(*Group).Wait(0xc0001e9530, 0x0, 0x0) /src/vendor/golang.org/x/sync/errgroup/errgroup.go:41 +0x31 main.serveGRPC.func2(0xc000105140, 0xc0001e9530) /src/cmd/buildkitd/main.go:328 +0x2b created by main.serveGRPC /src/cmd/buildkitd/main.go:327 +0x2ba

goroutine 26 [chan receive, 38 minutes]: github.com/moby/buildkit/solver.(*scheduler).loop.func2(0xc0001522a0) /src/solver/scheduler.go:76 +0x38 created by github.com/moby/buildkit/solver.(*scheduler).loop /src/solver/scheduler.go:75 +0x6d

goroutine 14780 [select]: google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000df87c0, 0x1, 0x0, 0x0, 0x0, 0x0) /src/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:317 +0x104 google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc003b10540, 0x0, 0x0) /src/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:435 +0x1b6 google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc001a82000) /src/vendor/google.golang.org/grpc/internal/transport/http2_client.go:330 +0x7b created by google.golang.org/grpc/internal/transport.newHTTP2Client /src/vendor/google.golang.org/grpc/internal/transport/http2_client.go:328 +0xeb2

goroutine 14721 [select]: google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000df8040, 0x1, 0x0, 0x0, 0x0, 0x0) /src/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:317 +0x104 google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000078300, 0x0, 0x0) /src/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:435 +0x1b6 google.golang.org/grpc/internal/transport.newHTTP2Server.func2(0xc002bde000) /src/vendor/google.golang.org/grpc/internal/transport/http2_server.go:276 +0xcb created by google.golang.org/grpc/internal/transport.newHTTP2Server /src/vendor/google.golang.org/grpc/internal/transport/http2_server.go:273 +0xfba

goroutine 14755 [IO wait]: internal/poll.runtime_pollWait(0x7f1d46785df0, 0x72, 0xffffffffffffffff) /usr/local/go/src/runtime/netpoll.go:182 +0x56 internal/poll.(*pollDesc).wait(0xc002902218, 0x72, 0x8000, 0x8000, 0xffffffffffffffff) /usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0x9b internal/poll.(*pollDesc).waitRead(...) /usr/local/go/src/internal/poll/fd_poll_runtime.go:92 internal/poll.(*FD).Read(0xc002902200, 0xc000012000, 0x8000, 0x8000, 0x0, 0x0, 0x0) /usr/local/go/src/internal/poll/fd_unix.go:169 +0x19b net.(*netFD).Read(0xc002902200, 0xc000012000, 0x8000, 0x8000, 0x0, 0x8, 0x0) /usr/local/go/src/net/fd_unix.go:202 +0x4f net.(*conn).Read(0xc0014ca000, 0xc000012000, 0x8000, 0x8000, 0x0, 0x0, 0x0) /usr/local/go/src/net/net.go:177 +0x69 bufio.(*Reader).Read(0xc003b10000, 0xc002f66038, 0x9, 0x9, 0xc0007ead88, 0x3, 0x0) /usr/local/go/src/bufio/bufio.go:223 +0x23e io.ReadAtLeast(0x12a8d80, 0xc003b10000, 0xc002f66038, 0x9, 0x9, 0x9, 0xc0007eadff, 0x1040e00, 0x44d948) /usr/local/go/src/io/io.go:310 +0x88 io.ReadFull(...) /usr/local/go/src/io/io.go:329 golang.org/x/net/http2.readFrameHeader(0xc002f66038, 0x9, 0x9, 0x12a8d80, 0xc003b10000, 0x0, 0x0, 0xbf6a242e431cc7fb, 0x2140411b951) /src/vendor/golang.org/x/net/http2/frame.go:237 +0x88 golang.org/x/net/http2.(*Framer).ReadFrame(0xc002f66000, 0xc0053097c0, 0xc0053097c0, 0x0, 0x0) /src/vendor/golang.org/x/net/http2/frame.go:492 +0xa1 google.golang.org/grpc/internal/transport.(*http2Server).HandleStreams(0xc002bde000, 0xc000cb0270, 0x11861b0) /src/vendor/google.golang.org/grpc/internal/transport/http2_server.go:431 +0x7c google.golang.org/grpc.(*Server).serveStreams(0xc000084a80, 0x12dd8c0, 0xc002bde000) /src/vendor/google.golang.org/grpc/server.go:687 +0xdd google.golang.org/grpc.(*Server).handleRawConn.func1(0xc000084a80, 0x12dd8c0, 0xc002bde000) /src/vendor/google.golang.org/grpc/server.go:649 +0x43 created by google.golang.org/grpc.(*Server).handleRawConn /src/vendor/google.golang.org/grpc/server.go:648 +0x580

goroutine 14770 [semacquire, 6 minutes]: sync.runtime_Semacquire(0xc001c06bb0) /usr/local/go/src/runtime/sema.go:56 +0x39 sync.(*WaitGroup).Wait(0xc001c06ba8) /usr/local/go/src/sync/waitgroup.go:130 +0x65 golang.org/x/sync/errgroup.(*Group).Wait(0xc001c06ba0, 0xc003004520, 0xc001c06ba0) /src/vendor/golang.org/x/sync/errgroup/errgroup.go:41 +0x31 github.com/moby/buildkit/control.(*Controller).Status(0xc000172bd0, 0xc001c06b10, 0x12d82e0, 0xc002f36490, 0xc000172bd0, 0x20) /src/control/control.go:346 +0x174 github.com/moby/buildkit/api/services/control._Control_Status_Handler(0x109aa40, 0xc000172bd0, 0x12d3780, 0xc003004500, 0x10ecae0, 0x1b21718) /src/api/services/control/control.pb.go:1374 +0x109 github.com/grpc-ecosystem/grpc-opentracing/go/otgrpc.OpenTracingStreamServerInterceptor.func1(0x109aa40, 0xc000172bd0, 0x12d3c00, 0xc001232240, 0xc0030044a0, 0x11858c8, 0x0, 0x0) /src/vendor/github.com/grpc-ecosystem/grpc-opentracing/go/otgrpc/server.go:114 +0x365 google.golang.org/grpc.(*Server).processStreamingRPC(0xc000084a80, 0x12dd8c0, 0xc002bde000, 0xc001020000, 0xc0001e9470, 0x1ab0de0, 0x0, 0x0, 0x0) /src/vendor/google.golang.org/grpc/server.go:1183 +0x462 google.golang.org/grpc.(*Server).handleStream(0xc000084a80, 0x12dd8c0, 0xc002bde000, 0xc001020000, 0x0) /src/vendor/google.golang.org/grpc/server.go:1256 +0xd3f google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc0038f0000, 0xc000084a80, 0x12dd8c0, 0xc002bde000, 0xc001020000) /src/vendor/google.golang.org/grpc/server.go:691 +0x9f created by google.golang.org/grpc.(*Server).serveStreams.func1 /src/vendor/google.golang.org/grpc/server.go:689 +0xa1

goroutine 14771 [semacquire]: sync.runtime_Semacquire(0xc0017e6520) /usr/local/go/src/runtime/sema.go:56 +0x39 sync.(*WaitGroup).Wait(0xc0017e6518) /usr/local/go/src/sync/waitgroup.go:130 +0x65 golang.org/x/sync/errgroup.(*Group).Wait(0xc0017e6510, 0xc0086338e0, 0xc00012a480) /src/vendor/golang.org/x/sync/errgroup/errgroup.go:41 +0x31 github.com/moby/buildkit/exporter/containerimage.(*ImageWriter).exportLayers(0xc00012a480, 0x12cb780, 0xc0017e6390, 0xc0004c6928, 0x1, 0x1, 0xc0004c6750, 0x43783f, 0xc000040a00, 0xc0004c6760, ...) /src/exporter/containerimage/writer.go:170 +0x1af github.com/moby/buildkit/exporter/containerimage.(*ImageWriter).Commit(0xc00012a480, 0x12cb780, 0xc0017e6390, 0x12dd6e0, 0xc008633700, 0x0, 0xc00168a210, 0x12cb700, 0xc0017e6390, 0xc009444190, ...) /src/exporter/containerimage/writer.go:56 +0x12c github.com/moby/buildkit/exporter/containerimage.(*imageExporterInstance).Export(0xc002be4a80, 0x12cb780, 0xc0017e6390, 0x12dd6e0, 0xc008633700, 0x0, 0xc00168a210, 0x0, 0x0, 0x0) /src/exporter/containerimage/export.go:153 +0x2a1 github.com/moby/buildkit/solver/llbsolver.(*Solver).Solve.func2(0x12cb780, 0xc00168a780, 0xc0086dcea0, 0x0) /src/solver/llbsolver/solver.go:197 +0x7d github.com/moby/buildkit/solver/llbsolver.inVertexContext(0x12cb780, 0xc00168a780, 0x1148414, 0x12, 0x0, 0x0, 0xc0004c7400, 0x0, 0x0) /src/solver/llbsolver/solver.go:334 +0x233 github.com/moby/buildkit/solver/llbsolver.(*Solver).Solve(0xc000152230, 0x12cb780, 0xc002be4990, 0xc0005323c0, 0x19, 0x0, 0xc00372c020, 0xd, 0xc002be42d0, 0xc002be53b0, ...) /src/solver/llbsolver/solver.go:196 +0xa77 github.com/moby/buildkit/control.(*Controller).Solve(0xc000172bd0, 0x12cb780, 0xc002be4660, 0xc00019e7e0, 0x0, 0x0, 0x0) /src/control/control.go:276 +0x4a7 github.com/moby/buildkit/api/services/control._Control_Solve_Handler.func1(0x12cb780, 0xc002be4660, 0x11082a0, 0xc00019e7e0, 0x10ecae0, 0x1b21718, 0x12cb780, 0xc002be4660) /src/api/services/control/control.pb.go:1364 +0x86 github.com/grpc-ecosystem/grpc-opentracing/go/otgrpc.OpenTracingServerInterceptor.func1(0x12cb6c0, 0xc000f6c000, 0x11082a0, 0xc00019e7e0, 0xc00015e060, 0xc00015e080, 0x0, 0x0, 0x12a9040, 0xc00010a3e0) /src/vendor/github.com/grpc-ecosystem/grpc-opentracing/go/otgrpc/server.go:57 +0x2eb main.unaryInterceptor.func1(0x12cb6c0, 0xc000f6c000, 0x11082a0, 0xc00019e7e0, 0xc00015e060, 0xc00015e080, 0x0, 0x0, 0x0, 0x0) /src/cmd/buildkitd/main.go:526 +0x15f github.com/moby/buildkit/api/services/control._Control_Solve_Handler(0x109aa40, 0xc000172bd0, 0x12cb780, 0xc002be4030, 0xc0006b0000, 0xc0004e4600, 0x12cb780, 0xc002be4030, 0xc000508000, 0x482) /src/api/services/control/control.pb.go:1366 +0x158 google.golang.org/grpc.(*Server).processUnaryRPC(0xc000084a80, 0x12dd8c0, 0xc002bde000, 0xc001020100, 0xc0001e9470, 0x1aae5b8, 0x0, 0x0, 0x0) /src/vendor/google.golang.org/grpc/server.go:972 +0x470 google.golang.org/grpc.(*Server).handleStream(0xc000084a80, 0x12dd8c0, 0xc002bde000, 0xc001020100, 0x0) /src/vendor/google.golang.org/grpc/server.go:1252 +0xda6 google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc0038f0000, 0xc000084a80, 0x12dd8c0, 0xc002bde000, 0xc001020100) /src/vendor/google.golang.org/grpc/server.go:691 +0x9f created by google.golang.org/grpc.(*Server).serveStreams.func1 /src/vendor/google.golang.org/grpc/server.go:689 +0xa1

goroutine 14779 [select]: google.golang.org/grpc/internal/transport.(*recvBufferReader).read(0xc003908280, 0xc003004290, 0x5, 0x5, 0x10000c0007e6868, 0x0, 0x20) /src/vendor/google.golang.org/grpc/internal/transport/transport.go:146 +0xe5 google.golang.org/grpc/internal/transport.(*recvBufferReader).Read(0xc003908280, 0xc003004290, 0x5, 0x5, 0x857340, 0xc0018aa220, 0xc0007e6900) /src/vendor/google.golang.org/grpc/internal/transport/transport.go:140 +0x1a6 google.golang.org/grpc/internal/transport.(*transportReader).Read(0xc001c06750, 0xc003004290, 0x5, 0x5, 0x13, 0xc0007e6928, 0x876ccc) /src/vendor/google.golang.org/grpc/internal/transport/transport.go:435 +0x55 io.ReadAtLeast(0x12aa3a0, 0xc001c06750, 0xc003004290, 0x5, 0x5, 0x5, 0x13, 0x0, 0x0) /usr/local/go/src/io/io.go:310 +0x88 io.ReadFull(...) /usr/local/go/src/io/io.go:329 google.golang.org/grpc/internal/transport.(*Stream).Read(0xc001020200, 0xc003004290, 0x5, 0x5, 0x10de260, 0x7f1d46fc4aa0, 0x0) /src/vendor/google.golang.org/grpc/internal/transport/transport.go:419 +0xc8 google.golang.org/grpc.(*parser).recvMsg(0xc003004280, 0x400000, 0x13, 0x13, 0x0, 0x0, 0x7f1d46fc4a70, 0x0) /src/vendor/google.golang.org/grpc/rpc_util.go:508 +0x63 google.golang.org/grpc.recvAndDecompress(0xc003004280, 0xc001020200, 0x0, 0x0, 0x400000, 0x0, 0x0, 0x0, 0xc008de69e0, 0xc0007e6b68, ...) /src/vendor/google.golang.org/grpc/rpc_util.go:639 +0x4d google.golang.org/grpc.recv(0xc003004280, 0x7f1d46fc4880, 0x1b21718, 0xc001020200, 0x0, 0x0, 0x10de260, 0xc004ae72c0, 0x400000, 0x0, ...) /src/vendor/google.golang.org/grpc/rpc_util.go:684 +0x9b google.golang.org/grpc.(*serverStream).RecvMsg(0xc0012320c0, 0x10de260, 0xc004ae72c0, 0x0, 0x0) /src/vendor/google.golang.org/grpc/stream.go:1464 +0x14e github.com/moby/buildkit/session/grpchijack.(*conn).Read(0xc002902400, 0xc0002ca000, 0x8000, 0x8000, 0x0, 0x0, 0x0) /src/session/grpchijack/dial.go:69 +0x1d8 bufio.(*Reader).Read(0xc003b10480, 0xc002f663b8, 0x9, 0x9, 0xc00007ae00, 0x7f1d4fa69008, 0x0) /usr/local/go/src/bufio/bufio.go:223 +0x23e io.ReadAtLeast(0x12a8d80, 0xc003b10480, 0xc002f663b8, 0x9, 0x9, 0x9, 0x832c15, 0xc008de6a2c, 0xc0007e6e38) /usr/local/go/src/io/io.go:310 +0x88 io.ReadFull(...) /usr/local/go/src/io/io.go:329 golang.org/x/net/http2.readFrameHeader(0xc002f663b8, 0x9, 0x9, 0x12a8d80, 0xc003b10480, 0x0, 0xc000000000, 0x213d6e2ca1a, 0x1afe740) /src/vendor/golang.org/x/net/http2/frame.go:237 +0x88 golang.org/x/net/http2.(*Framer).ReadFrame(0xc002f66380, 0xc008de6a20, 0xc008de6a20, 0x0, 0x0) /src/vendor/golang.org/x/net/http2/frame.go:492 +0xa1 google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc001a82000) /src/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1236 +0x168 created by google.golang.org/grpc/internal/transport.newHTTP2Client /src/vendor/google.golang.org/grpc/internal/transport/http2_client.go:286 +0xd15

goroutine 14778 [chan receive, 6 minutes]: google.golang.org/grpc.(*addrConn).resetTransport(0xc00015b180) /src/vendor/google.golang.org/grpc/clientconn.go:1040 +0x5a1 created by google.golang.org/grpc.(*addrConn).connect /src/vendor/google.golang.org/grpc/clientconn.go:700 +0xb6

goroutine 14773 [chan receive, 6 minutes]: github.com/moby/buildkit/control.(*Controller).Session.func1(0xc001ad4660, 0xc002f362a0) /src/control/control.go:356 +0x34 created by github.com/moby/buildkit/control.(*Controller).Session /src/control/control.go:355 +0x153

goroutine 14754 [select, 6 minutes]: google.golang.org/grpc/internal/transport.(*http2Server).keepalive(0xc002bde000) /src/vendor/google.golang.org/grpc/internal/transport/http2_server.go:935 +0x1ed created by google.golang.org/grpc/internal/transport.newHTTP2Server /src/vendor/google.golang.org/grpc/internal/transport/http2_server.go:282 +0xfdf

goroutine 20836 [select]: github.com/moby/buildkit/util/progress.(*progressReader).Read.func1(0xc00571fe60, 0x12cb6c0, 0xc000df86c0, 0xc001c06f60) /src/util/progress/progress.go:101 +0xb7 created by github.com/moby/buildkit/util/progress.(*progressReader).Read /src/util/progress/progress.go:100 +0xdd

goroutine 14783 [select, 6 minutes]: github.com/moby/buildkit/util/progress.(*MultiReader).Reader.func1(0x12cb780, 0xc001c06ff0, 0xc002be5bc0, 0xc003004840) /src/util/progress/multireader.go:37 +0xbb created by github.com/moby/buildkit/util/progress.(*MultiReader).Reader /src/util/progress/multireader.go:36 +0x195

goroutine 20823 [runnable]: sort.Sort(0x12c6680, 0xc008ae90c0) /usr/local/go/src/sort/sort.go:216 +0x88 go.etcd.io/bbolt.(*freelist).arrayMergeSpans(0xc000110100, 0xc0021e51a0, 0x6, 0x6) /src/vendor/go.etcd.io/bbolt/freelist.go:368 +0x5d go.etcd.io/bbolt.(*freelist).release(0xc000110100, 0xfffffffffffffffe) /src/vendor/go.etcd.io/bbolt/freelist.go:195 +0x20d go.etcd.io/bbolt.(*DB).freePages(0xc000156000) /src/vendor/go.etcd.io/bbolt/db.go:617 +0x15a go.etcd.io/bbolt.(*DB).beginRWTx(0xc000156000, 0x0, 0x0, 0x0) /src/vendor/go.etcd.io/bbolt/db.go:604 +0x102 go.etcd.io/bbolt.(*DB).Begin(0xc000156000, 0x1ab1d01, 0xc0047a1cb0, 0xc0047a1cb0, 0xc0051f56f8) /src/vendor/go.etcd.io/bbolt/db.go:536 +0x38 go.etcd.io/bbolt.(*DB).Update(0xc000156000, 0xc004020ed8, 0x0, 0x0) /src/vendor/go.etcd.io/bbolt/db.go:672 +0x3c github.com/moby/buildkit/cache/metadata.(*Store).Clear(0xc000136050, 0xc00a01bd61, 0x19, 0x0, 0x0) /src/cache/metadata/metadata.go:135 +0x6e github.com/moby/buildkit/cache.(*cacheRecord).remove(0xc0063d5600, 0x12cb700, 0xc00003a098, 0xc0017e6500, 0x0, 0x0) /src/cache/refs.go:197 +0x9d github.com/moby/buildkit/cache.(*cacheRecord).finalize.func1(0xc006cd7800, 0xc00abb42f0) /src/cache/refs.go:309 +0x9a created by github.com/moby/buildkit/cache.(*cacheRecord).finalize /src/cache/refs.go:306 +0x113

goroutine 20835 [select]: github.com/moby/buildkit/util/progress.(*progressReader).Read.func1(0xc00571fe00, 0x12cb700, 0xc00003a098, 0xc002be55c0) /src/util/progress/progress.go:101 +0xb7 created by github.com/moby/buildkit/util/progress.(*progressReader).Read /src/util/progress/progress.go:100 +0xdd

goroutine 14776 [sync.Cond.Wait]: runtime.goparkunlock(...) /usr/local/go/src/runtime/proc.go:307 sync.runtime_notifyListWait(0xc000df8b10, 0xc000000383) /usr/local/go/src/runtime/sema.go:510 +0xf9 sync.(*Cond).Wait(0xc000df8b00) /usr/local/go/src/sync/cond.go:56 +0x9e github.com/moby/buildkit/util/progress.(*progressReader).Read(0xc001c06f60, 0x12cb6c0, 0xc000df86c0, 0x0, 0x0, 0x0, 0x0, 0x0) /src/util/progress/progress.go:127 +0x10b github.com/moby/buildkit/solver.(*Job).Status(0xc000222000, 0x12cb6c0, 0xc000df86c0, 0xc003b103c0, 0x0, 0x0) /src/solver/progress.go:25 +0xd14 github.com/moby/buildkit/solver/llbsolver.(*Solver).Status(0xc000152230, 0x12cb6c0, 0xc000df86c0, 0xc0018aa380, 0x19, 0xc003b103c0, 0xc00090d788, 0xc00090d790) /src/solver/llbsolver/solver.go:283 +0xbe github.com/moby/buildkit/control.(*Controller).Status.func1(0x8, 0x1187000) /src/control/control.go:299 +0x5e golang.org/x/sync/errgroup.(*Group).Go.func1(0xc001c06ba0, 0xc001c06bd0) /src/vendor/golang.org/x/sync/errgroup/errgroup.go:58 +0x57 created by golang.org/x/sync/errgroup.(*Group).Go /src/vendor/golang.org/x/sync/errgroup/errgroup.go:55 +0x66

goroutine 14728 [select, 6 minutes]: main.unaryInterceptor.func1.1(0x12cb6c0, 0xc000f6c000, 0x12cb6c0, 0xc00049e040, 0xc001718030) /src/cmd/buildkitd/main.go:519 +0xd8 created by main.unaryInterceptor.func1 /src/cmd/buildkitd/main.go:518 +0x108

goroutine 14784 [sync.Cond.Wait]: runtime.goparkunlock(...) /usr/local/go/src/runtime/proc.go:307 sync.runtime_notifyListWait(0xc000f6c2d0, 0xc00000039f) /usr/local/go/src/runtime/sema.go:510 +0xf9 sync.(*Cond).Wait(0xc000f6c2c0) /usr/local/go/src/sync/cond.go:56 +0x9e github.com/moby/buildkit/util/progress.(*progressReader).Read(0xc002be55c0, 0x12cb700, 0xc00003a098, 0x0, 0x0, 0x0, 0x0, 0x0) /src/util/progress/progress.go:127 +0x10b github.com/moby/buildkit/util/progress.(*MultiReader).handle(0xc002be5bc0, 0xc001c06ff0, 0xc002be5bc0) /src/util/progress/multireader.go:56 +0x118 created by github.com/moby/buildkit/util/progress.(*MultiReader).Reader /src/util/progress/multireader.go:47 +0x1ea

goroutine 14777 [chan receive]: github.com/moby/buildkit/control.(*Controller).Status.func2(0x8, 0x1187000) /src/control/control.go:304 +0x782 golang.org/x/sync/errgroup.(*Group).Go.func1(0xc001c06ba0, 0xc003004520) /src/vendor/golang.org/x/sync/errgroup/errgroup.go:58 +0x57 created by golang.org/x/sync/errgroup.(*Group).Go /src/vendor/golang.org/x/sync/errgroup/errgroup.go:55 +0x66

goroutine 14772 [chan receive, 6 minutes]: github.com/moby/buildkit/session.(*Manager).handleConn(0xc0004e4660, 0x12cb6c0, 0xc000df8580, 0x12dae60, 0xc002902400, 0xc001c06690, 0x0, 0x0) /src/session/manager.go:144 +0x491 github.com/moby/buildkit/session.(*Manager).HandleConn(0xc0004e4660, 0x12cb6c0, 0xc000df8340, 0x12dae60, 0xc002902400, 0xc001c06690, 0x0, 0x0) /src/session/manager.go:97 +0x75 github.com/moby/buildkit/control.(*Controller).Session(0xc000172bd0, 0x12dab60, 0xc002f36280, 0x0, 0x0) /src/control/control.go:360 +0x19a github.com/moby/buildkit/api/services/control._Control_Session_Handler(0x109aa40, 0xc000172bd0, 0x12d3780, 0xc003004300, 0x10ecae0, 0x1b21718) /src/api/services/control/control.pb.go:1391 +0xad github.com/grpc-ecosystem/grpc-opentracing/go/otgrpc.OpenTracingStreamServerInterceptor.func1(0x109aa40, 0xc000172bd0, 0x12d3c00, 0xc0012320c0, 0xc0030042a0, 0x11858b8, 0x0, 0x0) /src/vendor/github.com/grpc-ecosystem/grpc-opentracing/go/otgrpc/server.go:114 +0x365 google.golang.org/grpc.(*Server).processStreamingRPC(0xc000084a80, 0x12dd8c0, 0xc002bde000, 0xc001020200, 0xc0001e9470, 0x1ab0e00, 0x0, 0x0, 0x0) /src/vendor/google.golang.org/grpc/server.go:1183 +0x462 google.golang.org/grpc.(*Server).handleStream(0xc000084a80, 0x12dd8c0, 0xc002bde000, 0xc001020200, 0x0) /src/vendor/google.golang.org/grpc/server.go:1256 +0xd3f google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc0038f0000, 0xc000084a80, 0x12dd8c0, 0xc002bde000, 0xc001020200) /src/vendor/google.golang.org/grpc/server.go:691 +0x9f created by google.golang.org/grpc.(*Server).serveStreams.func1 /src/vendor/google.golang.org/grpc/server.go:689 +0xa1

goroutine 14782 [chan receive, 6 minutes]: github.com/moby/buildkit/util/progress.pipe.func1(0x12cb6c0, 0xc000df8ac0, 0xc001c06f60) /src/util/progress/progress.go:167 +0x48 created by github.com/moby/buildkit/util/progress.pipe /src/util/progress/progress.go:166 +0x131

goroutine 14774 [select, 6 minutes]: google.golang.org/grpc.(*ccBalancerWrapper).watcher(0xc000df8540) /src/vendor/google.golang.org/grpc/balancer_conn_wrappers.go:115 +0x110 created by google.golang.org/grpc.newCCBalancerWrapper /src/vendor/google.golang.org/grpc/balancer_conn_wrappers.go:106 +0x14f

goroutine 14775 [select]: github.com/moby/buildkit/session.monitorHealth(0x12cb6c0, 0xc000df8580, 0xc00019d340, 0xc002f36400) /src/session/grpc.go:69 +0x189 created by github.com/moby/buildkit/session.grpcClientConn /src/session/grpc.go:55 +0x270

goroutine 14729 [chan receive, 6 minutes]: github.com/moby/buildkit/util/progress.pipe.func1(0x12cb6c0, 0xc000f6c280, 0xc002be55c0) /src/util/progress/progress.go:167 +0x48 created by github.com/moby/buildkit/util/progress.pipe /src/util/progress/progress.go:166 +0x131

closed time in 4 hours

bpaquet

pull request comment moby/moby

logger/gelf: use compression level 0 by default

OK, some more testing showed that using gelf-compression-level: 0 is not making things better: dockerd CPU usage stays about the same, and the CPU profiles collected with pprof do not differ much from each other. Changing gelf-compression-type to none changes things dramatically (as described in earlier comments).

Apparently that happens because most of the CPU time is spent not on compression itself, but in the Go runtime (allocation and garbage collection). It looks like gzipping a lot of small data objects is inefficient.

Given the fact that the support for uncompressed input has only made its way to logstash very recently (see https://github.com/moby/moby/pull/40101#issuecomment-553047340), we can't change the default to be gelf-compression-type: none without the risk of breaking users.

Seems like the only option left is to recommend disabling gelf compression in docs.

kolyshkin

comment created time in 4 hours

issue comment moby/moby

Docker build should compute image digests

Has there been any progress on this?

phs

comment created time in 5 hours

started moby/moby

started time in 5 hours

pull request comment moby/moby

logger/gelf: use compression level 0 by default

Took a while to figure out how to test it. Here's how:

LOGSTASH_VERSION=7.3.1
docker run --rm -it -p 127.0.0.1:12201:12201/udp docker.elastic.co/logstash/logstash:${LOGSTASH_VERSION} bin/logstash -e 'input { gelf {} }'
# in another terminal window
docker run --rm --log-driver=gelf --log-opt=gelf-address=udp://127.0.0.1:12201 --log-opt=gelf-compression-type=gzip --log-opt=gelf-compression-level=0 alpine echo hahaha

Using the above test, found out that

  • gelf-compression-type=none is not working with logstash < 7.4.0 (7.3.1 was tested)
  • gelf-compression-type=none works with logstash >= 7.4.0 (7.4.2 was tested)
  • gelf-compression-type=gzip --log-opt=gelf-compression-level=0 works with all versions (7.3.1 and 7.4.2 were tested)
kolyshkin

comment created time in 5 hours

issue opened moby/buildkit

Docker build fails with volume mount error on Windows host when buildKit is enabled.

Description

The docker build command fails with a volume mount error when BuildKit is enabled.

Steps to reproduce the issue:

  1. Make sure to have Docker version >= 19.03
  2. Enable BuildKit by setting environment variable - DOCKER_BUILDKIT=1
  3. Create an ASP.NET project with docker support through Visual Studio (or download the sample repro project)
  4. Run command docker build -f "D:\source\repos\WebApplication4\WebApplication4\Dockerfile" --force-rm -t webapplication4:dev --target base "D:\source\repos\WebApplication4".
  5. Check the output of the build command.

Describe the results you received:

PS C:\Users\prsangli\source\repos\WebApplication4> docker build -f "C:\Users\prsangli\source\repos\WebApplication4\WebApplication4\Dockerfile" --force-rm -t webapplication4:dev --target base  "C:\Users\prsangli\source\repos\WebApplication4"                                                                                                                                                                                                            [+] Building 0.0s (2/2) FINISHED
 => [internal] load build definition from Dockerfile                                                                                                                                                                     0.0s
 => => transferring dockerfile: 32B                                                                                                                                                                                      0.0s
 => [internal] load .dockerignore                                                                                                                                                                                        0.0s
 => => transferring context: 35B                                                                                                                                                                                         0.0s
failed to solve with frontend dockerfile.v0: failed to read dockerfile: failed to mount C:\ProgramData\Docker\tmp\buildkit-mount414087051: [{Type:bind Source:C:\ProgramData\Docker\windowsfilter\lsbbs5t6dnt8eqc4ehyv38igy Options:[rbind ro]}]: invalid windows mount type: 'bind'

Describe the results you expected: The project builds successfully.

Additional information you deem important (e.g. issue happens only occasionally):

Output of docker version:

Client: Docker Engine - Community
 Version:           19.03.2
 API version:       1.40
 Go version:        go1.12.8
 Git commit:        6a30dfc
 Built:             Thu Aug 29 05:26:49 2019
 OS/Arch:           windows/amd64
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          19.03.2
  API version:      1.40 (minimum version 1.24)
  Go version:       go1.12.8
  Git commit:       6a30dfc
  Built:            Thu Aug 29 05:39:49 2019
  OS/Arch:          windows/amd64
  Experimental:     true

Output of docker info:

Client:
 Debug Mode: false
 Plugins:
  buildx: Build with BuildKit (Docker Inc., v0.3.0-5-g5b97415-tp-docker)
  app: Docker Application (Docker Inc., v0.8.0)

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 116
 Server Version: 19.03.2
 Storage Driver: windowsfilter (windows) lcow (linux)
  Windows:
  LCOW:
 Logging Driver: json-file
 Plugins:
  Volume: local
  Network: ics l2bridge l2tunnel nat null overlay transparent
  Log: awslogs etwlogs fluentd gcplogs gelf json-file local logentries splunk syslog
 Swarm: inactive
 Default Isolation: hyperv
 Kernel Version: 10.0 18362 (18362.1.amd64fre.19h1_release.190318-1202)
 Operating System: Windows 10 Enterprise Version 1903 (OS Build 18362.418)
 OSType: windows
 Architecture: x86_64
 CPUs: 12
 Total Memory: 31.85GiB
 Name: PRSANGLI-D1
 ID: N5KU:KGSR:E2K5:YXJ5:PXT4:CJ4D:PGLN:UWEE:7EPU:ONVV:VXDY:QT4M
 Docker Root Dir: C:\ProgramData\Docker
 Debug Mode: true
  File Descriptors: -1
  Goroutines: 72
  System Time: 2019-11-12T16:21:17.2515085-08:00
  EventsListeners: 3
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: true
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
 Product License: Community Engine

Additional environment details (AWS, VirtualBox, physical, etc.): Physical machine. Running docker commands through Containers Tools for Visual Studio.

created time in 5 hours

started moby/moby

started time in 6 hours

started moby/moby

started time in 6 hours

issue comment moby/moby

Hi @JohnStarich,

Is there an example like http://www.tresmundi.com/docker-logging-pitfalls-solving-multiline-stacktraces-and-docker-16-kb-message-split/ but uses partial_key and partial_value?

alexandru-ersenie

comment created time in 6 hours

issue comment moby/moby

Make the log lines splitting configurable

@alexandru-ersenie Is there an example like the one you provided (http://www.tresmundi.com/docker-logging-pitfalls-solving-multiline-stacktraces-and-docker-16-kb-message-split/) but that uses partial_key and partial_value for split logs?

crassirostris

comment created time in 6 hours

issue comment moby/moby

Make the log lines splitting configurable

Is there an example like http://www.tresmundi.com/docker-logging-pitfalls-solving-multiline-stacktraces-and-docker-16-kb-message-split/ but that uses partial_key and partial_value?

crassirostris

comment created time in 6 hours

issue comment moby/moby

Custom nat networks disappear after reboot on Windows Server 2019

Thanks @thaJeztah!

@javiertuya or any other Windows container users: Besides deploying multiple stacks, are there any other use-cases for needing multiple NAT networks considering you can customize the IP range of the default NAT network?
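For reference, a sketch of customizing the default NAT subnet via the Windows daemon.json, assuming the documented fixed-cidr option (the subnet value here is a placeholder); the docker service must be restarted afterwards:

{
  "fixed-cidr": "10.200.0.0/24"
}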

javiertuya

comment created time in 7 hours

issue comment moby/buildkit

Cache pushed from one machine can not be reused on another machine

And that is even if you remove your local cache to make sure remote cache is used?

Yes, I am removing everything to be sure I use only remote cache.

I have noticed as well that ci servers can reuse cache built on ci servers. But my laptop can not reuse that cache.

kindritskyiMax

comment created time in 7 hours

issue comment moby/buildkit

Cache pushed from one machine can not be reused on another machine

@kindritskyiMax And that is even if you remove your local cache to make sure the remote cache is used? In https://gitlab.com/kindritskiy.m/docker-cache-issue/-/jobs/348012533 I also see that the remote cache was used, so is it that cache exported in CI only works when importing to CI machines (with a fresh state)?

Or does it have something to do with exporting cache that has already been imported, like it happens in https://gitlab.com/kindritskiy.m/docker-cache-issue/-/jobs/348012533 ?

kindritskyiMax

comment created time in 7 hours

issue comment moby/buildkit

Cache pushed from one machine can not be reused on another machine

I will try your image when I am at my laptop. Will let you know. Thank you.

kindritskyiMax

comment created time in 7 hours

issue comment moby/buildkit

Cache pushed from one machine can not be reused on another machine

If the image was built and pushed from my machine and used only on my machine, then the cache works. But when using different machines, the cache does not work.

kindritskyiMax

comment created time in 7 hours

pull request comment moby/moby

Bump hcsshim to 6c7177eae8be632af2e15e44b62d69ab18389ddb

I can add you both to the maintainers channel, which would give a more "quiet" channel to discuss if needed

vikramhh

comment created time in 7 hours

issue comment moby/buildkit

Cache pushed from one machine can not be reused on another machine

@kindritskyiMax Yes, it should work if you switch machines. Did you try if my cache works for you? So are you saying that --cache-to works for you (even when switching machine) but does not work if you export from a specific machine?

kindritskyiMax

comment created time in 7 hours

issue comment moby/buildkit

Cache pushed from one machine can not be reused on another machine

Thank you, I now know that the tag and the cache can't be the same. I did not find any docs about that, so it's good to know it from you. I will try to fix my builds. But one question is still bothering me: if I build and push an image from my laptop to CI, then I can reuse all the cache. It really works. But what about building on one machine (let's say a CI server) and using that cache on another machine? Will it work?
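A minimal sketch of keeping the image tag and the cache reference separate (the registry and image names here are hypothetical):

docker buildx build \
  -t registry.example.com/app:latest \
  --cache-to type=registry,ref=registry.example.com/app:buildcache,mode=max \
  --cache-from type=registry,ref=registry.example.com/app:buildcache \
  --push .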

kindritskyiMax

comment created time in 8 hours

issue comment moby/buildkit

Cache pushed from one machine can not be reused on another machine

I can reproduce your case, but if I push my own image with docker buildx build --cache-to type=registry,ref=tonistiigi/build-cache-issue:latest,mode=max . then running docker buildx build --cache-from tonistiigi/build-cache-issue:latest . seems to work fine for the whole build.

kindritskyiMax

comment created time in 8 hours

issue comment moby/moby

Docker service update --image "could not accessed on a registry to record its digest"

I encounter this problem when I first switch auth credentials (docker login) and then update a stack using docker stack deploy. It seems that when updating an existing service, it does not use the new credentials, even with --with-registry-auth.

It only works when I remove the stack via docker stack rm and then run docker stack deploy (with --with-registry-auth) once again.

sylvainmouquet

comment created time in 9 hours

issue comment moby/moby

Docker push fails with EOF in private registry 2.0 behind NGINX Proxy

I still get the same result whether I use NGINX or go directly to the Docker registry, and whether or not those NGINX proxy settings are there: time="2019-11-12T20:35:04.529028691Z" level=warning msg="invalid remote IP address: "unknown""

andrefreitas

comment created time in 9 hours

started moby/moby

started time in 9 hours

pull request comment moby/moby

Bump hcsshim to 6c7177eae8be632af2e15e44b62d69ab18389ddb

@thaJeztah I'm on the Docker Community Slack. Is there a specific channel I should make sure I'm on?

@vikramhh It looks like the "Element not found" error can occur when running a container that exits immediately. There is a race condition that can cause Docker to still try to query for things like the stdio handles after the container has already exited and been cleaned up. I need to investigate further to determine what has changed to cause this.

vikramhh

comment created time in 10 hours

PR opened moby/moby

OpenRC: pass extra environment variables to configure http(s) proxy for daemon

<!-- Please make sure you've read and understood our contributing guidelines; https://github.com/moby/moby/blob/master/CONTRIBUTING.md

** Make sure all your commits include a signature generated with git commit -s **

For additional information on our contributing process, read our contributing guide https://docs.docker.com/opensource/code/

If this is a bug fix, make sure your description includes "fixes #xxxx", or "closes #xxxx"

Please provide the following information: -->

- What I did: Implemented https://github.com/moby/moby/issues/40201

- How I did it: Pass extra environment variables to the docker daemon. In particular:

  • HTTP_PROXY
  • HTTPS_PROXY
  • NO_PROXY

Values for the env vars can be defined in the OpenRC conf.d file as:

  • DOCKER_HTTP_PROXY
  • DOCKER_HTTPS_PROXY
  • DOCKER_NO_PROXY

- How to verify it: Install docker on an OpenRC-enabled system, e.g. on Gentoo:

# emerge docker

After installation, proxy servers can be configured in the /etc/conf.d/docker file (see the example after this description).

- Description for the changelog <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: --> OpenRC: added support for http(s) proxy configuration for the docker daemon
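A hypothetical /etc/conf.d/docker snippet using the variables listed above (the proxy addresses are placeholders):

# /etc/conf.d/docker
DOCKER_HTTP_PROXY="http://proxy.example.com:3128"
DOCKER_HTTPS_PROXY="http://proxy.example.com:3128"
DOCKER_NO_PROXY="localhost,127.0.0.1,.example.com"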

+7 -1

0 comment

2 changed files

pr created time in 10 hours

issue opened moby/moby

OpenRC: add support of proxy configuration for docker daemon

For OpenRC init systems it would be good to have the possibility to configure HTTP(S) proxy servers for the docker daemon. E.g. for systemd there is a documented approach: https://docs.docker.com/config/daemon/systemd/#httphttps-proxy.
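For comparison, the documented systemd approach linked above boils down to a drop-in file; a minimal sketch, with a hypothetical proxy address:

mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1"
EOF
systemctl daemon-reload
systemctl restart docker

Something equivalent for OpenRC would presumably live in /etc/conf.d/docker, as proposed above.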

created time in 10 hours

PR opened moby/buildkit

cache: fix possible concurrent maps write on parent release

19.03 version of https://github.com/moby/buildkit/pull/1256

@tiborvass @andrewhsu

Signed-off-by: Tonis Tiigi tonistiigi@gmail.com

+6 -6

0 comment

2 changed files

pr created time in 10 hours

PR opened moby/buildkit

cache: fix possible concurrent maps write on parent release

fixes https://github.com/moby/buildkit/issues/1250

Signed-off-by: Tonis Tiigi tonistiigi@gmail.com

@bpaquet @tiborvass

+6 -1

0 comment

2 changed files

pr created time in 10 hours

issue commentmoby/moby

user namespaces problems

I've stumbled upon this a few years later (docker 18.6.1). I am fine with user namespaces not being enabled by default. But why can't you combine user namespaces and host networking?

ghost

comment created time in 11 hours

issue commentmoby/moby

Empty mount points after docker-compose up

Ah, I cross-checked it; it was a configuration issue in my docker-compose.yml. Thank you for the clarification :)

cmaessen

comment created time in 11 hours

pull request commentmoby/moby

logger/gelf: use compression level 0 by default

So, the "Compression: none" only works since logstash-input-gelf 3.3.0 (https://rubygems.org/gems/logstash-input-gelf/versions/3.3.0), which is included in logstash 7.4.0, released Oct 1, 2019 (https://www.elastic.co/guide/en/logstash/7.4/logstash-7-4-0.html).

This means we can't disable compression now since I'm afraid many users still haven't updated logstash to >= 7.4.

Now we need to check whether setting "compression-level: 0" works with older logstash.
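For reference, this is the kind of setup that would need testing against an older logstash; a minimal sketch using the per-container gelf log options, with a hypothetical endpoint address:

docker run -d \
  --log-driver gelf \
  --log-opt gelf-address=udp://logstash.example.com:12201 \
  --log-opt gelf-compression-level=0 \
  alpine echo "hello gelf"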

kolyshkin

comment created time in 11 hours

push eventmoby/buildkit

Tonis Tiigi

commit sha 565deba34208f779e8c99432d7a73b86722b2c6d

blobs: allow alternative compare-with-parent diff Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

view details

Tõnis Tiigi

commit sha 2ceaa119e4fcc6c5529d4db813650996da334bef

Merge pull request #1248 from tonistiigi/parent-diff blobs: allow alternative compare-with-parent diff

view details

push time in 12 hours

PR merged moby/buildkit

blobs: allow alternative compare-with-parent diff

This adds an alternative differ method that can be used instead of the regular containerd Compare. We will use this in docker, where storage/diff is managed by the moby layerstore.

Signed-off-by: Tonis Tiigi tonistiigi@gmail.com

+40 -26

0 comment

1 changed file

tonistiigi

pr closed time in 12 hours

push eventmoby/buildkit

Tonis Tiigi

commit sha 044271e0adf40556077d27c789c5a6777eb5178f

exporter: add canonical and dangling image naming Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

view details

Tonis Tiigi

commit sha 6c70bacf8e9ddfd8c9088a7f3a48c26d2f8299ff

readme: document available options for image output Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

view details

Tõnis Tiigi

commit sha e486c1193f105b73113024b7d88cf8533233094d

Merge pull request #1247 from tonistiigi/dangling-naming exporter: add canonical and dangling image naming

view details

push time in 12 hours

PR merged moby/buildkit

exporter: add canonical and dangling image naming

New exporter attrs that allow naming dangling images (without name, only prefix) and canonical references that also contain the image digest.

Signed-off-by: Tonis Tiigi tonistiigi@gmail.com

+61 -23

2 comments

2 changed files

tonistiigi

pr closed time in 12 hours

push eventmoby/moby

Olli Janatuinen

commit sha 447a840254410df3b9345c652b601f08447b8467

Windows: Use system specific parallelism value on containers restart Signed-off-by: Olli Janatuinen <olli.janatuinen@gmail.com>

view details

Sebastiaan van Stijn

commit sha c83188248e9c310b766942eac50fc84c533b7abe

Merge pull request #39733 from olljanat/win-restore-no-parallelism Windows: do not use parallelism on container restart

view details

push time in 12 hours

PR merged moby/moby

Windows: do not use parallelism on container restart platform/windows process/cherry-pick status/2-code-review

- What I did #38301 set the container restart/restore task parallelism limit to 128*NumCPU, which is a good limit for Linux containers, especially when they are built correctly by following the one-process-per-container rule.

However, Windows containers are much heavier; for example, the Windows Server 2019 base image mcr.microsoft.com/windows/servercore:ltsc2019 itself includes ~20 system processes, which causes restoring to generate such a high load on the server that it cannot respond to anything else until the restore is completed.

- How I did it Disabled restore parallelism on the Windows platform.

- How to verify it I created 100 containers with restart policy:

for($i=1;$i -le 100;$i++) {
	docker run -d --restart always --network nat mcr.microsoft.com/windows/servercore:ltsc2019 ping -t 127.0.0.1
	start-sleep -seconds 10
}

On my 4 CPU test machine they take about 8 minutes to restart both with and without this change. However, there is a big difference in how well the server is able to respond to other commands.

Without this change, CPU load is constantly at 100% and even typing text into Notepad takes a long time: without_patch_restore

After this change the server still uses all the CPU it has, but now it still responds to user input. docker_restart_parallel_1

- A picture of a cute animal (not mandatory but encouraged) image

+5 -2

29 comments

1 changed file

olljanat

pr closed time in 12 hours

pull request commentmoby/moby

Windows: do not use parallelism on container restart

let's merge 👍

olljanat

comment created time in 12 hours

pull request commentmoby/moby

Fix misspellings of "successfully" in error msgs

Meh. I'm not invested enough to jump through additional hoops for this. Whoever feels motivated to take it from here: go ahead.

dnnr

comment created time in 12 hours

pull request commentmoby/moby

Windows: do not use parallelism on container restart

This LGTM (not a maintainer)

olljanat

comment created time in 13 hours

issue commentmoby/buildkit

Cache pushed from one machine can not be reused on another machine

Do you mean these two lines

You can't use the same ref on --cache-* and -t because they are different objects and are pushed separately. (The exception here would be inline cache, which would not push a separate object but append metadata to the image config.)
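In other words, keep the image tag and the cache ref as separate references; a minimal sketch, with a hypothetical registry path:

docker buildx build \
  -t registry.example.com/myapp:latest \
  --cache-to type=registry,ref=registry.example.com/myapp:buildcache,mode=max \
  --cache-from type=registry,ref=registry.example.com/myapp:buildcache \
  --push .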

kindritskyiMax

comment created time in 13 hours

push eventmoby/buildkit

Akihiro Suda

commit sha 14d5f06ed28d24b1c941e14dd28f5ca7ee0fee57

examples/kubernetes: use Parallel mode for StatefulSet Parallel mode relaxes the pod creation order constraint. https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#parallel-pod-management Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>

view details

Tõnis Tiigi

commit sha 5afa48a5a6bf2c72ae9d2e93efdc9ff0b6d8c42d

Merge pull request #1255 from AkihiroSuda/statefulset-parallel examples/kubernetes: use Parallel mode for StatefulSet

view details

push time in 13 hours

PR merged moby/buildkit

examples/kubernetes: use Parallel mode for StatefulSet

Parallel mode relaxes the pod creation order constraint.

https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#parallel-pod-management

Signed-off-by: Akihiro Suda akihiro.suda.cz@hco.ntt.co.jp

+3 -1

0 comment

3 changed files

AkihiroSuda

pr closed time in 13 hours

push eventmoby/buildkit

Pablo Chico de Guzman

commit sha 7a9f5e7696850de8a5817e2b01d6c2f28738a878

Used by Okteto Cloud Signed-off-by: Pablo Chico de Guzman <pchico83@gmail.com>

view details

Tõnis Tiigi

commit sha 1642b9ce917cb648ad37d19271c42f9910a6c458

Merge pull request #1253 from pchico83/okteto Used by Okteto Cloud

view details

push time in 13 hours

PR merged moby/buildkit

Used by Okteto Cloud
+1 -0

1 comment

1 changed file

pchico83

pr closed time in 13 hours

pull request commentmoby/moby

Fix misspellings of "successfully" in error msgs

Please submit PR to https://github.com/containerd/go-runc and then run https://github.com/LK4D4/vndr with updated vendor.conf

dnnr

comment created time in 14 hours

issue commentmoby/moby

Docker Network bypasses Firewall, no option to disable

Since I'm in here anyway: I'm actually running into a related issue now where this is a problem on platforms other than Linux too. Consider the following:

  • Project A has two containers, one for a database and one for an application.
  • Project B has two containers, one for a database and one for an application.
  • Both projects are isolated from each other (separate source repositories, configurations, etc)
  • Both projects are managed under Docker Compose
  • Both projects expose their Database Ports for local development purposes
  • Both projects use the same database server (postgres, mysql, etc)

Now suppose you want to run both projects locally - for instance to work on a shared library that both projects use so that you can easily pull it into their code for testing.

Under the current firewall interaction design - which in part leads to the issues above about exposing the container to the public network without the user's knowledge - the database containers of both projects cannot be run at the same time, since they will be in contention for the exposed database port. The same applies if you wanted to expose both of their application ports and they both used the same port for the application server - a common situation, since HTTP-based APIs and applications are extremely common now, especially in cloud-oriented applications.

Sure you can hack your way to setting both up under one DB container; but you're not isolating them per your project design, and have to be even more careful about configurations, etc.

A proper solution here is threefold:

  1. The containers would be bound only to their IPs, and their exposed ports would be bound to those IPs alone within their respective Docker Networks, not to the system's match-all IPs (0.0.0.0, ::).
  2. Docker also wouldn't publicly expose the route to the Docker Networks off the system by default. Docker Networking can be utilized to establish inter-network (docker network to docker network) connections as currently designed, and then also allow local host connections by default.
  3. Users would then be on the hook for adding the appropriate firewall rules to expose the container to the outside world when and if desired - for example, by port forwarding port 443 to port 443 of their container of choice (see the sketch after this list).
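A minimal sketch of such an explicit, operator-added rule, assuming a hypothetical container IP of 172.18.0.2 on a bridge network:

# forward host port 443 to the container, made explicit by the operator
iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to-destination 172.18.0.2:443
iptables -A FORWARD -p tcp -d 172.18.0.2 --dport 443 -j ACCEPT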

Again, this could be done gradually:

  • Release 1: Implement Steps 1 and 2; but add non-localhost routing too (temporarily) with a warning. Current behavior of first-come-first-serve for getting a port is maintained; and a warning about this behavior going away is issued.
  • Release 1+N: Drop the warning, and drop the non-localhost routing. Require Step 3 for users wanting Docker to expose the ports off the system, and make sure this is well documented.
BenjamenMeyer

comment created time in 14 hours

pull request commentmoby/moby

logger/gelf: use compression level 0 by default

You probably already have your answer via the private repository discussion, but I don't believe we can test this now, since we've already moved to a logstash version that supports gelf without compression and have compression fully turned off.

kolyshkin

comment created time in 14 hours

fork oivindoh/hyperkit

A toolkit for embedding hypervisor capabilities in your application

fork in 15 hours

issue commentmoby/moby

Docker Network bypasses Firewall, no option to disable

@fredjohnston you can continue using UFW if you want. Profiles are stored in /etc/ufw. The issue here is that Docker won't show up because there's no app profile listed in /etc/ufw/applications.d (the same goes for any other firewall tool and its configuration).

Disabling iptables in Docker means you won't have much of any networking in Docker: containers will just have IP addresses and won't be able to talk to each other. The DOCKER-USER chain is a hack to give you some control, but it really doesn't solve the issue - which is about not making Docker containers public on the network by default, but keeping them locked to the IP address of the container.
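For what it's worth, a minimal sketch of the kind of rule that chain accepts, assuming eth0 is the external interface (hypothetical):

# block new inbound connections to published container ports arriving on eth0;
# replies to connections the containers initiate are still allowed (ESTABLISHED)
iptables -I DOCKER-USER -i eth0 -m conntrack --ctstate NEW -j DROP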

For the moment, I do recommend you continue using whatever firewall tool you're most comfortable with (ufw, etc.), but be aware that Docker containers will be public on your network.

BenjamenMeyer

comment created time in 16 hours

startedmoby/moby

started time in 16 hours

issue commentmoby/moby

Support COPY -a switch in Dockerfiles

I'm quite surprised by this. What is the status currently? With Docker 19.03 I don't see any difference between COPY in a build and docker cp, which behaves the same as docker cp -a. According to the documentation, docker cp should take the ownership of the destination, not the source. But that's not what it's doing, unlike COPY. More about this here: https://github.com/moby/moby/issues/34096

nponeccop

comment created time in 16 hours

fork AdamPioneer/moby

Moby Project - a collaborative project for the container ecosystem to assemble container-based systems

https://mobyproject.org/

fork in 17 hours

issue commentmoby/moby

docker cp behavior changed

2 years later, I have the same problem with Docker 19.03. When copying from the host to a container with docker cp, the uid/gid are not those of the destination (0/0, i.e. root/root) but those of the source user (typically 1000/1000). The steps to reproduce are given above.

The behavior is not consistent with the documentation:

The cp command behaves like the Unix cp -a command in that directories are copied recursively with permissions preserved if possible. Ownership is set to the user and primary group at the destination. For example, files copied to a container are created with UID:GID of the root user. Files copied to the local machine are created with the UID:GID of the user which invoked the docker cp command.

Currently there seems to be no difference between this and the -a option:

However, if you specify the -a option, docker cp sets the ownership to the user and primary group at the source.

Note that this works fine when using COPY in docker build. The two commands are not consistent.
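A minimal sketch of the inconsistency being described, assuming a hypothetical running container named web and a host user with uid/gid 1000:1000:

# on the host, as the regular user (uid/gid 1000:1000)
echo hello > test.txt
docker cp test.txt web:/tmp/test.txt
# the documentation says the file should now be owned by root:root inside the
# container; the report above is that it keeps the source ownership (1000:1000)
docker exec web ls -ln /tmp/test.txt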

chipironcin

comment created time in 18 hours

issue commentmoby/moby

restartmanger wait error: OCI runtime create failed

Same here. It happened after an OOM kill, followed by a couple of hours of restarting due to the process in the container crashing.

docker info:

Containers: 38
 Running: 30
 Paused: 0
 Stopped: 8
Images: 254
Server Version: 18.03.1-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 773c489c9c1b21a6d78b5c538cd395416ec50f88
runc version: 4fc53a81fb7c994640722ac585fa9ca548971871
init version: 949e6fa
Security Options:
 seccomp
  Profile: default
Kernel Version: 3.10.0-957.21.3.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.629GiB
Name: -
ID: YI7K:Z3VI:LNHD:SOVN:KTH6:OAR3:YSST:NCHW:B57N:WXG4:AZLO:VDW3
Docker Root Dir: /opt/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false


docker version:

Client:
 Version:      18.03.1-ce
 API version:  1.37
 Go version:   go1.9.5
 Git commit:   9ee9f40
 Built:        Thu Apr 26 07:20:16 2018
 OS/Arch:      linux/amd64
 Experimental: false
 Orchestrator: swarm

Server:
 Engine:
  Version:      18.03.1-ce
  API version:  1.37 (minimum version 1.12)
  Go version:   go1.9.5
  Git commit:   9ee9f40
  Built:        Thu Apr 26 07:23:58 2018
  OS/Arch:      linux/amd64
  Experimental: false


uname -a:

Linux - 3.10.0-957.21.3.el7.x86_64 #1 SMP Tue Jun 18 16:35:19 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux


jstoja

comment created time in 19 hours

startedmoby/buildkit

started time in 19 hours

PR opened moby/buildkit

examples/kubernetes: use Parallel mode for StatefulSet

Parallel mode relaxes the pod creation order constraint.

https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#parallel-pod-management

Signed-off-by: Akihiro Suda akihiro.suda.cz@hco.ntt.co.jp

+3 -1

0 comment

3 changed files

pr created time in 19 hours

issue openedmoby/buildkit

buildctl should support showing on-going builds

created time in 20 hours

startedmoby/moby

started time in 20 hours

pull request commentmoby/buildkit

Used by Okteto Cloud

<!-- AUTOMATED:POULE:DCO-EXPLANATION --> Please sign your commits following these rules: https://github.com/moby/moby/blob/master/CONTRIBUTING.md#sign-your-work The easiest way to do this is to amend the last commit:

$ git clone -b "okteto" git@github.com:pchico83/buildkit.git somewhere
$ cd somewhere
$ git commit --amend -s --no-edit
$ git push -f

Amending updates the existing PR. You DO NOT need to open a new one.

pchico83

comment created time in 20 hours

PR opened moby/buildkit

Used by Okteto Cloud
+1 -0

0 comment

1 changed file

pr created time in 20 hours

issue commentmoby/buildkit

fatal error: concurrent map writes

Another one: https://pastebin.com/Y8YiC5F6

bpaquet

comment created time in 21 hours

startedmoby/moby

started time in 21 hours

startedmoby/moby

started time in 21 hours

issue commentmoby/buildkit

Cache pushed from one machine can not be reused on another machine

Hi, thank you for the quick response.

I've set up a test project to reproduce the cache issue.

https://gitlab.com/kindritskiy.m/docker-cache-issue

It is not a multi-stage build. It is GitLab CI (not the GitHub one).

Regarding "you are using the same reference for cache and your image" - I am not sure I understand what you mean. Do you mean these two lines

--cache-from=type=registry,ref=registry.gitlab.com/kindritskiy.m/docker-cache-issue:latest \
      --cache-to=type=registry,ref=registry.gitlab.com/kindritskiy.m/docker-cache-issue:latest,mode=max \

or this

-t registry.gitlab.com/kindritskiy.m/docker-cache-issue:latest
  1. The job which builds and pushes the image - https://gitlab.com/kindritskiy.m/docker-cache-issue/-/jobs/348012533
  2. I am trying to build an image locally with the cache
docker buildx build -t my-local-image -f Dockerfile --cache-from=type=registry,ref=registry,ref=registry.gitlab.com/kindritskiy.m/docker-cache-issue:latest --load .
[+] Building 28.2s (11/11) FINISHED                                                                     
 => importing cache manifest from registry.gitlab.com/kindritskiy.m/docker-cache-issue:latest      3.4s
 => [internal] load .dockerignore                                                                  0.1s
 => => transferring context: 2B                                                                    0.0s
 => [internal] load build definition from Dockerfile                                               0.1s
 => => transferring dockerfile: 127B                                                               0.0s
 => [internal] load metadata for docker.io/library/node:12-alpine                                  1.7s
 => [internal] load build context                                                                  0.1s
 => => transferring context: 462B                                                                  0.0s
 => [1/5] FROM docker.io/library/node:12-alpine@sha256:50ce309a948aaad30ee876fb07ccf35b62833b27de  0.0s
 => CACHED [2/5] WORKDIR /app                                                                     20.8s
 => => pulling sha256:e7c96db7181be991f19a9fb6975cdbbd73c65f4a2681348e63a141a2192a5f10             2.7s
 => => pulling sha256:7b373bfb6ac5ffc0602bd1033666f9138fc137d68f67c4d4726cd6ac0c6bc9ac            18.9s
 => => pulling sha256:fd38342e03373b2ef6a3c7f354adf8d165454628b1de8c8329e70b3ef6325710             1.3s
 => => pulling sha256:5269cc77d334b68485968973d8e40df41f5d712a0cc66580bf9d925e5da6b923             1.4s
 => => pulling sha256:eafd5e4882da62f3d333317fc4fa6322755c84c6cb978dc3b962180455760a98             0.6s
 => [3/5] COPY package.json .                                                                      0.9s
 => [4/5] COPY requirements.txt .                                                                  0.1s
 => [5/5] RUN npm i                                                                                2.8s
 => exporting to image                                                                             0.2s
 => => exporting layers                                                                            0.1s
 => => writing image sha256:32fbc089de4f0b350a0362b5866345f019032d2b69e8e29092f9ce1639f02c34       0.0s 

On CI I use https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.linux-amd64 binary. On my laptop I already have a bundled version of buildx - same on CI

docker buildx version
github.com/docker/buildx v0.3.1 6db68d029599c6710a32aa7adcba8e5a344795a7 
kindritskyiMax

comment created time in 21 hours

fork lv4hy/moby

Moby Project - a collaborative project for the container ecosystem to assemble container-based systems

https://mobyproject.org/

fork in 21 hours

startedmoby/moby

started time in 21 hours

issue commentmoby/buildkit

Rootless mode doesn't work on Google Container-Optimized OS kernel (CONFIG_SECURITY_CHROMIUMOS_NO_UNPRIVILEGED_UNSAFE_MOUNTS?)

I just tried with the COS nodes of 1.15.4-gke.18 and the regression seems to still be there :(

AkihiroSuda

comment created time in 21 hours

PR opened moby/moby

Fix misspellings of "successfully" in error msgs

<!-- Please make sure you've read and understood our contributing guidelines; https://github.com/moby/moby/blob/master/CONTRIBUTING.md

** Make sure all your commits include a signature generated with git commit -s **

For additional information on our contributing process, read our contributing guide https://docs.docker.com/opensource/code/

If this is a bug fix, make sure your description includes "fixes #xxxx", or "closes #xxxx"

Please provide the following information: -->

- What I did

- How I did it

- How to verify it

- Description for the changelog <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: -->

- A picture of a cute animal (not mandatory but encouraged)

+6 -6

0 comment

1 changed file

pr created time in 21 hours

fork dnnr/moby

Moby Project - a collaborative project for the container ecosystem to assemble container-based systems

https://mobyproject.org/

fork in 21 hours

fork rageshkrishna/moby

Moby Project - a collaborative project for the container ecosystem to assemble container-based systems

https://mobyproject.org/

fork in 21 hours

startedmoby/moby

started time in a day

fork Michaelmichaeljensen/moby

Moby Project - a collaborative project for the container ecosystem to assemble container-based systems

https://mobyproject.org/

fork in a day

fork binbin-bin/moby

Moby Project - a collaborative project for the container ecosystem to assemble container-based systems

https://mobyproject.org/

fork in a day

issue commentmoby/moby

.Net ASP.Net Webapp in Container loose Primary Domain Trust randomly after some days runtime

@daBONDi Nope. We have interlock in front and it fully supports Kerberos according to their documentation.

daBONDi

comment created time in a day

startedmoby/moby

started time in a day

pull request commentmoby/moby

Bump hcsshim to 6c7177eae8be632af2e15e44b62d69ab18389ddb

The above issue was a false positive and we were able to move on and get a repro of the original issue [0x490]. Issue was reproed with tracing enabled - @kevpar is taking a look.

vikramhh

comment created time in a day

pull request commentmoby/moby

Windows: do not use parallelism on container restart

ping @thaJeztah @tonistiigi @jterry75 PTAL

olljanat

comment created time in a day

more