Moby moby https://mobyproject.org/ An open framework to assemble specialized container systems without reinventing the wheel.

moby/moby 57433

Moby Project - a collaborative project for the container ecosystem to assemble container-based systems

moby/buildkit 2698

concurrent, cache-efficient, and Dockerfile-agnostic builder toolkit

moby/hyperkit 2607

A toolkit for embedding hypervisor capabilities in your application

moby/libnetwork 1678

networking for containers

moby/datakit 920

Connect processes into powerful data pipelines with a simple git-like filesystem interface

moby/vpnkit 739

A toolkit for embedding VPN capabilities in your application

moby/tool 72

Temporary repository for the moby assembly tool used by the Moby project

moby/libentitlement 61

Entitlements library for high level control of container permissions

moby/ipvs 33

IPVS networking for containers (package derived from moby/libnetwork)

issue closed moby/moby

Difference in behaviour between whitespace in front of comments and whitespace in front of empty escaped newlines

Description: docker build handles whitespace in front of comments differently from whitespace in front of escaped newlines. Should they be treated the same, for consistency?

Steps to reproduce the issue:

  1. Create the Dockerfile below.
FROM alpine
RUN echo a\
# comment
bc
RUN echo a\
    # comment
bc
RUN echo a\
\
bc
RUN echo a\
    \
bc
  2. Build it.
$ docker build --no-cache .
Sending build context to Docker daemon  38.15MB
Step 1/5 : FROM alpine
 ---> a187dde48cd2
Step 2/5 : RUN echo abc
 ---> Running in 79d27c4ca35a
abc
Removing intermediate container 79d27c4ca35a
 ---> ea38504cb466
Step 3/5 : RUN echo abc
 ---> Running in 98bbef8a9d93
abc
Removing intermediate container 98bbef8a9d93
 ---> b5fbc315217d
Step 4/5 : RUN echo abc
 ---> Running in 28f19ca7f662
abc
Removing intermediate container 28f19ca7f662
 ---> 61cc66e30b11
Step 5/5 : RUN echo a    bc
 ---> Running in 9a5dab9f881e
a bc
Removing intermediate container 9a5dab9f881e
 ---> 2c7e72509911
Successfully built 2c7e72509911
  3. Notice how the first three print abc but the fourth one prints a bc.
  4. The issue does not appear to be restricted to shell expansions caused by RUN and so on. The same behaviour occurs for EXPOSE.
FROM alpine
EXPOSE 800\
# comment
1
EXPOSE 800\
    # comment
2
EXPOSE 800\
\
3
EXPOSE 800\
    \
4
  5. Building it displays the same issue, with the image exposing both ports 800 and 4.
$ docker build --no-cache .
Sending build context to Docker daemon  38.15MB
Step 1/5 : FROM alpine
 ---> a187dde48cd2
Step 2/5 : EXPOSE 8001
 ---> Running in 5cd4f43fad2b
Removing intermediate container 5cd4f43fad2b
 ---> 45591e5432f7
Step 3/5 : EXPOSE 8002
 ---> Running in 451a5da6eae1
Removing intermediate container 451a5da6eae1
 ---> abcbe582bd6a
Step 4/5 : EXPOSE 8003
 ---> Running in 1148717c152e
Removing intermediate container 1148717c152e
 ---> 88c4e85bb4d4
Step 5/5 : EXPOSE 800    4
 ---> Running in d3b52318efde
Removing intermediate container d3b52318efde
 ---> 32f8bb075f06
Successfully built 32f8bb075f06
  6. You can verify this with docker inspect.
$ docker inspect --format='{{json .Config.ExposedPorts }}' 32f8bb075f06
{
  "4/tcp": {},
  "800/tcp": {},
  "8001/tcp": {},
  "8002/tcp": {},
  "8003/tcp": {}
}

Describe the results you received: I noticed that whitespace in front of comments was ignored, but whitespace in front of an escape character was not.

Describe the results you expected: I would have expected them to behave the same. Either they both respect the whitespace or they both ignore it.
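
For illustration, here is a minimal Go sketch of a simplified model of this behaviour (it is not the actual moby/moby Dockerfile parser): a full-line comment is discarded together with its indentation, while an "empty" escaped line only loses the trailing backslash, so its indentation survives into the joined instruction.

package main

import (
	"fmt"
	"strings"
)

// joinContinuations is a simplified model (not the actual moby/moby parser)
// of how escaped newlines and full-line comments interact. Full-line comments
// are dropped entirely, including their leading whitespace, whereas an
// "empty" escaped line keeps whatever indentation it carries.
func joinContinuations(lines []string) string {
	var out strings.Builder
	for _, line := range lines {
		trimmed := strings.TrimLeft(line, " \t")
		if strings.HasPrefix(trimmed, "#") {
			continue // comment lines vanish, indentation and all
		}
		// The escaped newline removes only the "\" and the newline; the
		// leading whitespace of the continuation line is left untouched.
		out.WriteString(strings.TrimSuffix(line, "\\"))
	}
	return out.String()
}

func main() {
	// Mirrors steps 2 and 4 of the report: "    # comment" vs. "    \".
	fmt.Printf("%q\n", joinContinuations([]string{`echo a\`, `    # comment`, `bc`})) // "echo abc"
	fmt.Printf("%q\n", joinContinuations([]string{`echo a\`, `    \`, `bc`}))         // "echo a    bc"
}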

Output of docker version:

$ docker version
Client: Docker Engine - Community
 Version:           19.03.4
 API version:       1.40
 Go version:        go1.12.10
 Git commit:        9013bf583a
 Built:             Fri Oct 18 15:49:05 2019
 OS/Arch:           linux/amd64
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          19.03.4
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.10
  Git commit:       9013bf583a
  Built:            Fri Oct 18 15:55:51 2019
  OS/Arch:          linux/amd64
  Experimental:     true
 containerd:
  Version:          v1.2.10
  GitCommit:        b34a5c8af56e510852c35414db4c1f4fa6172339
 runc:
  Version:          1.0.0-rc8+dev
  GitCommit:        3e425f80a8c931f88e6d94a8c831b9d5aa481657
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

Output of docker info:

$ docker info
Client:
 Debug Mode: false
 Plugins:
  app: Docker Application (Docker Inc., v0.8.0)

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 26
 Server Version: 19.03.4
 Storage Driver: overlay2
  Backing Filesystem: xfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: b34a5c8af56e510852c35414db4c1f4fa6172339
 runc version: 3e425f80a8c931f88e6d94a8c831b9d5aa481657
 init version: fec3683
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 4.4.0-174-generic
 Operating System: Alpine Linux v3.10 (containerized)
 OSType: linux
 Architecture: x86_64
 CPUs: 8
 Total Memory: 31.4GiB
 Name: node1
 ID: AGBI:2YO2:52ND:VWNP:OD56:LL2U:GLGG:QACQ:ABNV:3NOI:SRUI:6Q7A
 Docker Root Dir: /var/lib/docker
 Debug Mode: true
  File Descriptors: 23
  Goroutines: 43
  System Time: 2020-04-06T12:07:58.957983198Z
  EventsListeners: 0
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: true
 Insecure Registries:
  127.0.0.1
  127.0.0.0/8
 Live Restore Enabled: false
 Product License: Community Engine

WARNING: API is accessible on http://0.0.0.0:2375 without encryption.
         Access to the remote API is equivalent to root access on the host. Refer
         to the 'Docker daemon attack surface' section in the documentation for
         more information: https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface
WARNING: No swap limit support
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled

Additional environment details (AWS, VirtualBox, physical, etc.): Tested online with PWD.

closed time in 7 minutes

rcjsuen

issue comment moby/moby

Difference in behaviour between whitespace in front of comments and whitespace in front of empty escaped newlines

Let me close this one as https://github.com/docker/cli/pull/2617 was merged (I'll backport it to be published on the docs soon)

rcjsuen

comment created time in 7 minutes

fork josef-mende-diva-e/moby

Moby Project - a collaborative project for the container ecosystem to assemble container-based systems

https://mobyproject.org/

fork in 9 minutes

issue comment moby/hyperkit

Memory Leaks in Docker for Mac stemming from Hyperkit

There is no software solution to this, only switching to Linux. This ticket is just being ignored: Docker for Mac says "ask Hyperkit", and Hyperkit says "ask Docker for Mac". As to what @rn said when closing: the guest OS uses some memory to operate, and even if you don't run any payload there are some OS tasks. Just by running those small OS tasks, memory is consumed and not released back. It's not a memory leak in the traditional sense; it's just inefficiency.

iMerica

comment created time in 13 minutes

issue closed moby/buildkit

buildkit copy step is slow

I switched to BuildKit to use --cache-from and BUILDKIT_INLINE_CACHE.

In my Dockerfile I have a COPY step, which copies 19 files into the Docker image. Below is the output, Jenkins timestamps included.

In the output below, you can see that this step takes more than 6 minutes. Most of the time seems to be spent after the last pull. I was assuming that the copy step just needs to download the 19 hashes and compare them to the local ones. I am unsure why it takes 6 minutes to figure out that the Docker layer can be reused from the (remote) cache.

12:00:07 #16 [10/10] COPY ./python /usr/src/app/ 12:00:07 #16 pulling sha256:ab1fc7e4bf9195e554669fafa47f69befe22053d7100f5f7002cb9254a36f37c 12:00:07 #16 pulling sha256:35fba333ff5209042e8925a73f3cbabf00ba725257bdba38ec3b415e5d7e6cc7 12:00:07 #16 pulling sha256:f0cb1fa13079687d5118e5cd7e3ce3c09dc483fa44f0577eca349de8d76e4e8c 12:00:09 #16 pulling sha256:f0cb1fa13079687d5118e5cd7e3ce3c09dc483fa44f0577eca349de8d76e4e8c 1.9s done 12:00:09 #16 pulling sha256:3d1dd648b5ade2bbdfe77fa94424b0314100b58fb5f6a98486538c2126e08e2f 12:00:09 #16 pulling sha256:35fba333ff5209042e8925a73f3cbabf00ba725257bdba38ec3b415e5d7e6cc7 2.4s done 12:00:09 #16 pulling sha256:49f7a0920ce12498040091232541e5f75306c09564f092b55c539d66b93aee7c 12:00:17 #16 pulling sha256:ab1fc7e4bf9195e554669fafa47f69befe22053d7100f5f7002cb9254a36f37c 9.3s done 12:00:17 #16 pulling sha256:e536bb402aee8d3f96bc92fd5663a81b6b5c4855b87097b9ca1dc007759e75d5 12:00:19 #16 pulling sha256:3d1dd648b5ade2bbdfe77fa94424b0314100b58fb5f6a98486538c2126e08e2f 10.3s done 12:00:19 #16 pulling sha256:9e6e95e225620c1046b390e9b21ec0fd1389ad9b7e04e1e761782d48ed471715 12:00:19 #16 pulling sha256:e536bb402aee8d3f96bc92fd5663a81b6b5c4855b87097b9ca1dc007759e75d5 3.3s done 12:00:19 #16 pulling sha256:4978adbd742fb91e43a4a2838dd46ec1ba24fbfb97dfcb6faca11412790b819f 12:00:20 #16 pulling sha256:4978adbd742fb91e43a4a2838dd46ec1ba24fbfb97dfcb6faca11412790b819f 0.6s done 12:00:20 #16 pulling sha256:8b6531c990eb229b8d38bd029bc45e848e3471fa26e52017a7d866b19f90b2f8 12:00:21 #16 pulling sha256:8b6531c990eb229b8d38bd029bc45e848e3471fa26e52017a7d866b19f90b2f8 1.0s done 12:00:21 #16 pulling sha256:e6b98b5af54a9a7c53b192b942b7593419c9a41afa7c82371122a985f7df9fd0 12:00:21 #16 pulling sha256:e6b98b5af54a9a7c53b192b942b7593419c9a41afa7c82371122a985f7df9fd0 0.3s done 12:00:21 #16 pulling sha256:01dae80b7e497a0327aed2572948c31e6b7b4185271829bf4df66d8a0ed89ab5 12:00:23 #16 pulling sha256:01dae80b7e497a0327aed2572948c31e6b7b4185271829bf4df66d8a0ed89ab5 1.6s done 12:00:23 #16 pulling sha256:13cce573f5c73e06fc4db0ea54d924dadb0aa8eea8babd9c50b74ff6d96c950e 12:00:23 #16 pulling sha256:13cce573f5c73e06fc4db0ea54d924dadb0aa8eea8babd9c50b74ff6d96c950e 0.3s done 12:00:23 #16 pulling sha256:9031aea3a7bcd92096c809101a64e9bc314ac6453ba9afd9ddfba450bc23c2bc 12:00:23 #16 pulling sha256:9e6e95e225620c1046b390e9b21ec0fd1389ad9b7e04e1e761782d48ed471715 4.2s done 12:00:23 #16 pulling sha256:93538d3c0bc56ebd6db24258910eb2b470189916d2383a4a84d9e5f6a24aaf47 12:00:27 #16 pulling sha256:93538d3c0bc56ebd6db24258910eb2b470189916d2383a4a84d9e5f6a24aaf47 4.0s done 12:00:27 #16 pulling sha256:03ee30128cd864b890f20ae7808c900cb4e51fd92c7b4be5591983741584bd07 12:00:28 #16 pulling sha256:03ee30128cd864b890f20ae7808c900cb4e51fd92c7b4be5591983741584bd07 0.6s done 12:00:28 #16 pulling sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1 12:00:29 #16 pulling sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1 0.8s done 12:00:29 #16 pulling sha256:9f7283bf8ec4b2deb2c63ba6c7f0255c10eb2a87f83460af501a50b6b3108ed9 12:00:30 #16 pulling sha256:9f7283bf8ec4b2deb2c63ba6c7f0255c10eb2a87f83460af501a50b6b3108ed9 1.3s done 12:00:30 #16 pulling sha256:19504cba801a8474af84765f0607efc4036eca509c3e48cdabd5ef6186331a6e 12:00:30 #16 pulling sha256:9031aea3a7bcd92096c809101a64e9bc314ac6453ba9afd9ddfba450bc23c2bc 6.9s done 12:00:30 #16 pulling sha256:4f7e9e5911f12916c0921ecde19093e1a7e0110756ef12c5f65874b8374846b6 12:00:30 #16 pulling 
sha256:4f7e9e5911f12916c0921ecde19093e1a7e0110756ef12c5f65874b8374846b6 0.4s done 12:00:30 #16 pulling sha256:6bf6cb1d498547a47edae50fb32b738b150cd16a44b47ff005265b49d5ee3827 12:00:31 #16 pulling sha256:19504cba801a8474af84765f0607efc4036eca509c3e48cdabd5ef6186331a6e 0.6s done 12:00:31 #16 pulling sha256:7e140ea1ecf8b95f07697e4de93ef8d4d8a83c076b98ea6cf4b999b2237cee5e 12:00:31 #16 pulling sha256:6bf6cb1d498547a47edae50fb32b738b150cd16a44b47ff005265b49d5ee3827 0.6s done 12:00:31 #16 pulling sha256:9adf5ee133d1d49cc8e6333d614b237208d8d7ea01551c6fc3111469e729b0ab 12:00:31 #16 pulling sha256:9adf5ee133d1d49cc8e6333d614b237208d8d7ea01551c6fc3111469e729b0ab 0.4s done 12:00:31 #16 pulling sha256:c9d548498377fdd24a5e138a6b424595bd43a219ce509cf4f7798b9ebea2b556 12:00:31 #16 pulling sha256:7e140ea1ecf8b95f07697e4de93ef8d4d8a83c076b98ea6cf4b999b2237cee5e 1.0s done 12:00:31 #16 pulling sha256:aceec53d66aa79e831c55f907a57332a88ca34fa38dd2c9b82e16d88c30f0275 12:00:32 #16 pulling sha256:c9d548498377fdd24a5e138a6b424595bd43a219ce509cf4f7798b9ebea2b556 0.3s done 12:00:32 #16 pulling sha256:aceec53d66aa79e831c55f907a57332a88ca34fa38dd2c9b82e16d88c30f0275 0.4s done 12:00:42 #16 pulling sha256:49f7a0920ce12498040091232541e5f75306c09564f092b55c539d66b93aee7c 32.3s done 12:06:18 #16 CACHED

I'm building on Docker 19.03. The build runs on a Jenkins slave with a dind daemon. The build command looks like this: sh "DOCKER_BUILDKIT=1 docker build --pull --build-arg BUILDKIT_INLINE_CACHE=1 --cache-from redacted:latest -t redacted:latest ."

When building again on the same Jenkins slave, it says CACHED immediately (local cache, because of the rebuild on the same slave). When building on a different slave: again 6 minutes.

Wondering if someone has a clue about this.

closed time in 26 minutes

Timvissers

issue comment moby/moby

Difference in behaviour between whitespace in front of comments and whitespace in front of empty escaped newlines

I opened docker/cli#2617 to improve the documentation; feel free to comment on that PR if you have suggestions 👍

It looks okay to me. Thank you for fleshing this out in the documentation, @thaJeztah!

rcjsuen

comment created time in 35 minutes

started moby/moby

started time in an hour

issue comment moby/buildkit

buildkit copy step is slow

Update: I now think this is a memory limit problem causing the performance problems. It seems that after increasing our memory limit (the Jenkins agents run as Kubernetes pods), the performance problem is gone.

Timvissers

comment created time in an hour

issue comment moby/moby

`docker commit` causes layer to be cached in `docker build` when it was not successful

The --rm option is enabled by default, so when docker build is run and a step completes successfully, the "intermediate" container is committed to an image, and the container is then removed. If a step fails, the build is aborted and the failing container is kept to allow the user to investigate why it failed (unless --force-rm is used).

The problem described here is that, despite that step failing, the user has manually committed that container to an image (effectively "ignoring" that it's a faulty step, and marking it as an "ok" image). If the classic builder then finds a cache hit (an image that satisfies the metadata), no container is run for that step.
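
As a rough illustration of that cache-probe idea (a hedged sketch only, not the actual moby/moby implementation), a classic-builder-style cache hit can be thought of as "some local image whose parent is the current image and whose recorded instruction matches the current step"; a manually committed container produces exactly such an image, even though the RUN that created it failed:

package builder

// image is a simplified stand-in for an image's metadata; the field names
// here are illustrative, not moby's actual types.
type image struct {
	ID     string
	Parent string
	Cmd    string // the instruction recorded in the image's config
}

// probeCache returns the ID of a local image that satisfies the step's
// metadata. If it finds one, the builder reuses it and no container is run,
// regardless of whether the recorded command originally succeeded.
func probeCache(local []image, parentID, instruction string) (string, bool) {
	for _, img := range local {
		if img.Parent == parentID && img.Cmd == instruction {
			return img.ID, true
		}
	}
	return "", false
}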

asottile

comment created time in 2 hours

started moby/hyperkit

started time in 3 hours

issue comment moby/moby

`docker commit` causes layer to be cached in `docker build` when it was not successful

As far as I know, another container will run the next time docker build runs. May I ask what the tagged image (intermediate container) is used for here, or why it shouldn't be removed? Originally I meant removing the intermediate container by default, like what the --rm option does.

asottile

comment created time in 3 hours

issue comment moby/moby

`docker commit` causes layer to be cached in `docker build` when it was not successful

The "intermediate" container in this case is a tagged image (it was manually tagged, so docker build should not remove it).

To prevent the "failing" container from being preserved when the original build failed, use --force-rm when building;

docker build -t foo --force-rm -<<EOF
FROM ubuntu:bionic
RUN echo hi > f
RUN echo hello > g && exit 1
EOF

# Removing intermediate container fcc41b449f53
# The command '/bin/sh -c echo hello > g && exit 1' returned a non-zero code: 1

docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
# (no containers)


asottile

comment created time in 4 hours

push event moby/hyperkit

David Scott

commit sha 3715de4896be9f37c84dc4a8f407cd60013cf34c

ocaml: update to mirage 4.0 Some of the dependencies have been renamed (e.g. cstruct.lwt to cstruct-lwt). Signed-off-by: David Scott <dave@recoil.org>

view details

David Scott

commit sha 6414a3c33383c011044ba1df8f6fd4965a52118b

ocaml: ensure the notifications fd is set to non-blocking Since some users have observed spinning, it could be that the system thinks the fd should be blocking but it is actually non-blocking and returning immediately. Signed-off-by: David Scott <dave@recoil.org>

view details

David Scott

commit sha e88ddf6badb17c44b357630b5d6511e3511843c1

ocaml: handle unexpected error cases from the notifications pipe The `Lwt_unix.read` is supposed to return 0 for EOF and >0 for number of bytes read. Error results from the syscall are supposed to be handled by the library. However there is evidence that an error `-1` is leaking through, so add some logging and exit the process instead of spinning. Signed-off-by: David Scott <dave@recoil.org>

view details

David Scott

commit sha c00e311fbc9094d8b611c82556500ef11b5d5c62

ocaml: bump to OCaml 4.10.0 Signed-off-by: David Scott <dave@recoil.org>

view details

David Scott

commit sha 0991dc7c24847a33ddc32a5e6a23813246dc839d

ocaml: remove out-of-date repo subset We should be able to rely on the version constraints in the hyperkit.opam instead. Signed-off-by: David Scott <dave@recoil.org>

view details

David Scott

commit sha 461b3f97b98a0cdaa4498f2351f67b9896eb343f

Merge pull request #286 from djs55/update-mirage ocaml: update dependencies and harden the notification loop

view details

push time in 4 hours

PR merged moby/hyperkit

ocaml: update dependencies and harden the notification loop
  • update to opam package manager version 2
  • refresh all the dependencies

There is some evidence that the notification loop is spinning, due to an Lwt_unix.read call returning -1: https://github.com/docker/for-mac/issues/3499#issuecomment-623960890 . This is not supposed to happen but perhaps it is possible if the fd is set to the wrong blocking mode, such that -1, EAGAIN are returned but not handled correctly.

This PR

  • ensures the non-blocking mode is set correctly
  • logs and exits if the call returns <0, which will highlight the problem and prevent spinning
+74 -5856

2 comments

390 changed files

djs55

pr closed time in 4 hours

issue opened moby/buildkit

buildkit copy step is slow

I switched to BuildKit to use --cache-from and BUILDKIT_INLINE_CACHE.

In my Dockerfile I have a COPY step, which copies 19 files into the Docker image. Below is the output, Jenkins timestamps included.

In the output below, you can see that this step takes more than 6 minutes. Most of the time seems to be spent after the last pull. I was assuming that the copy step just needs to download the 19 hashes and compare them to the local ones. I am unsure why it takes 6 minutes to figure out that the Docker layer can be reused from the (remote) cache.

12:00:07 #16 [10/10] COPY ./python /usr/src/app/ 12:00:07 #16 pulling sha256:ab1fc7e4bf9195e554669fafa47f69befe22053d7100f5f7002cb9254a36f37c 12:00:07 #16 pulling sha256:35fba333ff5209042e8925a73f3cbabf00ba725257bdba38ec3b415e5d7e6cc7 12:00:07 #16 pulling sha256:f0cb1fa13079687d5118e5cd7e3ce3c09dc483fa44f0577eca349de8d76e4e8c 12:00:09 #16 pulling sha256:f0cb1fa13079687d5118e5cd7e3ce3c09dc483fa44f0577eca349de8d76e4e8c 1.9s done 12:00:09 #16 pulling sha256:3d1dd648b5ade2bbdfe77fa94424b0314100b58fb5f6a98486538c2126e08e2f 12:00:09 #16 pulling sha256:35fba333ff5209042e8925a73f3cbabf00ba725257bdba38ec3b415e5d7e6cc7 2.4s done 12:00:09 #16 pulling sha256:49f7a0920ce12498040091232541e5f75306c09564f092b55c539d66b93aee7c 12:00:17 #16 pulling sha256:ab1fc7e4bf9195e554669fafa47f69befe22053d7100f5f7002cb9254a36f37c 9.3s done 12:00:17 #16 pulling sha256:e536bb402aee8d3f96bc92fd5663a81b6b5c4855b87097b9ca1dc007759e75d5 12:00:19 #16 pulling sha256:3d1dd648b5ade2bbdfe77fa94424b0314100b58fb5f6a98486538c2126e08e2f 10.3s done 12:00:19 #16 pulling sha256:9e6e95e225620c1046b390e9b21ec0fd1389ad9b7e04e1e761782d48ed471715 12:00:19 #16 pulling sha256:e536bb402aee8d3f96bc92fd5663a81b6b5c4855b87097b9ca1dc007759e75d5 3.3s done 12:00:19 #16 pulling sha256:4978adbd742fb91e43a4a2838dd46ec1ba24fbfb97dfcb6faca11412790b819f 12:00:20 #16 pulling sha256:4978adbd742fb91e43a4a2838dd46ec1ba24fbfb97dfcb6faca11412790b819f 0.6s done 12:00:20 #16 pulling sha256:8b6531c990eb229b8d38bd029bc45e848e3471fa26e52017a7d866b19f90b2f8 12:00:21 #16 pulling sha256:8b6531c990eb229b8d38bd029bc45e848e3471fa26e52017a7d866b19f90b2f8 1.0s done 12:00:21 #16 pulling sha256:e6b98b5af54a9a7c53b192b942b7593419c9a41afa7c82371122a985f7df9fd0 12:00:21 #16 pulling sha256:e6b98b5af54a9a7c53b192b942b7593419c9a41afa7c82371122a985f7df9fd0 0.3s done 12:00:21 #16 pulling sha256:01dae80b7e497a0327aed2572948c31e6b7b4185271829bf4df66d8a0ed89ab5 12:00:23 #16 pulling sha256:01dae80b7e497a0327aed2572948c31e6b7b4185271829bf4df66d8a0ed89ab5 1.6s done 12:00:23 #16 pulling sha256:13cce573f5c73e06fc4db0ea54d924dadb0aa8eea8babd9c50b74ff6d96c950e 12:00:23 #16 pulling sha256:13cce573f5c73e06fc4db0ea54d924dadb0aa8eea8babd9c50b74ff6d96c950e 0.3s done 12:00:23 #16 pulling sha256:9031aea3a7bcd92096c809101a64e9bc314ac6453ba9afd9ddfba450bc23c2bc 12:00:23 #16 pulling sha256:9e6e95e225620c1046b390e9b21ec0fd1389ad9b7e04e1e761782d48ed471715 4.2s done 12:00:23 #16 pulling sha256:93538d3c0bc56ebd6db24258910eb2b470189916d2383a4a84d9e5f6a24aaf47 12:00:27 #16 pulling sha256:93538d3c0bc56ebd6db24258910eb2b470189916d2383a4a84d9e5f6a24aaf47 4.0s done 12:00:27 #16 pulling sha256:03ee30128cd864b890f20ae7808c900cb4e51fd92c7b4be5591983741584bd07 12:00:28 #16 pulling sha256:03ee30128cd864b890f20ae7808c900cb4e51fd92c7b4be5591983741584bd07 0.6s done 12:00:28 #16 pulling sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1 12:00:29 #16 pulling sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1 0.8s done 12:00:29 #16 pulling sha256:9f7283bf8ec4b2deb2c63ba6c7f0255c10eb2a87f83460af501a50b6b3108ed9 12:00:30 #16 pulling sha256:9f7283bf8ec4b2deb2c63ba6c7f0255c10eb2a87f83460af501a50b6b3108ed9 1.3s done 12:00:30 #16 pulling sha256:19504cba801a8474af84765f0607efc4036eca509c3e48cdabd5ef6186331a6e 12:00:30 #16 pulling sha256:9031aea3a7bcd92096c809101a64e9bc314ac6453ba9afd9ddfba450bc23c2bc 6.9s done 12:00:30 #16 pulling sha256:4f7e9e5911f12916c0921ecde19093e1a7e0110756ef12c5f65874b8374846b6 12:00:30 #16 pulling 
sha256:4f7e9e5911f12916c0921ecde19093e1a7e0110756ef12c5f65874b8374846b6 0.4s done 12:00:30 #16 pulling sha256:6bf6cb1d498547a47edae50fb32b738b150cd16a44b47ff005265b49d5ee3827 12:00:31 #16 pulling sha256:19504cba801a8474af84765f0607efc4036eca509c3e48cdabd5ef6186331a6e 0.6s done 12:00:31 #16 pulling sha256:7e140ea1ecf8b95f07697e4de93ef8d4d8a83c076b98ea6cf4b999b2237cee5e 12:00:31 #16 pulling sha256:6bf6cb1d498547a47edae50fb32b738b150cd16a44b47ff005265b49d5ee3827 0.6s done 12:00:31 #16 pulling sha256:9adf5ee133d1d49cc8e6333d614b237208d8d7ea01551c6fc3111469e729b0ab 12:00:31 #16 pulling sha256:9adf5ee133d1d49cc8e6333d614b237208d8d7ea01551c6fc3111469e729b0ab 0.4s done 12:00:31 #16 pulling sha256:c9d548498377fdd24a5e138a6b424595bd43a219ce509cf4f7798b9ebea2b556 12:00:31 #16 pulling sha256:7e140ea1ecf8b95f07697e4de93ef8d4d8a83c076b98ea6cf4b999b2237cee5e 1.0s done 12:00:31 #16 pulling sha256:aceec53d66aa79e831c55f907a57332a88ca34fa38dd2c9b82e16d88c30f0275 12:00:32 #16 pulling sha256:c9d548498377fdd24a5e138a6b424595bd43a219ce509cf4f7798b9ebea2b556 0.3s done 12:00:32 #16 pulling sha256:aceec53d66aa79e831c55f907a57332a88ca34fa38dd2c9b82e16d88c30f0275 0.4s done 12:00:42 #16 pulling sha256:49f7a0920ce12498040091232541e5f75306c09564f092b55c539d66b93aee7c 32.3s done 12:06:18 #16 CACHED

I'm building on Docker 19.03. The build runs on a Jenkins slave with a dind daemon. The build command looks like this: sh "DOCKER_BUILDKIT=1 docker build --pull --build-arg BUILDKIT_INLINE_CACHE=1 --cache-from redacted:latest -t redacted:latest ."

When building again on the same Jenkins slave, it says CACHED immediately (local cache, because of the rebuild on the same slave). When building on a different slave: again 6 minutes.

Wondering if someone has a clue about this.

created time in 4 hours

started moby/moby

started time in 4 hours

started moby/moby

started time in 4 hours

issue comment moby/moby

`docker commit` causes layer to be cached in `docker build` when it was not successful

@thaJeztah how about removing the intermediate container when docker build failed?

asottile

comment created time in 4 hours

issue comment moby/moby

Docker build/push performance is inexplicably slow

Same here: a 3 min (almost fully cached) build takes up to 30 min to be pushed. It's weird.

groman2

comment created time in 4 hours

started moby/datakit

started time in 4 hours

started moby/buildkit

started time in 5 hours

started moby/moby

started time in 5 hours

Pull request review comment moby/sys

mountinfo.Mounted: optimize by adding fast paths

 $(BINDIR)/golangci-lint: $(BINDIR)

 $(BINDIR):
 	mkdir -p $(BINDIR)
+
+.PHONY: cross
+cross:
+	for os in $(CROSS_OSES); do \
+		for arch in $(CROSS_ARCHES); do \
+			echo "$$os/$$arch" | grep -qE $(OS_ARCH_SKIP) && continue; \

Looks like the list is;

linux/amd64
linux/arm
linux/arm64
linux/ppc64le
linux/s390
freebsd/amd64
freebsd/arm
darwin/amd64
darwin/arm
darwin/arm64
windows/amd64
windows/arm

Wondering if we need:

  • freebsd/arm
  • darwin/arm (likely not, darwin won't be 32-bit)
  • windows/arm (not sure; is windows on 32-bit arm a thing, or are they working on 64 bit only?)

If we remove those from the list;

linux/amd64
linux/arm
linux/arm64
linux/ppc64le
linux/s390
freebsd/amd64
darwin/amd64
darwin/arm64
windows/amd64

Wondering if it would be better to just define that list (easier to read, and easier to maintain as well)

It would require splitting into GOOS and GOARCH then, so something like

GOARCH=${CROSS#*/}
GOOS=${CROSS%/*}

Or handle it with Make;

.PHONY: linux/% darwin/% freebsd/% windows/%
linux/% darwin/% freebsd/% windows/%:
        echo GOOS=$(@D)
        echo GOARCH=$(@F)
kolyshkin

comment created time in 5 hours

pull request comment moby/moby

update runc binary to v1.0.0-rc91

Vendor update can be another PR

AkihiroSuda

comment created time in 6 hours

pull request comment moby/moby

update runc binary to v1.0.0-rc91

Do we want the vendoring updated as well (not for cherry-picking), or won't it work until containerd is updated?

AkihiroSuda

comment created time in 6 hours

fork eyJhb/moby

Moby Project - a collaborative project for the container ecosystem to assemble container-based systems

https://mobyproject.org/

fork in 6 hours

issue comment moby/moby

race conditions lead to duplicate docker networks

I suggest that YES, it is OK to have multiple networks with the same name; I do not care, it might even be useful (??). BUT! As stated in my issue https://github.com/moby/moby/issues/40901, if a container is created with the network ID specified, it should continue to use that ID instead of the name.

Anyone have any suggestions for this? Please do comment in the thread. Maybe @thaJeztah, you have some time for this?

rpeleg1970

comment created time in 6 hours

started moby/moby

started time in 6 hours

issue closed moby/buildkit

Cache transfer

Still WIP, but opened tentatively for adding this to 2017Q4 github project: https://github.com/moby/buildkit/projects/1

Will update the issue description next week.

// Ref is a reference to cacheable objects.                
type Ref interface {                                       
        ID() string                                        
        Release(context.Context) error                     
        Size(ctx context.Context) (int64, error)           
        Metadata() *metadata.StorageItem // TODO: remove                  
}                                                          

type ImmutableRef interface {                              
        Ref                                                
        Finalize(ctx context.Context) error // Make sure reference is flushed to driver                                
        // TODO: ImmutableMeta                             
}                                                          

type LocalImmutableRef interface {                         
        ImmutableRef                                       
        Mountable                                          
        Extractable                                        
        Parent() LocalImmutableRef                         
}                                                          

// no implementation yet            
// non-mountable                       
type RemoteImmutableRef interface {                        
        ImmutableRef                                       
        Extractable                                        
        // copies the whole cache                          
        Pull(ctx context.Context, something TBD) (LocalImmutableRef, error)                                                           
}                                                          

type MutableRef interface {                                
        Ref                                                
        Mountable                                          
        // TODO: MutableMeta                               
        Commit(context.Context) (LocalImmutableRef, error) 
}                                                          

type Mountable interface {                                 
        Mount(ctx context.Context, readonly bool) ([]mount.Mount, error)                                               
}                                                          

type Extractable interface {                               
        // extract files within the cache.                 
        // toPath is a local filesystem path.              
        // srcPaths are paths within the cache.            
        // used for CopyOp                                 
        Extract(ctx context.Context, toPath string, srcPath ...string) error                                           
}

closed time in 6 hours

AkihiroSuda

issue comment moby/buildkit

Cache transfer

Closing as we have Kubernetes driver in buildx now

AkihiroSuda

comment created time in 6 hours

issue closed moby/buildkit

Cluster management (membership & cache query)

Design

Membership

  • Worker nodes periodically report the following info to the master:
    -- worker ID: a unique string in the cluster, e.g. workerhost01-containerd-overlay
    -- connection info for connecting to the worker from the master: implementation-specific; probably, e.g. tcp://workerhost01:12345 or unix://run/buildkit/instance01.sock
       --- Support for UNIX sockets should be useful for testing purposes
       --- not unique; can be shared among multiple workers (the master puts the workerID in all request messages)
    -- performance stats: loadavg, disk quota usage, and so on
    -- annotations

e.g.

{
  "worker_id": "workerhost01-containerd-overlay",
  "connections":[
    {
      "type": "grpc.v0",
      "socket": "tcp://workerhost01.12345"
    }
  ],
  "stats":[
    {
      "type": "cpu.v0",
      "loadavg": [0.01, 0.02, 0.01]
    }
  ],
  "annotations": {
    "os": "linux",
    "arch": "amd64",
    "executor": "containerd",
    "snapshotter": "overlay",
    "com.example.userspecific": "blahblahblah",
  }
}
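
For readability, a hypothetical Go mapping of that report could look like the sketch below; the type and field names are assumptions taken from the example JSON, not existing BuildKit code.

package worker

// Report mirrors the example membership report above (illustrative only).
type Report struct {
	WorkerID    string            `json:"worker_id"`
	Connections []Connection      `json:"connections"`
	Stats       []Stat            `json:"stats"`
	Annotations map[string]string `json:"annotations"`
}

// Connection describes how the master can reach the worker.
type Connection struct {
	Type   string `json:"type"`   // e.g. "grpc.v0"
	Socket string `json:"socket"` // e.g. "tcp://workerhost01:12345"
}

// Stat is one performance sample reported by the worker.
type Stat struct {
	Type    string    `json:"type"` // e.g. "cpu.v0"
	LoadAvg []float64 `json:"loadavg,omitempty"`
}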

Cache query

  • With the connection info above, managers can ask a worker whether the worker has the cache for the CacheKey.
    -- the answer does not need to be 100% accurate
    -- how to transfer the cache data is another topic: #224

Initial naive implementation

  • Stateless master
    -- When the master dies, the orchestrator (k8s/swarm) restarts the master (and the membership info will be lost)
    -- Multiple masters could be started, but with no connection between masters

  • Worker connects to the master using gRPC
    -- the master address(es) can be specified via the daemon CLI flag: --join tcp://master:12345

  • Master connects to all workers using gRPC for querying cache existence
    -- does not scale to dozens of nodes, but probably acceptable for the initial work

Future possible implementation

  • Use IPFS (or just the libp2p DHT library) for querying cache existence (and also transfer)?
    -- Membership state can be saved to IPFS as well?
    -- or Infinite? (is it still active?)

closed time in 6 hours

AkihiroSuda

issue comment moby/buildkit

Cluster management (membership & cache query)

Closing as we have Kubernetes driver in buildx now

AkihiroSuda

comment created time in 6 hours

started moby/moby

started time in 7 hours

pull request comment moby/moby

solved the issue “restoring container from a custom checkpoint-dir is broken”

@cpuguy83 I found the code that caused your error: https://github.com/containerd/containerd/blob/master/metadata/images.go#L59

I added some logging inside the code.

 func (s *imageStore) Get(ctx context.Context, name string) (images.Image, error) {
        var image images.Image
-
+       fmt.Println("imageStore Get view bolt")
        namespace, err := namespaces.NamespaceRequired(ctx)
        if err != nil {
                return images.Image{}, err
@@ -56,17 +56,17 @@ func (s *imageStore) Get(ctx context.Context, name string) (images.Image, error)
        if err := view(ctx, s.db, func(tx *bolt.Tx) error {
                bkt := getImagesBucket(tx, namespace)
                if bkt == nil {
-                       return errors.Wrapf(errdefs.ErrNotFound, "image %q", name)
+                       return errors.Wrapf(errdefs.ErrNotFound, "namespace %s image %q", namespace, name)
                }
 
                ibkt := bkt.Bucket([]byte(name))
                if ibkt == nil {
-                       return errors.Wrapf(errdefs.ErrNotFound, "image %q", name)
+                       return errors.Wrapf(errdefs.ErrNotFound, "name image %q", name)
                }
 
                image.Name = name
                if err := readImage(&image, ibkt); err != nil {
-                       return errors.Wrapf(err, "image %q", name)
+                       return errors.Wrapf(err, "readimage image %q", name)
                }
 
                return nil

Then I tried to create a new checkpoint on the busybox looper2 container, which printed these logs:

root@22ed13d4f17d:~# docker checkpoint create --checkpoint-dir=/tmp looper2 checkpoint2
Error response from daemon: CreateCheckpoint Cannot checkpoint container looper2: namespace moby image "docker.io/library/busybox": not found
root@22ed13d4f17d:~# ctr -a /var/run/docker/containerd/containerd.sock ns ls          
NAME LABELS 
moby

But, there is a "moby" namespace stored in metadata as ctr command show.

XiaodongLoong

comment created time in 7 hours

issue comment moby/buildkit

Question: auth for use as daemon

Hi, what was the resolution that caused the issue to be closed? Is there a linked issue or PR?

alexellis

comment created time in 8 hours

issue closed moby/buildkit

Cache can't be exported to Quay.io

Similar to #1143

When exporting cache to quay.io, it fails with

error: failed to solve: rpc error: code = Unknown desc = error writing manifest blob: failed commit on ref "sha256:c2aba47e903ef19d459785c7e5750ef7da0f6f86657d9b40c329d5268dfe2185": unexpected status: 401 Unauthorized

The error is the same with both modes: mode=max or mode=min

buildctl" build \
    --progress=plain \
    --frontend=dockerfile.v0 \
    --local context="${context}" \
    --local dockerfile="$(dirname "${dockerfile}")" \
    --opt filename="$(basename "${dockerfile}")" \
    --output "type=image,\"name=${name}\",push=${push}" \
    --export-cache "type=registry,mode=max,ref=${image}:${tag}-buildcache" \
    --import-cache "type=registry,ref=${image}:${tag}-buildcache" \
    "${@}"

When I do not use --export-cache, images are pushed properly to quay.io, so the credentials are correct.

closed time in 8 hours

mbarbero

issue comment moby/buildkit

Cache can't be exported to Quay.io

#1550

mbarbero

comment created time in 8 hours

fork BlessYimin/moby

Moby Project - a collaborative project for the container ecosystem to assemble container-based systems

https://mobyproject.org/

fork in 8 hours

started moby/moby

started time in 8 hours

pull request comment moby/moby

info: improve "WARNING: Running in rootless-mode without cgroup"

@cpuguy83 PTAL

AkihiroSuda

comment created time in 9 hours

started moby/moby

started time in 9 hours

push event moby/buildkit

Akihiro Suda

commit sha d954b77f60d5ec02a747f834ff43811af6e30cc1

update runc binary to v1.0.0-rc91 release note: https://github.com/opencontainers/runc/releases/tag/v1.0.0-rc91 vendored library isn't updated in this commit (waiting for containerd to vendor runc rc91) Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>

view details

Tõnis Tiigi

commit sha ddfd87ec1fcb5aff3da34f77623e01da2d1e2193

Merge pull request #1553 from AkihiroSuda/runc-rc91 update runc binary to v1.0.0-rc91

view details

push time in 9 hours

PR merged moby/buildkit

update runc binary to v1.0.0-rc91

release note: https://github.com/opencontainers/runc/releases/tag/v1.0.0-rc91

vendored library isn't updated in this commit (waiting for containerd to vendor runc rc91)

+1 -1

0 comment

1 changed file

AkihiroSuda

pr closed time in 9 hours

issue comment moby/buildkit

always display image hashes

So how does Docker cache the image then? Because if it didn't, you'd have to rebuild every time. Is that a special BuildKit cache then?

TrentonAdams

comment created time in 10 hours

issue comment moby/buildkit

always display image hashes

lol, I was just going to ask that question, you beat me to it.

TrentonAdams

comment created time in 10 hours

issue comment moby/buildkit

always display image hashes

@TrentonAdams That's non-BuildKit mode

TrentonAdams

comment created time in 10 hours

issue comment moby/buildkit

always display image hashes

@tonistiigi That's not true: intermediate builds are cached and available to "docker run". I've used them countless times to figure out what's wrong. In fact, docker wouldn't even work if they weren't, because docker re-uses them with every build. If they were not cached, docker would have to rebuild your image repeatedly, every time you run docker build.

TrentonAdams

comment created time in 10 hours

issue opened moby/moby

docker build error : error checking context: no permission to read from /config/sdcardfs/remove_userid

Hello, I saw some similar issues but could not solve my problem. I tried to install Docker on Android and succeeded after some modifications to the kernel, so now the Docker client and server are both running, as shown by sudo docker version. I need to build a sample image, but after running the build I get this error: error checking context: no permission to read from /config/sdcardfs/remove_userid. I don't know what to do, because even this command shows me no permission to read, even with root access. I also modified the permissions of "remove_userid" to rwx, but nothing changes. (I searched and found out that this file is under some kind of virtual filesystem; I have no idea about it.) I ran it with root access and even under the root path, but it is not working. I am working on Android and do not have systemd or systemctl, although I am not sure if those would help. Thanks

created time in 10 hours

issue comment moby/buildkit

Add `Exec` to the gateway API.

how about

service LLBBridge {
  rpc NewContainer(NewContainerRequest) returns (NewContainerResponse);
  rpc ReleaseContainer(ReleaseContainerRequest) returns (ReleaseContainerResponse);
  rpc ExecProcess(stream ExecMessage) returns (stream ExecMessage);
}

message NewContainerRequest {
	string Ref = 1;
        // For mount input values we can use random identifiers passed with ref
	repeated pb.Mount mounts = 2;
	pb.NetMode network = 3;
	pb.SecurityMode security = 4;
}

message ReleaseContainerRequest {
	string Ref = 1;
}


message ExecMessage {
	 oneof input {
                InitMessage init = 1;
		FdMessage file = 2;
		ResizeMessage resize = 3;
		StartedMessage started = 4;
		ExitMessage exit = 5;
	}
}

message InitMessage{
  pb.Meta Meta = 1;
  repeated uint32 fds = 2;
  bool tty = 3;
  // ?? way to control if this is PID1? probably not needed
}

message ExitMessage {
  uint32 Code = 1;
}

message FdMessage{
	uint32 fd = 1; // what fd the data was from
	bool eof = 2;  // true if eof was reached
	bytes data = 3;
}

message ResizeMessage{
	uint32 rows = 1;
	uint32 cols = 2;
	uint32 xpixel = 3;
	uint32 ypixel = 4;
}

I added "container" concept atm to support multiple exec processes. Not sure if needed initially but probably better to be safe for future ideas. The complex part of this is that it does not allow reusing the current executor directly, eg. runc needs to be invoked with create/start/exec calls instead of a single run. Or we mark one process as pid1 and then I think we can use run+exec.

Sending pb.Meta inside the giant oneof object is objectively ugly, but this is a gRPC limitation: we can't pass an initial message on streaming endpoints (except via unsafe context metadata).

tonistiigi

comment created time in 10 hours

PR opened moby/buildkit

update runc binary to v1.0.0-rc91

release note: https://github.com/opencontainers/runc/releases/tag/v1.0.0-rc91

vendored library isn't updated in this commit (waiting for containerd to vendor runc rc91)

+1 -1

0 comment

1 changed file

pr created time in 11 hours

pull request comment moby/moby

update runc binary to v1.0.0-rc91

Ready to review/merge

@thaJeztah @cpuguy83 @tianon PTAL

AkihiroSuda

comment created time in 11 hours

started moby/moby

started time in 11 hours

fork slzzz/moby

Moby Project - a collaborative project for the container ecosystem to assemble container-based systems

https://mobyproject.org/

fork in 12 hours

started moby/moby

started time in 12 hours

issue closed moby/buildkit

.dockerignore doesn't work if dockerfile path != context path

I'd like to be able to store my Dockerfile & .dockerignore separately from my context, but it doesn't work. Luckily there is a painless workaround, since Dockerfile & Dockerfile.dockerignore works. That being said, I assume the intention is for the default case to work.

docker run \
    -it \
    --rm \
    --privileged \
    -v /path/to/context:/tmp/context \
    -v /path/to/dockerfile:/tmp/dockerfile \
    --entrypoint buildctl-daemonless.sh \
    moby/buildkit:master \
        build \
        --frontend dockerfile.v0 \
        --local context=/tmp/context \
        --local dockerfile=/tmp/dockerfile

Works with context dir contents:
Dockerfile
.dockerignore

Works with context dir contents:
Dockerfile
Dockerfile.dockerignore

Doesn't work with dockerfile dir contents:
Dockerfile
.dockerignore

Works with dockerfile dir contents:
Dockerfile
Dockerfile.dockerignore

closed time in 13 hours

mjgallag

issue closed moby/buildkit

Question: does manifest content name matter when create a image

When committing an image, one step is to write the manifest and config content to disk. In BuildKit, the blob's ref name is the digest id, idxDigest.String().

func (ic *ImageWriter) Commit(ctx context.Context, inp exporter.Source, oci bool) (*ocispec.Descriptor, error) {
...

if err := content.WriteBlob(ctx, ic.opt.ContentStore, idxDigest.String(), bytes.NewReader(idxBytes), idxDesc, content.WithLabels(labels)); err != nil { 
        return nil, idxDone(errors.Wrapf(err, "error writing manifest list blob %s", idxDigest)) 
    }
...

But in containerd, the ref name is created by the function remotes.MakeRefKey, which adds a type prefix like manifest- before the digest id.

ref := remotes.MakeRefKey(ctx, desc)
    if err := content.WriteBlob(ctx, c.contentStore, ref, bytes.NewReader(mb), desc, content.WithLabels(labels)); err != nil {
        return ocispec.Descriptor{}, errors.Wrap(err, "failed to write config")
    }   

I wonder whether this matters when we pull and push images?

closed time in 13 hours

Ace-Tang

issue comment moby/buildkit

Question: does manifest content name matter when create a image

Think we can close this. Feel free to still send a PR if you wish to normalize this.

Ace-Tang

comment created time in 13 hours

issue closed moby/buildkit

Question: auth for use as daemon

Hi :+1:

When BuildKit is being used as a Daemon in a Kubernetes / Swarm cluster, what is the preferred way to control access?

With Kubernetes a NetworkPolicy may be sufficient to prevent tampering but I can't see an obvious option for this with Swarm / Docker?

Does the BuildKit daemon have any built-in capabilities and/or is there a way to enable something to "protect" the gRPC socket that is open?

Alex

Cc @johnmccabe @tonistiigi @AkihiroSuda

closed time in 13 hours

alexellis

issue closed moby/buildkit

always display image hashes

It's tough to debug docker building when I can't just get into the previously successful intermediate build image and run the next command manually...

docker run -it --rm hash_id bash
# execute the next RUN line here manually.

I would therefore argue that image hashes should always display, just like they do in the current docker.

closed time in 13 hours

TrentonAdams

issue comment moby/buildkit

always display image hashes

Could we get a somewhat official answer on which of the following are true, in the context of building with BuildKit:

"There is no image hash for an intermediate cache because it is not exported to the docker image store." is correct. Never doubt @AkihiroSuda

Closing in favor of #1472

TrentonAdams

comment created time in 13 hours

issue closed moby/buildkit

[question] seeing some warnings/debug logs when building with BuildKit

More of a question / observation:

Seeing this on 19.03.2 and on master

docker rmi busybox || true
docker system prune -f

DOCKER_BUILDKIT=1 docker build -<<EOF
FROM busybox
RUN echo foo
EOF

Check the logs (full output shown below, but collapsed);

DEBU[2019-10-02T10:00:04.055982081Z] Calling HEAD /_ping                          
DEBU[2019-10-02T10:00:04.057846479Z] Calling POST /v1.40/build?buildargs=%7B%7D&buildid=feb189c98495cc7ed5a9dfcf1bdedd5f9d705909c55a7e5263d1b16848c54e9e&cachefrom=%5B%5D&cgroupparent=&cpuperiod=0&cpuquota=0&cpusetcpus=&cpusetmems=&cpushares=0&dockerfile=Dockerfile&labels=%7B%7D&memory=0&memswap=0&networkmode=default&remote=client-session&rm=1&session=ynd740mh42981436kbd0lpgi3&shmsize=0&target=&ulimits=null&version=2 
DEBU[2019-10-02T10:00:04.057974567Z] Calling POST /session                        
INFO[2019-10-02T10:00:04.058123633Z] parsed scheme: ""                             module=grpc
INFO[2019-10-02T10:00:04.058188908Z] scheme "" not registered, fallback to default scheme  module=grpc
INFO[2019-10-02T10:00:04.058205439Z] ccResolverWrapper: sending update to cc: {[{ 0  <nil>}] <nil>}  module=grpc
INFO[2019-10-02T10:00:04.058221209Z] ClientConn switching balancer to "pick_first"  module=grpc
DEBU[2019-10-02T10:00:04.064362881Z] new ref for local: cgemongemlyl2euat0ll24aw0 
DEBU[2019-10-02T10:00:04.065371854Z] new ref for local: t1f14bd9hjku3n6cq0vsi1i5e 
DEBU[2019-10-02T10:00:04.068363033Z] diffcopy took: 2.878106ms                    
DEBU[2019-10-02T10:00:04.070489423Z] diffcopy took: 5.958962ms                    
DEBU[2019-10-02T10:00:04.072071772Z] saved t1f14bd9hjku3n6cq0vsi1i5e as local.sharedKey:context:context-.dockerignore:a20365f530ee14621cbbe5378c5da4849cefbacc02ecd83ffceee57813bd9d64 
DEBU[2019-10-02T10:00:04.074167141Z] saved cgemongemlyl2euat0ll24aw0 as local.sharedKey:dockerfile:dockerfile:a20365f530ee14621cbbe5378c5da4849cefbacc02ecd83ffceee57813bd9d64 
DEBU[2019-10-02T10:00:04.192578772Z] resolving                                    
DEBU[2019-10-02T10:00:04.192665980Z] do request                                    request.headers="map[Accept:[application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.list.v2+json, application/vnd.oci.image.manifest.v1+json, application/vnd.oci.image.index.v1+json, *] User-Agent:[containerd/1.2.0+unknown]]" request.method=HEAD url="https://registry-1.docker.io/v2/library/busybox/manifests/latest"
DEBU[2019-10-02T10:00:04.589878284Z] fetch response received                       response.headers="map[Content-Length:[158] Content-Type:[application/json] Date:[Wed, 02 Oct 2019 10:00:04 GMT] Docker-Distribution-Api-Version:[registry/2.0] Strict-Transport-Security:[max-age=31536000] Www-Authenticate:[Bearer realm=\"https://auth.docker.io/token\",service=\"registry.docker.io\",scope=\"repository:library/busybox:pull\"]]" status="401 Unauthorized" url="https://registry-1.docker.io/v2/library/busybox/manifests/latest"
DEBU[2019-10-02T10:00:04.590023995Z] Unauthorized                                  header="Bearer realm=\"https://auth.docker.io/token\",service=\"registry.docker.io\",scope=\"repository:library/busybox:pull\""
DEBU[2019-10-02T10:00:05.021950236Z] do request                                    request.headers="map[Accept:[application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.list.v2+json, application/vnd.oci.image.manifest.v1+json, application/vnd.oci.image.index.v1+json, *] User-Agent:[containerd/1.2.0+unknown]]" request.method=HEAD url="https://registry-1.docker.io/v2/library/busybox/manifests/latest"
DEBU[2019-10-02T10:00:05.210394954Z] fetch response received                       response.headers="map[Content-Length:[1864] Content-Type:[application/vnd.docker.distribution.manifest.list.v2+json] Date:[Wed, 02 Oct 2019 10:00:05 GMT] Docker-Content-Digest:[sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e] Docker-Distribution-Api-Version:[registry/2.0] Etag:[\"sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e\"] Strict-Transport-Security:[max-age=31536000]]" status="200 OK" url="https://registry-1.docker.io/v2/library/busybox/manifests/latest"
DEBU[2019-10-02T10:00:05.210471908Z] resolved                                      desc.digest="sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e"
DEBU[2019-10-02T10:00:05.210547865Z] fetch                                         digest="sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e" mediatype=application/vnd.docker.distribution.manifest.list.v2+json size=1864
DEBU[2019-10-02T10:00:05.210754696Z] fetch                                         digest="sha256:dd97a3fe6d721c5cf03abac0f50e2848dc583f7c4e41bf39102ceb42edfd1808" mediatype=application/vnd.docker.distribution.manifest.v2+json size=527
DEBU[2019-10-02T10:00:05.210934362Z] fetch                                         digest="sha256:19485c79a9bbdca205fce4f791efeaa2a103e23431434696cc54fdd939e9198d" mediatype=application/vnd.docker.container.image.v1+json size=1497
DEBU[2019-10-02T10:00:05.212106884Z] fetch                                         digest="sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e" mediatype=application/vnd.docker.distribution.manifest.list.v2+json size=1864
DEBU[2019-10-02T10:00:05.212234162Z] fetch                                         digest="sha256:dd97a3fe6d721c5cf03abac0f50e2848dc583f7c4e41bf39102ceb42edfd1808" mediatype=application/vnd.docker.distribution.manifest.v2+json size=527
DEBU[2019-10-02T10:00:05.212409953Z] fetch                                         digest="sha256:19485c79a9bbdca205fce4f791efeaa2a103e23431434696cc54fdd939e9198d" mediatype=application/vnd.docker.container.image.v1+json size=1497
DEBU[2019-10-02T10:00:05.212980235Z] resolving                                    
DEBU[2019-10-02T10:00:05.213036641Z] do request                                    request.headers="map[Accept:[application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.list.v2+json, application/vnd.oci.image.manifest.v1+json, application/vnd.oci.image.index.v1+json, *] Authorization:[Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCIsIng1YyI6WyJNSUlDK2pDQ0FwK2dBd0lCQWdJQkFEQUtCZ2dxaGtqT1BRUURBakJHTVVRd1FnWURWUVFERXpzeVYwNVpPbFZMUzFJNlJFMUVVanBTU1U5Rk9reEhOa0U2UTFWWVZEcE5SbFZNT2tZelNFVTZOVkF5VlRwTFNqTkdPa05CTmxrNlNrbEVVVEFlRncweE9UQXhNVEl3TURJeU5EVmFGdzB5TURBeE1USXdNREl5TkRWYU1FWXhSREJDQmdOVkJBTVRPMUpMTkZNNlMwRkxVVHBEV0RWRk9rRTJSMVE2VTBwTVR6cFFNbEpMT2tOWlZVUTZTMEpEU0RwWFNVeE1Pa3hUU2xrNldscFFVVHBaVWxsRU1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBcjY2bXkveXpHN21VUzF3eFQ3dFplS2pqRzcvNnBwZFNMY3JCcko5VytwcndzMGtIUDVwUHRkMUpkcFdEWU1OZWdqQXhpUWtRUUNvd25IUnN2ODVUalBUdE5wUkdKVTRkeHJkeXBvWGc4TVhYUEUzL2lRbHhPS2VNU0prNlRKbG5wNGFtWVBHQlhuQXRoQzJtTlR5ak1zdFh2ZmNWN3VFYWpRcnlOVUcyUVdXQ1k1Ujl0a2k5ZG54Z3dCSEF6bG8wTzJCczFmcm5JbmJxaCtic3ZSZ1FxU3BrMWhxYnhSU3AyRlNrL2tBL1gyeUFxZzJQSUJxWFFMaTVQQ3krWERYZElJczV6VG9ZbWJUK0pmbnZaMzRLcG5mSkpNalpIRW4xUVJtQldOZXJZcVdtNVhkQVhUMUJrQU9aditMNFVwSTk3NFZFZ2ppY1JINVdBeWV4b1BFclRRSURBUUFCbzRHeU1JR3ZNQTRHQTFVZER3RUIvd1FFQXdJSGdEQVBCZ05WSFNVRUNEQUdCZ1JWSFNVQU1FUUdBMVVkRGdROUJEdFNTelJUT2t0QlMxRTZRMWcxUlRwQk5rZFVPbE5LVEU4NlVESlNTenBEV1ZWRU9rdENRMGc2VjBsTVREcE1VMHBaT2xwYVVGRTZXVkpaUkRCR0JnTlZIU01FUHpBOWdEc3lWMDVaT2xWTFMxSTZSRTFFVWpwU1NVOUZPa3hITmtFNlExVllWRHBOUmxWTU9rWXpTRVU2TlZBeVZUcExTak5HT2tOQk5sazZTa2xFVVRBS0JnZ3Foa2pPUFFRREFnTkpBREJHQWlFQXFOSXEwMFdZTmM5Z2tDZGdSUzRSWUhtNTRZcDBTa05Rd2lyMm5hSWtGd3dDSVFEMjlYdUl5TmpTa1cvWmpQaFlWWFB6QW9TNFVkRXNvUUhyUVZHMDd1N3ZsUT09Il19.eyJhY2Nlc3MiOlt7InR5cGUiOiJyZXBvc2l0b3J5IiwibmFtZSI6ImxpYnJhcnkvYnVzeWJveCIsImFjdGlvbnMiOlsicHVsbCJdfV0sImF1ZCI6InJlZ2lzdHJ5LmRvY2tlci5pbyIsImV4cCI6MTU3MDAxMDcwNCwiaWF0IjoxNTcwMDEwNDA0LCJpc3MiOiJhdXRoLmRvY2tlci5pbyIsImp0aSI6IkRHWHdRZHdURmQyYTZZZVlFbjhuIiwibmJmIjoxNTcwMDEwMTA0LCJzdWIiOiIifQ.j6w_j4ZnzlPrZc9KiPytAKaSqsvDTOR35fyShnAjITSh0jc4qeWE32MZWKWo6kNk0M8wCHVHRWlufsTJOgd-3d8tiWoU_U2DaIqzzUj8zupG-xg5CiB2CGBotbSD_E8VdQPVgUnaZnX18PC2tKJPPvaBEjcY7teMCfyBV0N1xnpHFzeQtWd7pAE2iDFPUG2x0rlXBckB1wObJhJbzqBkaK4Td0eh5IjslEPs4UwXIUHssn5xECzsEbnOn7zTpBq9FOaOuz7BCC09HX57GOfLgyqQfqqgjRpxH18S98JO8_0ZpaD50Iv8dEl84-kQX36z8KO7fd61J221zY5JFiAy8A] User-Agent:[containerd/1.2.0+unknown]]" request.method=HEAD url="https://registry-1.docker.io/v2/library/busybox/manifests/sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e"
DEBU[2019-10-02T10:00:05.409794381Z] fetch response received                       response.headers="map[Content-Length:[1864] Content-Type:[application/vnd.docker.distribution.manifest.list.v2+json] Date:[Wed, 02 Oct 2019 10:00:05 GMT] Docker-Content-Digest:[sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e] Docker-Distribution-Api-Version:[registry/2.0] Etag:[\"sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e\"] Strict-Transport-Security:[max-age=31536000]]" status="200 OK" url="https://registry-1.docker.io/v2/library/busybox/manifests/sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e"
DEBU[2019-10-02T10:00:05.409931590Z] resolved                                      desc.digest="sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e"
DEBU[2019-10-02T10:00:05.410018359Z] fetch                                         digest="sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e" mediatype=application/vnd.docker.distribution.manifest.list.v2+json size=1864
DEBU[2019-10-02T10:00:05.410301213Z] fetch                                         digest="sha256:dd97a3fe6d721c5cf03abac0f50e2848dc583f7c4e41bf39102ceb42edfd1808" mediatype=application/vnd.docker.distribution.manifest.v2+json size=527
DEBU[2019-10-02T10:00:05.410399636Z] fetch                                         digest="sha256:19485c79a9bbdca205fce4f791efeaa2a103e23431434696cc54fdd939e9198d" mediatype=application/vnd.docker.container.image.v1+json size=1497
DEBU[2019-10-02T10:00:05.418619964Z] do request                                    base="https://registry-1.docker.io/v2/library/busybox" digest="sha256:7c9d20b9b6cda1c58bc4f9d6c401386786f584437abbe87e58910f8a9a15386b" request.headers="map[Accept:[application/vnd.docker.image.rootfs.diff.tar.gzip, *]]" request.method=GET url="https://registry-1.docker.io/v2/library/busybox/blobs/sha256:7c9d20b9b6cda1c58bc4f9d6c401386786f584437abbe87e58910f8a9a15386b"
DEBU[2019-10-02T10:00:05.624773480Z] fetch response received                       base="https://registry-1.docker.io/v2/library/busybox" digest="sha256:7c9d20b9b6cda1c58bc4f9d6c401386786f584437abbe87e58910f8a9a15386b" response.headers="map[Accept-Ranges:[bytes] Age:[618217] Cache-Control:[public, max-age=14400] Cf-Cache-Status:[HIT] Cf-Ray:[51f5d3cb0ac7c775-AMS] Content-Length:[760770] Content-Type:[application/octet-stream] Date:[Wed, 02 Oct 2019 10:00:05 GMT] Etag:[\"4166ef0ced6549afb3ac160752b5636d\"] Expect-Ct:[max-age=604800, report-uri=\"https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct\"] Expires:[Wed, 02 Oct 2019 14:00:05 GMT] Last-Modified:[Wed, 04 Sep 2019 19:20:59 GMT] Server:[cloudflare] Set-Cookie:[__cfduid=d8ec370019f68f455aa3ef30944bfd0761570010405; expires=Thu, 01-Oct-20 10:00:05 GMT; path=/; domain=.production.cloudflare.docker.com; HttpOnly; Secure] Vary:[Accept-Encoding] X-Amz-Id-2:[Pmnorq/zIjCh+48lEOUWXI+/UcnyE4/s7TkyZHjdQa4caRdBfxCcsZzzrCoZot1D7RCSFn7/MN8=] X-Amz-Request-Id:[BCDC641C093E62B5] X-Amz-Version-Id:[FWSiFqYMfN_YqV4cGb1Cvkg3XulaRwOo]]" status="200 OK" url="https://registry-1.docker.io/v2/library/busybox/blobs/sha256:7c9d20b9b6cda1c58bc4f9d6c401386786f584437abbe87e58910f8a9a15386b"
DEBU[2019-10-02T10:00:05.757338074Z] Applying tar in /var/lib/docker/overlay2/a16b208e233721206fc40483b845fb142d6b9015307dea00d998674423bfa566/diff  storage-driver=overlay2
DEBU[2019-10-02T10:00:05.921026094Z] Applied tar sha256:6c0ea40aef9d2795f922f4e8642f0cd9ffb9404e6f3214693a1fd45489f38b44 to a16b208e233721206fc40483b845fb142d6b9015307dea00d998674423bfa566, size: 1219782 
DEBU[2019-10-02T10:00:05.945451158Z] Assigning addresses for endpoint unsrd7o9dv1ez1yl20qsdd7yr's interface on network bridge 
DEBU[2019-10-02T10:00:05.945537462Z] RequestAddress(LocalDefault/172.18.0.0/16, <nil>, map[]) 
DEBU[2019-10-02T10:00:05.945581791Z] Request address PoolID:172.18.0.0/16 App: ipam/default/data, ID: LocalDefault/172.18.0.0/16, DBIndex: 0x0, Bits: 65536, Unselected: 65532, Sequence: (0xe0000000, 1)->(0x0, 2046)->(0x1, 1)->end Curr:4 Serial:false PrefAddress:<nil>  
DEBU[2019-10-02T10:00:05.953181299Z] Assigning addresses for endpoint unsrd7o9dv1ez1yl20qsdd7yr's interface on network bridge 
DEBU[2019-10-02T10:00:05.953306432Z] e0cf451d4d0a419a9e1be556dab4358826e33dc5121305ad478be5534fb02add (61a5565).addSvcRecords(unsrd7o9dv1ez1yl20qsdd7yr, 172.18.0.3, <nil>, true) updateSvcRecord sid:e0cf451d4d0a419a9e1be556dab4358826e33dc5121305ad478be5534fb02add 
DEBU[2019-10-02T10:00:05.956655695Z] e0cf451d4d0a419a9e1be556dab4358826e33dc5121305ad478be5534fb02add (61a5565).addSvcRecords(unsrd7o9dv1ez1yl20qsdd7yr, 172.18.0.3, <nil>, true) updateSvcRecord sid:e0cf451d4d0a419a9e1be556dab4358826e33dc5121305ad478be5534fb02add 
DEBU[2019-10-02T10:00:05.958315916Z] Programming external connectivity on endpoint unsrd7o9dv1ez1yl20qsdd7yr (e0cf451d4d0a419a9e1be556dab4358826e33dc5121305ad478be5534fb02add) 
DEBU[2019-10-02T10:00:05.960006365Z] > creating 4oit3gsfctj4u0zv9k3jc8m2o [/bin/sh -c echo foo] 
DEBU[2019-10-02T10:00:06.268309040Z] sandbox set key processing took 202.531481ms for container unsrd7o9dv1ez1yl20qsdd7yr 
DEBU[2019-10-02T10:00:06.680168188Z] Revoking external connectivity on endpoint unsrd7o9dv1ez1yl20qsdd7yr (e0cf451d4d0a419a9e1be556dab4358826e33dc5121305ad478be5534fb02add) 
DEBU[2019-10-02T10:00:06.680945335Z] DeleteConntrackEntries purged ipv4:0, ipv6:0 
DEBU[2019-10-02T10:00:06.690674253Z] could not get checksum for "x128nsj79yzfx4j5h6em2w2on" with tar-split: "no tar-split file" 
DEBU[2019-10-02T10:00:06.690956506Z] Tar with options on /var/lib/docker/overlay2/x128nsj79yzfx4j5h6em2w2on/diff  storage-driver=overlay2
WARN[2019-10-02T10:00:06.717289104Z] grpc: addrConn.createTransport failed to connect to { 0  <nil>}. Err :connection error: desc = "transport: Error while dialing only one connection allowed". Reconnecting...  module=grpc
DEBU[2019-10-02T10:00:06.803332661Z] e0cf451d4d0a419a9e1be556dab4358826e33dc5121305ad478be5534fb02add (61a5565).deleteSvcRecords(unsrd7o9dv1ez1yl20qsdd7yr, 172.18.0.3, <nil>, true) updateSvcRecord sid:e0cf451d4d0a419a9e1be556dab4358826e33dc5121305ad478be5534fb02add  
DEBU[2019-10-02T10:00:06.944088856Z] Releasing addresses for endpoint unsrd7o9dv1ez1yl20qsdd7yr's interface on network bridge 
DEBU[2019-10-02T10:00:06.944149633Z] ReleaseAddress(LocalDefault/172.18.0.0/16, 172.18.0.3) 
DEBU[2019-10-02T10:00:06.944171514Z] Released address PoolID:LocalDefault/172.18.0.0/16, Address:172.18.0.3 Sequence:App: ipam/default/data, ID: LocalDefault/172.18.0.0/16, DBIndex: 0x0, Bits: 65536, Unselected: 65531, Sequence: (0xf0000000, 1)->(0x0, 2046)->(0x1, 1)->end Curr:4 

</details>

These entries stood out to me.

I wondered about this warning; is it a configuration issue in our code?

WARN[2019-10-02T10:00:06.717289104Z] grpc: addrConn.createTransport failed to connect to { 0  <nil>}. Err :connection error: desc = "transport: Error while dialing only one connection allowed". Reconnecting...  module=grpc

This one is logged as "DEBUG", so perhaps not important. The "could not get checksum" part stood out to me though, so I'm wondering if it's indeed expected; if it is, perhaps we should add some extra logging (e.g. "falling back to ...").

DEBU[2019-10-02T10:00:06.690674253Z] could not get checksum for "x128nsj79yzfx4j5h6em2w2on" with tar-split: "no tar-split file" 

Note that the above log entry seems to relate to the RUN step; without the RUN step, the "could not get checksum" doesn't occur:

docker rmi busybox || true
docker system prune -f

DOCKER_BUILDKIT=1 docker build -<<EOF
FROM busybox
EOF

<details>

DEBU[2019-10-02T10:09:38.789184587Z] Calling HEAD /_ping                          
DEBU[2019-10-02T10:09:38.791206291Z] Calling POST /session                        
DEBU[2019-10-02T10:09:38.791301150Z] Calling POST /v1.40/build?buildargs=%7B%7D&buildid=c8fbee5fa752172a7bd2f5a8a45a2afdcf169383a944daf338de6561b9fd4e98&cachefrom=%5B%5D&cgroupparent=&cpuperiod=0&cpuquota=0&cpusetcpus=&cpusetmems=&cpushares=0&dockerfile=Dockerfile&labels=%7B%7D&memory=0&memswap=0&networkmode=default&remote=client-session&rm=1&session=pl08wxmjetn49t7es4dkyowzo&shmsize=0&target=&ulimits=null&version=2 
INFO[2019-10-02T10:09:38.791336961Z] parsed scheme: ""                             module=grpc
INFO[2019-10-02T10:09:38.791402489Z] scheme "" not registered, fallback to default scheme  module=grpc
INFO[2019-10-02T10:09:38.791419447Z] ccResolverWrapper: sending update to cc: {[{ 0  <nil>}] <nil>}  module=grpc
INFO[2019-10-02T10:09:38.791447313Z] ClientConn switching balancer to "pick_first"  module=grpc
DEBU[2019-10-02T10:09:38.797677006Z] new ref for local: psx5i2dg0qe9sdq6qbg01u951 
DEBU[2019-10-02T10:09:38.799469722Z] new ref for local: 3cdm3kmp3x1viqtrbwp7thzyz 
DEBU[2019-10-02T10:09:38.803893751Z] diffcopy took: 4.224827ms                    
DEBU[2019-10-02T10:09:38.805578599Z] diffcopy took: 7.686343ms                    
DEBU[2019-10-02T10:09:38.806876249Z] saved 3cdm3kmp3x1viqtrbwp7thzyz as local.sharedKey:dockerfile:dockerfile:a20365f530ee14621cbbe5378c5da4849cefbacc02ecd83ffceee57813bd9d64 
DEBU[2019-10-02T10:09:38.809041305Z] saved psx5i2dg0qe9sdq6qbg01u951 as local.sharedKey:context:context-.dockerignore:a20365f530ee14621cbbe5378c5da4849cefbacc02ecd83ffceee57813bd9d64 
DEBU[2019-10-02T10:09:38.918243562Z] resolving                                    
DEBU[2019-10-02T10:09:38.918396160Z] do request                                    request.headers="map[Accept:[application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.list.v2+json, application/vnd.oci.image.manifest.v1+json, application/vnd.oci.image.index.v1+json, *] User-Agent:[containerd/1.2.0+unknown]]" request.method=HEAD url="https://registry-1.docker.io/v2/library/busybox/manifests/latest"
DEBU[2019-10-02T10:09:39.326196585Z] fetch response received                       response.headers="map[Content-Length:[158] Content-Type:[application/json] Date:[Wed, 02 Oct 2019 10:09:39 GMT] Docker-Distribution-Api-Version:[registry/2.0] Strict-Transport-Security:[max-age=31536000] Www-Authenticate:[Bearer realm=\"https://auth.docker.io/token\",service=\"registry.docker.io\",scope=\"repository:library/busybox:pull\"]]" status="401 Unauthorized" url="https://registry-1.docker.io/v2/library/busybox/manifests/latest"
DEBU[2019-10-02T10:09:39.326269426Z] Unauthorized                                  header="Bearer realm=\"https://auth.docker.io/token\",service=\"registry.docker.io\",scope=\"repository:library/busybox:pull\""
DEBU[2019-10-02T10:09:39.747936008Z] do request                                    request.headers="map[Accept:[application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.list.v2+json, application/vnd.oci.image.manifest.v1+json, application/vnd.oci.image.index.v1+json, *] User-Agent:[containerd/1.2.0+unknown]]" request.method=HEAD url="https://registry-1.docker.io/v2/library/busybox/manifests/latest"
DEBU[2019-10-02T10:09:39.897229617Z] fetch response received                       response.headers="map[Content-Length:[1864] Content-Type:[application/vnd.docker.distribution.manifest.list.v2+json] Date:[Wed, 02 Oct 2019 10:09:39 GMT] Docker-Content-Digest:[sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e] Docker-Distribution-Api-Version:[registry/2.0] Etag:[\"sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e\"] Strict-Transport-Security:[max-age=31536000]]" status="200 OK" url="https://registry-1.docker.io/v2/library/busybox/manifests/latest"
DEBU[2019-10-02T10:09:39.897328087Z] resolved                                      desc.digest="sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e"
DEBU[2019-10-02T10:09:39.897380651Z] fetch                                         digest="sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e" mediatype=application/vnd.docker.distribution.manifest.list.v2+json size=1864
DEBU[2019-10-02T10:09:39.897656009Z] fetch                                         digest="sha256:dd97a3fe6d721c5cf03abac0f50e2848dc583f7c4e41bf39102ceb42edfd1808" mediatype=application/vnd.docker.distribution.manifest.v2+json size=527
DEBU[2019-10-02T10:09:39.897905589Z] fetch                                         digest="sha256:19485c79a9bbdca205fce4f791efeaa2a103e23431434696cc54fdd939e9198d" mediatype=application/vnd.docker.container.image.v1+json size=1497
DEBU[2019-10-02T10:09:39.899053825Z] fetch                                         digest="sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e" mediatype=application/vnd.docker.distribution.manifest.list.v2+json size=1864
DEBU[2019-10-02T10:09:39.899231759Z] fetch                                         digest="sha256:dd97a3fe6d721c5cf03abac0f50e2848dc583f7c4e41bf39102ceb42edfd1808" mediatype=application/vnd.docker.distribution.manifest.v2+json size=527
DEBU[2019-10-02T10:09:39.899355095Z] fetch                                         digest="sha256:19485c79a9bbdca205fce4f791efeaa2a103e23431434696cc54fdd939e9198d" mediatype=application/vnd.docker.container.image.v1+json size=1497
DEBU[2019-10-02T10:09:39.899862619Z] resolving                                    
DEBU[2019-10-02T10:09:39.899921121Z] do request                                    request.headers="map[Accept:[application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.list.v2+json, application/vnd.oci.image.manifest.v1+json, application/vnd.oci.image.index.v1+json, *] Authorization:[Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCIsIng1YyI6WyJNSUlDK2pDQ0FwK2dBd0lCQWdJQkFEQUtCZ2dxaGtqT1BRUURBakJHTVVRd1FnWURWUVFERXpzeVYwNVpPbFZMUzFJNlJFMUVVanBTU1U5Rk9reEhOa0U2UTFWWVZEcE5SbFZNT2tZelNFVTZOVkF5VlRwTFNqTkdPa05CTmxrNlNrbEVVVEFlRncweE9UQXhNVEl3TURJeU5EVmFGdzB5TURBeE1USXdNREl5TkRWYU1FWXhSREJDQmdOVkJBTVRPMUpMTkZNNlMwRkxVVHBEV0RWRk9rRTJSMVE2VTBwTVR6cFFNbEpMT2tOWlZVUTZTMEpEU0RwWFNVeE1Pa3hUU2xrNldscFFVVHBaVWxsRU1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBcjY2bXkveXpHN21VUzF3eFQ3dFplS2pqRzcvNnBwZFNMY3JCcko5VytwcndzMGtIUDVwUHRkMUpkcFdEWU1OZWdqQXhpUWtRUUNvd25IUnN2ODVUalBUdE5wUkdKVTRkeHJkeXBvWGc4TVhYUEUzL2lRbHhPS2VNU0prNlRKbG5wNGFtWVBHQlhuQXRoQzJtTlR5ak1zdFh2ZmNWN3VFYWpRcnlOVUcyUVdXQ1k1Ujl0a2k5ZG54Z3dCSEF6bG8wTzJCczFmcm5JbmJxaCtic3ZSZ1FxU3BrMWhxYnhSU3AyRlNrL2tBL1gyeUFxZzJQSUJxWFFMaTVQQ3krWERYZElJczV6VG9ZbWJUK0pmbnZaMzRLcG5mSkpNalpIRW4xUVJtQldOZXJZcVdtNVhkQVhUMUJrQU9aditMNFVwSTk3NFZFZ2ppY1JINVdBeWV4b1BFclRRSURBUUFCbzRHeU1JR3ZNQTRHQTFVZER3RUIvd1FFQXdJSGdEQVBCZ05WSFNVRUNEQUdCZ1JWSFNVQU1FUUdBMVVkRGdROUJEdFNTelJUT2t0QlMxRTZRMWcxUlRwQk5rZFVPbE5LVEU4NlVESlNTenBEV1ZWRU9rdENRMGc2VjBsTVREcE1VMHBaT2xwYVVGRTZXVkpaUkRCR0JnTlZIU01FUHpBOWdEc3lWMDVaT2xWTFMxSTZSRTFFVWpwU1NVOUZPa3hITmtFNlExVllWRHBOUmxWTU9rWXpTRVU2TlZBeVZUcExTak5HT2tOQk5sazZTa2xFVVRBS0JnZ3Foa2pPUFFRREFnTkpBREJHQWlFQXFOSXEwMFdZTmM5Z2tDZGdSUzRSWUhtNTRZcDBTa05Rd2lyMm5hSWtGd3dDSVFEMjlYdUl5TmpTa1cvWmpQaFlWWFB6QW9TNFVkRXNvUUhyUVZHMDd1N3ZsUT09Il19.eyJhY2Nlc3MiOlt7InR5cGUiOiJyZXBvc2l0b3J5IiwibmFtZSI6ImxpYnJhcnkvYnVzeWJveCIsImFjdGlvbnMiOlsicHVsbCJdfV0sImF1ZCI6InJlZ2lzdHJ5LmRvY2tlci5pbyIsImV4cCI6MTU3MDAxMTI3OSwiaWF0IjoxNTcwMDEwOTc5LCJpc3MiOiJhdXRoLmRvY2tlci5pbyIsImp0aSI6IlA5c3FsYTEtUlRaYl9yWjc0c3RKIiwibmJmIjoxNTcwMDEwNjc5LCJzdWIiOiIifQ.accohigXUIcwdXsGSrdhDEzgA_mViM27H1y7-S08HyZXfO6hlCb-n_Q-hWX6465-AQDq6CYkJvZPlGQweHeTgJ6IFrYE8fcwZP9XDEZVCwk2guHEgXIyW4Ah9hC2xG8yBkDtU6kXALz6wDix4W9v_eoabCMTUGiNWrcIY2dc4Q2W1zDNZr3Oq-sy2JON1p0vBzf74qCkAKXPBXEbseopmVwJTXcyFkYOkilgzRcX-s-dck8G1l769PYZm-X8mVf6WZPhMcW3jACWz6q2LFayUaTnTcUUn_dvFT15x3aXt9skVikTEvYc9FbrFDvPyq_YjLyH4DiYMfISx7lyV5ebuA] User-Agent:[containerd/1.2.0+unknown]]" request.method=HEAD url="https://registry-1.docker.io/v2/library/busybox/manifests/sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e"
DEBU[2019-10-02T10:09:40.064996129Z] fetch response received                       response.headers="map[Content-Length:[1864] Content-Type:[application/vnd.docker.distribution.manifest.list.v2+json] Date:[Wed, 02 Oct 2019 10:09:39 GMT] Docker-Content-Digest:[sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e] Docker-Distribution-Api-Version:[registry/2.0] Etag:[\"sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e\"] Strict-Transport-Security:[max-age=31536000]]" status="200 OK" url="https://registry-1.docker.io/v2/library/busybox/manifests/sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e"
DEBU[2019-10-02T10:09:40.065108676Z] resolved                                      desc.digest="sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e"
DEBU[2019-10-02T10:09:40.065182653Z] fetch                                         digest="sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e" mediatype=application/vnd.docker.distribution.manifest.list.v2+json size=1864
DEBU[2019-10-02T10:09:40.065458168Z] fetch                                         digest="sha256:dd97a3fe6d721c5cf03abac0f50e2848dc583f7c4e41bf39102ceb42edfd1808" mediatype=application/vnd.docker.distribution.manifest.v2+json size=527
DEBU[2019-10-02T10:09:40.065576472Z] fetch                                         digest="sha256:19485c79a9bbdca205fce4f791efeaa2a103e23431434696cc54fdd939e9198d" mediatype=application/vnd.docker.container.image.v1+json size=1497
DEBU[2019-10-02T10:09:40.075623606Z] do request                                    base="https://registry-1.docker.io/v2/library/busybox" digest="sha256:7c9d20b9b6cda1c58bc4f9d6c401386786f584437abbe87e58910f8a9a15386b" request.headers="map[Accept:[application/vnd.docker.image.rootfs.diff.tar.gzip, *]]" request.method=GET url="https://registry-1.docker.io/v2/library/busybox/blobs/sha256:7c9d20b9b6cda1c58bc4f9d6c401386786f584437abbe87e58910f8a9a15386b"
DEBU[2019-10-02T10:09:40.281150925Z] fetch response received                       base="https://registry-1.docker.io/v2/library/busybox" digest="sha256:7c9d20b9b6cda1c58bc4f9d6c401386786f584437abbe87e58910f8a9a15386b" response.headers="map[Accept-Ranges:[bytes] Age:[618792] Cache-Control:[public, max-age=14400] Cf-Cache-Status:[HIT] Cf-Ray:[51f5e1d28fd8d915-AMS] Content-Length:[760770] Content-Type:[application/octet-stream] Date:[Wed, 02 Oct 2019 10:09:40 GMT] Etag:[\"4166ef0ced6549afb3ac160752b5636d\"] Expect-Ct:[max-age=604800, report-uri=\"https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct\"] Expires:[Wed, 02 Oct 2019 14:09:40 GMT] Last-Modified:[Wed, 04 Sep 2019 19:20:59 GMT] Server:[cloudflare] Set-Cookie:[__cfduid=dea69cbf25e3a29d5e5ea13f0d5282c911570010980; expires=Thu, 01-Oct-20 10:09:40 GMT; path=/; domain=.production.cloudflare.docker.com; HttpOnly; Secure] Vary:[Accept-Encoding] X-Amz-Id-2:[Pmnorq/zIjCh+48lEOUWXI+/UcnyE4/s7TkyZHjdQa4caRdBfxCcsZzzrCoZot1D7RCSFn7/MN8=] X-Amz-Request-Id:[BCDC641C093E62B5] X-Amz-Version-Id:[FWSiFqYMfN_YqV4cGb1Cvkg3XulaRwOo]]" status="200 OK" url="https://registry-1.docker.io/v2/library/busybox/blobs/sha256:7c9d20b9b6cda1c58bc4f9d6c401386786f584437abbe87e58910f8a9a15386b"
DEBU[2019-10-02T10:09:40.419049106Z] Applying tar in /var/lib/docker/overlay2/23affe92dc14f25734884f31d0844312f129ca15dfae7a51854c67a709584bae/diff  storage-driver=overlay2
DEBU[2019-10-02T10:09:40.601672430Z] Applied tar sha256:6c0ea40aef9d2795f922f4e8642f0cd9ffb9404e6f3214693a1fd45489f38b44 to 23affe92dc14f25734884f31d0844312f129ca15dfae7a51854c67a709584bae, size: 1219782 
WARN[2019-10-02T10:09:40.631686795Z] grpc: addrConn.createTransport failed to connect to { 0  <nil>}. Err :connection error: desc = "transport: Error while dialing only one connection allowed". Reconnecting...  module=grpc

</details>

closed time in 13 hours

thaJeztah

issue closedmoby/buildkit

Add documentation (/support?) changing dockerfile name with dockerfile frontend

I'm failing to use a differently-named Dockerfile (other.dockerfile). I tried different flags gleaned from reading the code, but all attempts failed:

Command:

buildctl-daemonless.sh build --progress=plain \
      --frontend=dockerfile.v0 ...

--local dockerfile=. --local filename=other.dockerfile
-> error: other.dockerfile not a directory

--local dockerfile=other.dockerfile
-> error: other.dockerfile not a directory

--local dockerfile=. --local dockerfilekey=other.dockerfile
-> error: other.dockerfile not a directory

--local dockerfile=. --local defaultDockerfileName=other.dockerfile
-> error: other.dockerfile not a directory

Using moby/buildkit:latest - buildctl github.com/moby/buildkit v0.6.3 928f3b480d7460aacb401f68610058ffdb549aca
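
For reference, the Dockerfile name is passed to the dockerfile.v0 frontend as a frontend option rather than as a --local entry; a minimal sketch, assuming the context and the Dockerfile both live in the current directory:

buildctl-daemonless.sh build --progress=plain \
      --frontend=dockerfile.v0 \
      --local context=. \
      --local dockerfile=. \
      --opt filename=other.dockerfile

Here --local dockerfile still points at the directory containing the file, and --opt filename selects the file within it.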

closed time in 13 hours

TeNNoX

issue closedmoby/buildkit

Are cache mounts shared across base images?

We just ran into an issue that seems to indicate that cache mounts are shared across different base images. Is that the case?

Here's the stacktrace from running a test in a python:3.8.3-slim-buster image:

Traceback (most recent call last):
  File "myapp.py", line 27, in main
    config = json.loads(id_, object_hook=my_json.loadTimestamps)
  File "/usr/local/lib/python3.8/json/__init__.py", line 370, in loads
    return cls(**kw).decode(s)
  File "/usr/local/lib/python3.8/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/local/lib/python3.8/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/src/app/myapp.py", line 96, in _init
    connector.init(source_api,
  File "/usr/src/app/myapp/remote_sql.py", line 225, in init
    with RemoteSqlEngine(ext, source_options, source_auth) as engine:
  File "/usr/src/app/myapp/remote_sql.py", line 149, in __init__
    engine = create_engine(self.create_conn(ext, source_options, source_auth['password']), connect_args=connect_args,
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/__init__.py", line 479, in create_engine
    return strategy.create(*args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/strategies.py", line 87, in create
    dbapi = dialect_cls.dbapi(**dbapi_args)
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/dialects/postgresql/psycopg2.py", line 737, in dbapi
    import psycopg2
  File "/usr/local/lib/python3.8/site-packages/psycopg2/__init__.py", line 50, in <module>
    from psycopg2._psycopg import (                     # noqa
ImportError: libc.musl-x86_64.so.1: cannot open shared object file: No such file or directory

Notice that it's trying to load musl, but that should only be present in the python:3.8.3-alpine3.11 images that we build for another image.
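
If I understand the cache mount semantics correctly, a cache is keyed by its id (which defaults to the target path), not by the base image, so two images mounting the same target share one cache directory. A minimal sketch of keeping them apart with explicit ids; the id names and the pip cache target are assumptions, not taken from the build above:

# in the python:3.8.3-slim-buster Dockerfile
RUN --mount=type=cache,id=pip-debian,target=/root/.cache/pip pip install -r requirements.txt

# in the python:3.8.3-alpine3.11 Dockerfile
RUN --mount=type=cache,id=pip-alpine,target=/root/.cache/pip pip install -r requirements.txt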

closed time in 13 hours

daveisfera

issue closedmoby/buildkit

Base image no longer pulled/tagged

With standard docker build, the base image was pulled and tagged, so you could easily see the list of the base images and then pull updates on a schedule. With buildkit this doesn't happen, and I believe this is intentional (because intermediate layers aren't tracked anymore), but is there a way to list the base images that are on a machine so they can be pulled to update them on a schedule?

closed time in 13 hours

daveisfera

issue commentmoby/buildkit

Base image no longer pulled/tagged

So is there a way to get the list of base images that were pulled or that are currently available from buildkit?

No, buildkit does not track/store them as images. Just pulls layers and tracks them as build cache.
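
In the meantime, one rough workaround is to derive the list from the Dockerfiles themselves and pull on a schedule; a sketch that ignores multi-stage aliases and build args:

# collect FROM images from local Dockerfiles and pull them (skips the scratch pseudo-image)
grep -hiE '^FROM ' Dockerfile* | awk '{print $2}' | grep -vi '^scratch$' | sort -u | xargs -r -n1 docker pull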

daveisfera

comment created time in 13 hours

issue closedmoby/buildkit

How to use cache mount for running process?

This page documents how to use a cache mount when building an image, but how can you use that same cache mount when running a process?

closed time in 13 hours

daveisfera

issue closedmoby/buildkit

multiple export point

Hi,

Does buildkit support multiple output points, something like the lines below?

--output type=image,name=prod.docker.io/username/image,push=true \
--output type=image,name=test.docker.io/username/image,push=true

When I try to add multiple outputs, it gives this error:

error: currently only single Exports can be specified

Is there any other way to do that?

--output type=image," name=prod.docker.io/abc:3, name=test.docker.io/abc:3"

exports only prod.docker.io

Thanks
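
One workaround, sketched below, is to push the image to a single registry from the build and then retag and push it to the second one afterwards:

# push once from the build, then copy the image to the second registry
docker pull prod.docker.io/username/image:3
docker tag prod.docker.io/username/image:3 test.docker.io/username/image:3
docker push test.docker.io/username/image:3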

closed time in 13 hours

kenotsolutions

issue closedmoby/buildkit

Unable to `go get` locally

Hey guys,

I wanted to play around with LLB, BuildKit, and frontends. Unfortunately, I've failed at the very beginning:

$ go mod init my-fancy-project
go: creating new go.mod: module my-fancy-project
$ go get github.com/moby/buildkit/client/llb
go: found github.com/moby/buildkit/client/llb in github.com/moby/buildkit v0.7.1
go get: github.com/moby/buildkit@v0.7.1 requires
	github.com/containerd/containerd@v1.4.0-0: reading github.com/containerd/containerd/go.mod at revision v1.4.0-0: unknown revision v1.4.0-0

Unfortunately, it also fails for any v0.7.x and v0.6.x versions. Fails on master (v0.7.1-0.20200623231744-95010be66d7f), too.

Any ideas how to fix it?
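
A commonly suggested workaround is to pin containerd to a tag the module proxy can resolve via a replace directive; a sketch, assuming a 1.3-series release is API-compatible with the packages you import (which may not hold for everything in buildkit):

# replace the unresolvable containerd pseudo-version with a released tag, then fetch buildkit
go mod edit -replace github.com/containerd/containerd=github.com/containerd/containerd@v1.3.6
go get github.com/moby/buildkit/client/llb@v0.7.1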

closed time in 13 hours

maciej-gol

pull request commentmoby/buildkit

session: track sessions with a group construct

Found an issue with ResolveImageConfig and Resolver cache that needs some refactoring.

tonistiigi

comment created time in 14 hours

pull request commentmoby/buildkit

session: track sessions with a group construct

So to be clear: currently, when multiple sessions share a vertex, if one session drops, the solve is cancelled for everyone?

If a session drops while it is being used, and it was the session that was randomly chosen for the op, then the whole op fails. The race window is actually quite small; even my reproducer isn't very effective.

A somewhat more interesting case is also what happens to the puller. We reuse the resolver to avoid making new registry connections but this means CacheKey and Exec may have a different active session for getting the credentials. So there is a need to "update" the resolver with the new authentication backend.

I looked closely at the changes, and it LGTM. I can see how various functions that depend on sessions now can choose Any of the session in the session.Group.

Yes, and Any automatically retries the next session, should the previous one fail.

I didn't see any cases where multiple sessions are added to a group, but I think that's what you alluded to in the PR comment above.

Multiple sessions are added to the group automatically with the jobs mechanism. This existed before. The difference is that, when passing to ops, the solver previously picked a random one and passed it as a string, and now it passes a callback (masked in the Group interface) so the op can get access to all the valid sessions when needed.

tonistiigi

comment created time in 14 hours

Pull request review commentmoby/buildkit

session: track sessions with a group construct

 func (g *cacheRefGetter) getRefCacheDirNoCache(ctx context.Context, key string, 	return mRef, nil } -func (e *execOp) getSSHMountable(ctx context.Context, m *pb.Mount) (cache.Mountable, error) {-	sessionID := session.FromContext(ctx)-	if sessionID == "" {-		return nil, errors.New("could not access local files without session")-	}--	timeoutCtx, cancel := context.WithTimeout(ctx, 5*time.Second)-	defer cancel()--	caller, err := e.sm.Get(timeoutCtx, sessionID)-	if err != nil {-		return nil, err-	}--	if err := sshforward.CheckSSHID(ctx, caller, m.SSHOpt.ID); err != nil {-		if m.SSHOpt.Optional {-			return nil, nil-		}-		if grpcerrors.Code(err) == codes.Unimplemented {-			return nil, errors.Errorf("no SSH key %q forwarded from the client", m.SSHOpt.ID)+func (e *execOp) getSSHMountable(ctx context.Context, m *pb.Mount, g session.Group) (cache.Mountable, error) {+	var caller session.Caller+	err := e.sm.Any(ctx, g, func(ctx context.Context, _ string, c session.Caller) error {+		if err := sshforward.CheckSSHID(ctx, c, m.SSHOpt.ID); err != nil {+			if m.SSHOpt.Optional {+				return nil+			}+			if grpcerrors.Code(err) == codes.Unimplemented {+				return errors.Errorf("no SSH key %q forwarded from the client", m.SSHOpt.ID)+			}+			return err 		}+		caller = c+		return nil+	})+	if err != nil { 		return nil, err 	}-+	// because ssh socket remains active, to actually handle session disconnecting ssh error+	// should restart the whole exec with new session

If we have 2 builds both running the same exec() with ssh mounted, the ssh remains active for the whole duration of the exec. So if one session goes away, there is no way to switch this ssh socket to another session, as it might be in an unknown state. Atm we ignore this and only validate that the session works when the exec starts. But if we wanted to handle this case, then when the connection drops from ssh, we could just restart the whole exec with the new session. If you look at the "local source" implementation now, that is what I do there: if a transfer fails, it will check whether we can attempt a new transfer from another session before failing the build.

tonistiigi

comment created time in 14 hours

Pull request review commentmoby/buildkit

session: track sessions with a group construct

+package session++import (+	"context"+	"time"++	"github.com/pkg/errors"+)++type Group interface {+	SessionIterator() Iterator+}+type Iterator interface {+	NextSession() string+}++func NewGroup(ids ...string) Group {

Vertexes use their own implementation of Group defined in jobs.go. This is the group passed to ops/subbuild etc.

tonistiigi

comment created time in 15 hours

Pull request review commentmoby/buildkit

session: track sessions with a group construct

 func ResolveCacheImporterFunc(sm *session.Manager) remotecache.ResolveCacheImpor 	} } -func getContentStore(ctx context.Context, sm *session.Manager, storeID string) (content.Store, error) {-	sessionID := session.FromContext(ctx)+func getContentStore(ctx context.Context, sm *session.Manager, g session.Group, storeID string) (content.Store, error) {+	// TODO: to ensure correct session is detected, new api for finding if storeID is supported is needed

When there are multiple sessions, the daemon could send a "detect" request to know which one supports the current storeID. If a specific session does not know about the storeID, then the next one is tried.

tonistiigi

comment created time in 15 hours

startedmoby/ipvs

started time in 15 hours

startedmoby/moby

started time in 15 hours

startedmoby/buildkit

started time in 16 hours

Pull request review commentmoby/buildkit

session: track sessions with a group construct

 func (g *cacheRefGetter) getRefCacheDirNoCache(ctx context.Context, key string, 	return mRef, nil } -func (e *execOp) getSSHMountable(ctx context.Context, m *pb.Mount) (cache.Mountable, error) {-	sessionID := session.FromContext(ctx)-	if sessionID == "" {-		return nil, errors.New("could not access local files without session")-	}--	timeoutCtx, cancel := context.WithTimeout(ctx, 5*time.Second)-	defer cancel()--	caller, err := e.sm.Get(timeoutCtx, sessionID)-	if err != nil {-		return nil, err-	}--	if err := sshforward.CheckSSHID(ctx, caller, m.SSHOpt.ID); err != nil {-		if m.SSHOpt.Optional {-			return nil, nil-		}-		if grpcerrors.Code(err) == codes.Unimplemented {-			return nil, errors.Errorf("no SSH key %q forwarded from the client", m.SSHOpt.ID)+func (e *execOp) getSSHMountable(ctx context.Context, m *pb.Mount, g session.Group) (cache.Mountable, error) {+	var caller session.Caller+	err := e.sm.Any(ctx, g, func(ctx context.Context, _ string, c session.Caller) error {+		if err := sshforward.CheckSSHID(ctx, c, m.SSHOpt.ID); err != nil {+			if m.SSHOpt.Optional {+				return nil+			}+			if grpcerrors.Code(err) == codes.Unimplemented {+				return errors.Errorf("no SSH key %q forwarded from the client", m.SSHOpt.ID)+			}+			return err 		}+		caller = c+		return nil+	})+	if err != nil { 		return nil, err 	}-+	// because ssh socket remains active, to actually handle session disconnecting ssh error+	// should restart the whole exec with new session

Can you expand on this? I don't understand.

tonistiigi

comment created time in 16 hours

Pull request review commentmoby/buildkit

session: track sessions with a group construct

+package session++import (+	"context"+	"time"++	"github.com/pkg/errors"+)++type Group interface {+	SessionIterator() Iterator+}+type Iterator interface {+	NextSession() string+}++func NewGroup(ids ...string) Group {

I don't see this being called with more than one session ID in this PR. How will multiple sessions come together in a group in the future?

tonistiigi

comment created time in 17 hours

Pull request review commentmoby/buildkit

session: track sessions with a group construct

 func NewRegistryConfig(m map[string]config.RegistryConfig) docker.RegistryHosts 	) } -func New(ctx context.Context, hosts docker.RegistryHosts, sm *session.Manager) remotes.Resolver {+type SessionAuthenticator struct {+	sm    *session.Manager+	g     session.Group+	mu    sync.Mutex+	cache map[string]credentials+}++type credentials struct {+	user   string+	secret string+}++func NewSessionAuthenticator(sm *session.Manager, g session.Group) *SessionAuthenticator {+	return &SessionAuthenticator{sm: sm, g: g, cache: map[string]credentials{}}+}++func (a *SessionAuthenticator) credentials(h string) (string, string, error) {+	a.mu.Lock()+	c, ok := a.cache[h]+	if ok {+		a.mu.Unlock()+		return c.user, c.secret, nil+	}+	g := a.g+	a.mu.Unlock()

Seems like an RWMutex would be more appropriate here?

tonistiigi

comment created time in 17 hours

Pull request review commentmoby/buildkit

session: track sessions with a group construct

 type Exporter interface {  type ExporterInstance interface { 	Name() string-	Export(context.Context, Source) (map[string]string, error)+	Export(context.Context, Source, string) (map[string]string, error)

Prefer to name these arguments.

tonistiigi

comment created time in 20 hours

Pull request review commentmoby/buildkit

session: track sessions with a group construct

 func (e *execOp) getSecretMountable(ctx context.Context, m *pb.Mount) (cache.Mou 	if id == "" { 		return nil, errors.Errorf("secret ID missing from mount options") 	}--	sessionID := session.FromContext(ctx)-	if sessionID == "" {-		return nil, errors.New("could not access local files without session")-	}--	timeoutCtx, cancel := context.WithTimeout(ctx, 5*time.Second)-	defer cancel()--	caller, err := e.sm.Get(timeoutCtx, sessionID)+	var dt []byte+	var err error+	err = e.sm.Any(ctx, g, func(ctx context.Context, _ string, caller session.Caller) error {+		dt, err = secrets.GetSecret(ctx, caller, id)+		if err != nil {+			if errors.Is(err, secrets.ErrNotFound) && m.SecretOpt.Optional {+				return nil+			}+			return err+		}+		return nil+	}) 	if err != nil {

Can be simplified to:

if err != nil || dt == nil {
	return nil, err
}
tonistiigi

comment created time in 18 hours

Pull request review commentmoby/buildkit

session: track sessions with a group construct

 func ResolveCacheImporterFunc(sm *session.Manager) remotecache.ResolveCacheImpor 	} } -func getContentStore(ctx context.Context, sm *session.Manager, storeID string) (content.Store, error) {-	sessionID := session.FromContext(ctx)+func getContentStore(ctx context.Context, sm *session.Manager, g session.Group, storeID string) (content.Store, error) {+	// TODO: to ensure correct session is detected, new api for finding if storeID is supported is needed

What does it mean for a storeID to be supported?

tonistiigi

comment created time in 20 hours

pull request commentmoby/sys

mountinfo.Mounted: optimize by adding fast paths

OK, we need a replace in go.mod for such tests, and once the replace is there, we can ditch the for p in $(PACKAGES) loop in the top Makefile.
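
Presumably that replace points the mount module at its sibling directory, so tests build against the in-tree mountinfo instead of a published version; a sketch, assuming the modules live at mount/ and mountinfo/ in the repo root:

# point the mount module's go.mod at the local mountinfo module
cd mount && go mod edit -replace github.com/moby/sys/mountinfo=../mountinfo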

kolyshkin

comment created time in 18 hours

pull request commentmoby/sys

mountinfo.Mounted: optimize by adding fast paths

Will fix

Starting to wonder if it would be cleaner to have separate Makefiles in each module

It will result in more code (the same/similar for loop in the top Makefile, plus a go test target in every sub-Makefile).

kolyshkin

comment created time in 18 hours

pull request commentmoby/sys

mountinfo.Mounted: optimize by adding fast paths

Cross is failing:

for os in linux freebsd darwin windows; do \
	for arch in amd64 arm arm64 ppc64le s390x; do \
		echo "$os/$arch" | grep -qE '^((freebsd|darwin|windows)/(ppc64le|s390x)|windows/arm64|freebsd/s390x)$' && continue; \
		echo "# building for $os/$arch"; \
		GOOS=$os GOARCH=$arch go build ./...; \
	done; \
done
# building for linux/amd64
##[error]mount/mount.go:9:2: cannot find package "github.com/moby/sys/mountinfo" in any of:
	/opt/hostedtoolcache/go/1.13.12/x64/src/github.com/moby/sys/mountinfo (from $GOROOT)
	/home/runner/go/src/github.com/moby/sys/mountinfo (from $GOPATH)
##[error]mount/flags_linux.go:4:2: cannot find package "golang.org/x/sys/unix" in any of:
	/opt/hostedtoolcache/go/1.13.12/x64/src/golang.org/x/sys/unix (from $GOROOT)
	/home/runner/go/src/golang.org/x/sys/unix (from $GOPATH)
##[error]Makefile:33: recipe for target 'cross' failed
make: *** [cross] Error 1
##[error]Process completed with exit code 2.

Does it need the same trickery as we use for test? https://github.com/moby/sys/blob/fb9f8cf904285111be4c0db77af0ee5b3b4fdba3/Makefile#L10-L12

Starting to wonder if it would be cleaner to have separate Makefiles in each module, and then call those targets from the main Makefile (also beginning to feel the pain of the submodules 😓 - do you think it causes more issues to have them in the same repository than to have to maintain two repositories?)
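
A sketch of what the cross target might look like with that same per-module trick (assuming the PACKAGES variable the test target already uses), so each module builds against its own go.mod:

# run the cross-compile loop from inside each module directory
cross:
	@for p in $(PACKAGES); do \
		(cd $$p && for os in linux freebsd darwin windows; do \
			for arch in amd64 arm arm64 ppc64le s390x; do \
				echo "$$os/$$arch" | grep -qE '^((freebsd|darwin|windows)/(ppc64le|s390x)|windows/arm64|freebsd/s390x)$$' && continue; \
				echo "# building $$p for $$os/$$arch"; \
				GOOS=$$os GOARCH=$$arch go build ./... || exit 1; \
			done; \
		done) || exit 1; \
	done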

kolyshkin

comment created time in 18 hours

pull request commentmoby/moby

Upgrading the versions of images in Dockerfile.

Thanks, @wanghuaiqing2010 !

wanghuaiqing2010

comment created time in 18 hours

issue commentmoby/buildkit

[Bug] --export-cache is generating a malformed v2 schema manifest. Missing platform property

@tonistiigi Given that the OCI artifacts spec is expressly designed to address these kinds of custom pushed resources, I'd suggest we circle back on whether this should be using an Index vs a Manifest, as Quay will likely follow the specification and only allow artifacts as manifests until such time as the spec is extended.

Craga89

comment created time in 19 hours

startedmoby/moby

started time in 19 hours

issue commentmoby/buildkit

[Bug] --export-cache is generating a malformed v2 schema manifest. Missing platform property

What would setting it to true do in this context?

It would just replace the "docker" string in the mediatype values with "oci". No changes to the actual objects. We don't set it by default so that more registries that don't know about OCI remain supported. This would allow us to be compatible with the spec (by switching the spec document). Looks like the spec docs from 2016 do not explicitly mark the platform field as optional, although all the implementations of docker/distribution and Hub have always treated it that way. The pattern of using descriptor lists like this is nothing novel to buildkit; the same thing is used by cnab-oci, containerd snapshots, etc.
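
For comparison, this is roughly how the existing option looks on the image exporter (the image name is a placeholder; the cache exporter would gain an equivalent flag if it were added):

buildctl build --frontend=dockerfile.v0 --local context=. --local dockerfile=. \
  --output type=image,name=docker.io/username/image,push=true,oci-mediatypes=true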

Craga89

comment created time in 19 hours

issue commentmoby/buildkit

[Bug] --export-cache is generating a malformed v2 schema manifest. Missing platform property

@Craga89 Well, and the media type of the produced list is "application/vnd.docker.distribution.manifest.list.v2+json", but it contains tar-gzipped layers (instead of manifests) as well as a cache config, neither of which is (AFAIK) valid in a Schema 2 Manifest List.

Craga89

comment created time in 19 hours

issue commentmoby/buildkit

[Bug] --export-cache is generating a malformed v2 schema manifest. Missing platform property

If I understand the problem correctly, it seems that Quay doesn't currently support the OCI format, which is why the manifest upload is failing.

If you want, we could add the oci-mediatypes=true option for cache export as we do for images. Not ready to have it be the default yet, as older registries don't support it.

Would this setting solve anything given the above issue? What would setting it to true do in this context?

Craga89

comment created time in 20 hours

startedmoby/hyperkit

started time in 20 hours

issue commentmoby/moby

Cache not used if docker build is run on different hosts

Have you tried specifying the multiple --cache-from entries separately?

docker build \
  --build-arg BUILDKIT_INLINE_CACHE=1 \
  --cache-from image:builder \
  --cache-from image:latest \
  -t image:latest .
romangrothausmann

comment created time in 20 hours

push eventmoby/moby

Sebastiaan van Stijn

commit sha a9569f524360c354fc192c50fbc0869ec4ffa916

vendor: opencontainers/selinux v1.5.2 full diff: https://github.com/opencontainers/selinux/compare/v1.5.1...v1.5.2 - Implement FormatMountLabel unconditionally Implementing FormatMountLabel in situations built without selinux should be possible; the context will be ignored if no SELinux is available. - Remove potential race condition, where mcs label is freed Theoretically if you do not change the MCS Label then we free it and two commands later reserve it. If some other process was grabbing MCS Labels at the same time, the other process could get the same label. Signed-off-by: Sebastiaan van Stijn <github@gone.nl>

view details

Brian Goff

commit sha 3b4cfa97237a8e1fb5eb985e4a7c0717cd14f5c8

Merge pull request #41029 from thaJeztah/bump_selinux vendor: opencontainers/selinux v1.5.2

view details

push time in 20 hours

PR merged moby/moby

vendor: opencontainers/selinux v1.5.2 area/security/selinux status/2-code-review

full diff: https://github.com/opencontainers/selinux/compare/v1.5.1...v1.5.2

  • Implement FormatMountLabel unconditionally: implementing FormatMountLabel in situations built without SELinux should be possible; the context will be ignored if no SELinux is available.
  • Remove a potential race condition where the MCS label is freed: theoretically, if you do not change the MCS label, we free it and two commands later reserve it. If some other process was grabbing MCS labels at the same time, the other process could get the same label.


+32 -30

2 comments

4 changed files

thaJeztah

pr closed time in 20 hours

push eventmoby/moby

wanghuaiqing

commit sha 228d74842fd1ac97b5c8d11fd6a3c313eae5c051

Upgrading the versions of images in Dockerfile. In order to run tests on mips64el devices. Now official-images supports the following images for mips64el: buildpack-deps:stretch buildpack-deps:buster debian:stretch debian:buster. But official-images does not support the following images for mips64el: debian:jessie buildpack-deps:jessie. Signed-off-by: wanghuaiqing <wanghuaiqing@loongson.cn>

view details

Tianon Gravi

commit sha 7932d4adecf3f5554b53517c6222b3493d086e0e

Merge pull request #41145 from wanghuaiqing2010/master Upgrading the versions of images in Dockerfile.

view details

push time in 20 hours

PR merged moby/moby

Upgrading the versions of images in Dockerfile. area/testing status/2-code-review

In order to run tests on mips64el devices. official-images now supports the following images for mips64el: buildpack-deps:stretch, buildpack-deps:buster, debian:stretch, debian:buster.

But official-images does not support the following images for mips64el: debian:jessie, buildpack-deps:jessie.

Signed-off-by: wanghuaiqing wanghuaiqing@loongson.cn



+40 -40

10 comments

16 changed files

wanghuaiqing2010

pr closed time in 20 hours

push eventmoby/moby

Jintao Zhang

commit sha 85e3dddccdf74892dac74aa47de8cd2561147e08

update containerd to v1.3.6 Signed-off-by: Jintao Zhang <zhangjintao9020@gmail.com>

view details

Brian Goff

commit sha 534e219ad56bad60644359212ab6e33aa0c27ad3

Merge pull request #41169 from tao12345666333/update-containerd-v1.3.6

view details

push time in 20 hours

PR merged moby/moby

update containerd to v1.3.6

Full diff https://github.com/containerd/containerd/compare/v1.3.5...v1.3.6

The sixth patch release for containerd 1.3 includes a release process fix to not require the latest libseccomp. Prior releases in this series were pinned to libseccomp 2.3.3 and this update corrects the error in the v1.3.5 release which linked to the latest libseccomp.

Notable Updates

Pin libseccomp to 2.3.3 for the GH Actions-based release.yml containerd/containerd#4352


There are no substantive changes in this upgrade. :)

+1 -1

0 comment

1 changed file

tao12345666333

pr closed time in 20 hours

more