profile
Tõnis Tiigi (tonistiigi), Docker, San Francisco

moby/buildkit 2703

concurrent, cache-efficient, and Dockerfile-agnostic builder toolkit

aacebedo/dnsdock 544

DNS service discovery for Docker containers

tonistiigi/audiosprite 534

Jukebox/Howler/CreateJS compatible audio sprite generator

tonistiigi/buildkit-pack 79

buildkit frontend for buildpacks

containerd/fifo 56

fifo pkg for Go

carlosedp/riscv-bringup 52

Risc-V journey thru containers and new projects

dominictarr/kv 37

simple kv store for streams

docker/go 13

Go packages with small patches autogenerated (used for canonical/json)

icecrime/docker-api 5

Docker Remote API

push event tonistiigi/buildkit

Tonis Tiigi

commit sha a8c2137598c2147e4bc4861d739cc3cac032c207

resolver: add credentials cache Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

push time in 5 hours

push event tonistiigi/buildkit

Tonis Tiigi

commit sha dd304ede339a31c4a90de9fb17dee20d6ed68029

solver: fix marking already cached vertex as cancelled Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

push time in 5 hours

pull request comment moby/buildkit

solver: fix marking already cached vertex as cancelled

Do not set cancelled error when vertex already has error

Having an error without a completion time shouldn't really happen, but I left in a condition for future compatibility to make sure we never override an actual error.

tonistiigi

comment created time in 5 hours

Pull request review comment moby/buildkit

session: track sessions with a group construct

 func NewRegistryConfig(m map[string]config.RegistryConfig) docker.RegistryHosts
 	)
 }
 
-func New(ctx context.Context, hosts docker.RegistryHosts, sm *session.Manager) remotes.Resolver {
+type SessionAuthenticator struct {
+	sm      *session.Manager
+	groups  []session.Group
+	mu      sync.RWMutex
+	cache   map[string]credentials
+	cacheMu sync.RWMutex
+}
+
+type credentials struct {
+	user    string
+	secret  string
+	created time.Time
+}
+
+func NewSessionAuthenticator(sm *session.Manager, g session.Group) *SessionAuthenticator {
+	return &SessionAuthenticator{sm: sm, groups: []session.Group{g}, cache: map[string]credentials{}}
+}
+
+func (a *SessionAuthenticator) credentials(h string) (string, string, error) {
+	a.cacheMu.RLock()
+	c, ok := a.cache[h]
+	if ok && time.Since(c.created) < time.Minute {

Yes. I guess I could add a named constant for it so it is clearer.
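For illustration, a minimal Go sketch of what that named constant could look like (names and placement are illustrative, not the actual buildkit change; the one-minute TTL comes from the diff above):

package resolver

import (
    "sync"
    "time"
)

// credentialsCacheTTL is the named constant suggested above; the one-minute
// value matches the literal used in the diff.
const credentialsCacheTTL = time.Minute

type credentials struct {
    user    string
    secret  string
    created time.Time
}

type SessionAuthenticator struct {
    cache   map[string]credentials
    cacheMu sync.RWMutex
}

// cachedCredentials returns the cached entry for host h while it is still fresh.
func (a *SessionAuthenticator) cachedCredentials(h string) (user, secret string, ok bool) {
    a.cacheMu.RLock()
    defer a.cacheMu.RUnlock()
    c, found := a.cache[h]
    if !found || time.Since(c.created) >= credentialsCacheTTL {
        return "", "", false
    }
    return c.user, c.secret, true
}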

tonistiigi

comment created time in 5 hours

PR opened moby/buildkit

solver: fix marking already cached vertex as cancelled

When a vertex is running and the progress stream ends, it is marked as cancelled. This is not correct if the vertex was actually marked as cached and might have been started by another parallel job. This PR records whether the vertex was marked as cached before and, in that case, avoids setting the cancellation error.

Signed-off-by: Tonis Tiigi tonistiigi@gmail.com
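As a rough illustration of the idea (the type and field names below are hypothetical, not the actual solver code): the vertex remembers that it reached the cached state, so a later progress-stream close no longer records a cancellation.

package solver

import "context"

// vertexState is a hypothetical stand-in for the solver's per-vertex status.
type vertexState struct {
    cached    bool  // set when the vertex is served from cache
    completed bool  // set when execution finished normally
    err       error // first real error, never overwritten
}

func (v *vertexState) markCached() { v.cached = true }

// onProgressStreamClosed mirrors the guard described in the PR text: only
// record a cancellation if the vertex neither completed nor was cached, and
// there is no actual error to preserve.
func (v *vertexState) onProgressStreamClosed() {
    if !v.completed && !v.cached && v.err == nil {
        v.err = context.Canceled
    }
}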

+26 -3

0 comments

1 changed file

pr created time in 17 hours

create branch tonistiigi/buildkit

branch: mark-cached

created branch time in 17 hours

push event tonistiigi/buildkit

Tonis Tiigi

commit sha 2e9987ad1664cd89c04deaa528a7e56111c0c378

session: track sessions with a group construct Avoid hidden session passing and allow one session to drop when multiple builds share a vertex. Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

Tonis Tiigi

commit sha 214aa5dbcf68f3eb5fc66b1f5198a84c6e2505e2

pull: allow separate sessions for different parts of pull Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

Tonis Tiigi

commit sha 05013a663b6a717f8cab87d64ad8ea98b577cb9e

pull: fix session updating on resolver Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

Tonis Tiigi

commit sha ed60a2f4e11493fc83b45742bc860cc448a60cbb

resolver: add credentials cache Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

push time in 17 hours

issue closed moby/buildkit

Buildkit mount=type=cache not working as expected

Hi,

I'm trying to use --mount=type=cache to cache the output of pip builds, but it isn't working as expected.

Below is an example Dockerfile using yum; it results in an empty cache folder.

# syntax = docker/dockerfile:experimental
FROM centos:8 AS stagea

RUN --mount=type=cache,target=/var/cache/yum \
     --mount=type=cache,target=/var/cache/dnf \
     yum makecache --timer \
     && yum -y install \
     which


FROM centos:8 AS stageb

RUN --mount=type=cache,from=stagea,source=/var/cache/yum,target=/var/cache/yum \
     --mount=type=cache,from=stagea,source=/var/cache/yum,target=/var/cache/dnf \
     ls -l /var/cache/yum; ls -l /var/cache/dnf; \
     yum makecache --timer; \
     yum -y install \
     which

To try to get around this, I've tried not caching stagea, but I get the error "failed to solve with frontend dockerfile.v0: failed to solve with frontend gateway.v0: rpc error: code = Unknown desc = failed to build LLB: failed to compute cache key: "/var/cache/yum" not found: not found".

An example of this is:

# syntax = docker/dockerfile:experimental
FROM centos:8 AS stagea

RUN  yum makecache --timer \
     && yum -y install \
     which


FROM centos:8 AS stageb

RUN --mount=type=cache,from=stagea,source=/var/cache/yum,target=/var/cache/yum \
     --mount=type=cache,from=stagea,source=/var/cache/yum,target=/var/cache/dnf \
     ls -l /var/cache/yum; ls -l /var/cache/dnf; \
     yum makecache --timer; \
     yum -y install \
     which

In the Dockerfile below I managed to fudge the desired result by removing the from and source parameters from the --mount=type=cache and adding an additional COPY --from=stagea so that the first stage isn't skipped. This isn't ideal, as it adds an extra layer and file to the final image.

# syntax = docker/dockerfile:experimental
FROM centos:8 AS stagea

RUN --mount=type=cache,target=/var/cache/yum \
     --mount=type=cache,target=/var/cache/dnf \
     yum makecache --timer \
     && yum -y install \
     which

FROM centos:8 AS stageb

COPY --from=stagea /usr/bin/which /usr/bin/which
RUN --mount=type=cache,target=/var/cache/yum \
     --mount=type=cache,target=/var/cache/dnf \
     ls -l /var/cache/yum; ls -l /var/cache/dnf; \
     yum makecache --timer; \
     yum -y install \
     which

closed time in a day

Maddog2050

issue comment docker/buildx

ERROR merging manifest list

Can't see the error. Maybe try --progress=plain in case it got hidden by some terminal override.

baosong818

comment created time in 3 days

issue comment docker/buildx

allow loading current builder config from a file

Somewhat related: initially I also thought about loading builders from the current working directory, so you could have the configuration tied to the project and not (only) to your home directory. Maybe even make --use work only within the project directory in that case.

errordeveloper

comment created time in 3 days

issue comment docker/buildx

ssh context timed out (workaround: ControlPersist)

Is it just that making a new connection takes longer than the timeout, or is it something else? One thing to try would be to make the timeout configurable.

AkihiroSuda

comment created time in 3 days

issue closed docker/buildx

bake with multiple outputs

The manifest currently has Outputs as a slice:

https://github.com/docker/buildx/blob/f3111bcbef8ce7e3933711358419fa18294b3daf/bake/bake.go#L350

However, trying to set multiple outputs results in an error:

multiple outputs currently unsupported

https://github.com/docker/buildx/blob/6db68d029599c6710a32aa7adcba8e5a344795a7/build/build.go#L356

The message suggests that this may be implemented in the future, but I couldn't find an existing issue, so I'm opening one.

My use-case would be to use something like this:

      "output": [
        "type=image,push=true",
        "type=docker,dest=image.oci"
      ]

I'd like to push to a registry and also store a tarball as a CI artefact.

closed time in 3 days

errordeveloper

issue comment docker/buildx

bake with multiple outputs

moved to https://github.com/moby/buildkit/issues/1555

errordeveloper

comment created time in 3 days

issue opened moby/buildkit

support multiple exporters for build

E.g. allow pushing and loading to a local tarball with the same request.

moved from https://github.com/docker/buildx/issues/316
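For context, a sketch of what the requested invocation might look like (not supported at the time of this issue; buildctl rejects a second --output with the "currently only single Exports can be specified" error quoted later in this feed):

buildctl build \
    --frontend=dockerfile.v0 \
    --local context=. \
    --local dockerfile=. \
    --output type=image,name=docker.io/username/image,push=true \
    --output type=docker,dest=image.tar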

created time in 3 days

issue closed moby/buildkit

Docker exporter - specify multiple tags during build

Hi,

Is it possible to specify more than one name during docker build, like the docker CLI does:

docker build -t name:tag1 -t name:tag2
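If the goal is just multiple tags on one image, buildkit's image exporter accepts a comma-separated name list when the value is quoted so the CSV parser doesn't split it; a sketch with illustrative image names:

buildctl build \
    --frontend=dockerfile.v0 \
    --local context=. \
    --local dockerfile=. \
    --output 'type=image,"name=docker.io/username/image:tag1,docker.io/username/image:tag2",push=true'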

closed time in 3 days

opopops

push event moby/moby

Akihiro Suda

commit sha 97708281eba312f3483a27cbb2bd45cf2eac5661

info: improve "WARNING: Running in rootless-mode without cgroup" The cgroup v2 mode uses systemd driver by default. Suggesting to set exec-opt "native.cgroupdriver=systemd" isn't meaningful. Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>

Tõnis Tiigi

commit sha 2b1bd643107d97ec7058598498cfc0cab4e23846

Merge pull request #41157 from AkihiroSuda/improve-info-warn info: improve "WARNING: Running in rootless-mode without cgroup"

push time in 3 days

PR merged moby/moby

info: improve "WARNING: Running in rootless-mode without cgroup" area/rootless kind/enhancement

The cgroup v2 mode uses systemd driver by default. Suggesting to set exec-opt "native.cgroupdriver=systemd" isn't meaningful.

+2 -2

1 comment

1 changed file

AkihiroSuda

pr closed time in 3 days

pull request comment moby/buildkit

dockerfile: add --chmod support for COPY/ADD command

The current ADD --chown does set the permissions on the archive (before decompressing),

This is not quite correct. The real behavior is that chown/chmod only works for file inputs and not archive inputs. This somewhat makes sense because archives already contain uid info per file (unlike context files, which are normalized to root). It's also a completely separate code path, and historically these flags only existed on COPY, which doesn't support archives.

We can just make it clear in docs that this is the expected behavior.

I don't see a reason not to allow a behavior change here as well. It's not very likely that people today are already using the flags on archive inputs but expecting the current behavior (although there is a case with multiple inputs). But atm, for the --chmod flag we should just keep the behavior consistent with --chown. If someone wants to look into supporting archives, I think that would be an LLB-level change.
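A small Dockerfile sketch of the distinction described above (file names are illustrative):

FROM alpine

# Plain file input: --chown rewrites the ownership of the copied file.
ADD --chown=1000:1000 app.conf /etc/app.conf

# Archive input: the tar is extracted and every entry keeps the uid/gid
# recorded inside the archive; the flag does not rewrite them.
ADD --chown=1000:1000 rootfs.tar /opt/rootfs/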

heychenbin

comment created time in 3 days

issue closed moby/buildkit

dockerfile: experimental ADD/COPY --chmod=...

I know there's a rich history for adding an ADD/COPY --chmod=... flag, similar to the --chown=... flag, to the Dockerfile syntax, but I don't see a feature request here in this repo where it seems to belong. Could we amend the Dockerfile experimental frontend syntax to add a --chmod=0755 style flag to ADD and COPY commands which populates the mode in the associated FileOp (from #809)? I'd be happy to prototype an implementation if there's appetite.

The main use case is building thin images by adding binaries directly from external sources which don't have a correct mode, like:

FROM scratch
ADD --chmod=0755 https://github.com/pganalyze/collector/releases/download/v0.21.0/pganalyze-collector-linux-amd64 /pganalyze-collector
CMD ["/pganalyze-collector"]

Currently, the only ways to really do this are to download the file outside the Dockerfile, set modes, and then copy it in, or to use an intermediate container with a whole lot of extra weight just to run a chmod command and then copy to a subsequent FROM scratch stage.

closed time in 3 days

sj26

issue comment moby/buildkit

dockerfile: experimental ADD/COPY --chmod=...

https://github.com/moby/buildkit/pull/1492

sj26

comment created time in 3 days

issue comment moby/buildkit

Question: auth for use as daemon

answered in https://github.com/moby/buildkit/issues/649#issuecomment-425364795 afaics

alexellis

comment created time in 3 days

issue closed moby/buildkit

Cache can't be exported to Quay.io

Similar to #1143

When exporting cache to quay.io, it fails with

error: failed to solve: rpc error: code = Unknown desc = error writing manifest blob: failed commit on ref "sha256:c2aba47e903ef19d459785c7e5750ef7da0f6f86657d9b40c329d5268dfe2185": unexpected status: 401 Unauthorized

The error is the same with both modes: mode=max or mode=min

buildctl build \
    --progress=plain \
    --frontend=dockerfile.v0 \
    --local context="${context}" \
    --local dockerfile="$(dirname "${dockerfile}")" \
    --opt filename="$(basename "${dockerfile}")" \
    --output "type=image,\"name=${name}\",push=${push}" \
    --export-cache "type=registry,mode=max,ref=${image}:${tag}-buildcache" \
    --import-cache "type=registry,ref=${image}:${tag}-buildcache" \
    "${@}"

When I don't use --export-cache, images are pushed properly to quay.io, so the credentials are correct.

closed time in 4 days

mbarbero

issue comment moby/buildkit

Cache can't be exported to Quay.io

#1550

mbarbero

comment created time in 4 days

push event moby/buildkit

Akihiro Suda

commit sha d954b77f60d5ec02a747f834ff43811af6e30cc1

update runc binary to v1.0.0-rc91 release note: https://github.com/opencontainers/runc/releases/tag/v1.0.0-rc91 vendored library isn't updated in this commit (waiting for containerd to vendor runc rc91) Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>

view details

Tõnis Tiigi

commit sha ddfd87ec1fcb5aff3da34f77623e01da2d1e2193

Merge pull request #1553 from AkihiroSuda/runc-rc91 update runc binary to v1.0.0-rc91

view details

push time in 4 days

PR merged moby/buildkit

update runc binary to v1.0.0-rc91

release note: https://github.com/opencontainers/runc/releases/tag/v1.0.0-rc91

vendored library isn't updated in this commit (waiting for containerd to vendor runc rc91)

+1 -1

0 comments

1 changed file

AkihiroSuda

pr closed time in 4 days

issue comment moby/buildkit

Add `Exec` to the gateway API.

how about

service LLBBridge {
  rpc NewContainer(NewContainerRequest) returns (NewContainerResponse);
  rpc ReleaseContainer(ReleaseContainerRequest) returns (ReleaseContainerResponse);
  rpc ExecProcess(stream ExecMessage) returns (stream ExecMessage);
}

message NewContainerRequest {
  string Ref = 1;
  // For mount input values we can use random identifiers passed with ref
  repeated pb.Mount mounts = 2;
  pb.NetMode network = 3;
  pb.SecurityMode security = 4;
}

message ReleaseContainerRequest {
  string Ref = 1;
}

message ExecMessage {
  oneof input {
    InitMessage init = 1;
    FdMessage file = 2;
    ResizeMessage resize = 3;
    StartedMessage started = 4;
    ExitMessage exit = 5;
  }
}

message InitMessage {
  pb.Meta Meta = 1;
  repeated uint32 fds = 2;
  bool tty = 3;
  // ?? way to control if this is PID1? probably not needed
}

message ExitMessage {
  uint32 Code = 1;
}

message FdMessage {
  uint32 fd = 1; // what fd the data was from
  bool eof = 2;  // true if eof was reached
  bytes data = 3;
}

message ResizeMessage {
  uint32 rows = 1;
  uint32 cols = 2;
  uint32 xpixel = 3;
  uint32 ypixel = 4;
}

I added a "container" concept atm to support multiple exec processes. Not sure if it's needed initially, but probably better to be safe for future ideas. The complex part of this is that it does not allow reusing the current executor directly; e.g. runc needs to be invoked with create/start/exec calls instead of a single run. Or we mark one process as pid1, and then I think we can use run+exec.

Sending pb.Meta inside the giant one-off object is objectively ugly, but this is a gRPC limitation: we can't pass an initial message on streaming endpoints (except via unsafe context metadata).

tonistiigi

comment created time in 4 days

issue closed moby/buildkit

.dockerignore doesn't work if dockerfile path != context path

I'd like to be able to store my Dockerfile & .dockerignore separately from my context, but it doesn't work. Luckily there is a painless workaround, since Dockerfile & Dockerfile.dockerignore works. That being said, I assume the intention is for the default case to work.

docker run \
    -it \
    --rm \
    --privileged \
    -v /path/to/context:/tmp/context \
    -v /path/to/dockerfile:/tmp/dockerfile \
    --entrypoint buildctl-daemonless.sh \
    moby/buildkit:master \
        build \
        --frontend dockerfile.v0 \
        --local context=/tmp/context \
        --local dockerfile=/tmp/dockerfile
Works with context dir contents:
Dockerfile
.dockerignore

Works with context dir contents:
Dockerfile
Dockerfile.dockerignore

Doesn't work with dockerfile dir contents:
Dockerfile
.dockerignore

Works with dockerfile dir contents:
Dockerfile
Dockerfile.dockerignore

closed time in 4 days

mjgallag

issue closed moby/buildkit

Question: does manifest content name matter when create a image

When committing an image, one step is to write the manifest and config content to disk. In buildkit, the blob's ref name is the digest id idxDigest.String().

func (ic *ImageWriter) Commit(ctx context.Context, inp exporter.Source, oci bool) (*ocispec.Descriptor, error) {
...

if err := content.WriteBlob(ctx, ic.opt.ContentStore, idxDigest.String(), bytes.NewReader(idxBytes), idxDesc, content.WithLabels(labels)); err != nil { 
        return nil, idxDone(errors.Wrapf(err, "error writing manifest list blob %s", idxDigest)) 
    }
...

But in containerd, the ref name is created by the function remotes.MakeRefKey, which adds a type prefix like manifest- before the digest id.

ref := remotes.MakeRefKey(ctx, desc)
    if err := content.WriteBlob(ctx, c.contentStore, ref, bytes.NewReader(mb), desc, content.WithLabels(labels)); err != nil {
        return ocispec.Descriptor{}, errors.Wrap(err, "failed to write config")
    }   

I wonder whether this matters when we pull and push images?
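A minimal Go sketch of the naming difference being asked about, using containerd's remotes.MakeRefKey and a digest taken from elsewhere in this feed purely for illustration:

package main

import (
    "context"
    "fmt"

    "github.com/containerd/containerd/remotes"
    ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)

func main() {
    desc := ocispec.Descriptor{
        MediaType: ocispec.MediaTypeImageManifest,
        Digest:    "sha256:c2aba47e903ef19d459785c7e5750ef7da0f6f86657d9b40c329d5268dfe2185",
    }
    // containerd prefixes the ingest ref with the content kind, e.g. "manifest-sha256:...".
    fmt.Println(remotes.MakeRefKey(context.Background(), desc))
    // buildkit's ImageWriter uses the bare digest string as the ref.
    fmt.Println(desc.Digest.String())
}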

closed time in 4 days

Ace-Tang

issue comment moby/buildkit

Question: does manifest content name matter when create a image

Think we can close this. Feel free to still send a PR if you wish to normalize this.

Ace-Tang

comment created time in 4 days

issue closed moby/buildkit

Question: auth for use as daemon

Hi :+1:

When BuildKit is being used as a Daemon in a Kubernetes / Swarm cluster, what is the preferred way to control access?

With Kubernetes a NetworkPolicy may be sufficient to prevent tampering but I can't see an obvious option for this with Swarm / Docker?

Does the BuildKit daemon have any built-in capabilities and/or is there a way to enable something to "protect" the gRPC socket that is open?

Alex

Cc @johnmccabe @tonistiigi @AkihiroSuda

closed time in 4 days

alexellis

issue closed moby/buildkit

always display image hashes

It's tough to debug docker builds when I can't just get into the previously successful intermediate build image and run the next command manually...

docker run -it --rm hash_id bash
# execute the next RUN line here manually.

I would therefore argue that image hashes should always display, just like they do in the current docker.

closed time in 4 days

TrentonAdams

issue comment moby/buildkit

always display image hashes

Could we get a somewhat official answer on which of the following are true, in the context of building with BuildKit:

"There is no image hash for an intermediate cache because it is not exported to the docker image store." is correct. Never doubt @AkihiroSuda

Closing in favor of #1472

TrentonAdams

comment created time in 4 days

issue closed moby/buildkit

[question] seeing some warnings/debug logs when building with BuildKit

More of a question / observation:

Seeing this on 19.03.2 and on master

docker rmi busybox || true
docker system prune -f

DOCKER_BUILDKIT=1 docker build -<<EOF
FROM busybox
RUN echo foo
EOF

Check the logs (full output shown below, but collapsed);


DEBU[2019-10-02T10:00:04.055982081Z] Calling HEAD /_ping                          
DEBU[2019-10-02T10:00:04.057846479Z] Calling POST /v1.40/build?buildargs=%7B%7D&buildid=feb189c98495cc7ed5a9dfcf1bdedd5f9d705909c55a7e5263d1b16848c54e9e&cachefrom=%5B%5D&cgroupparent=&cpuperiod=0&cpuquota=0&cpusetcpus=&cpusetmems=&cpushares=0&dockerfile=Dockerfile&labels=%7B%7D&memory=0&memswap=0&networkmode=default&remote=client-session&rm=1&session=ynd740mh42981436kbd0lpgi3&shmsize=0&target=&ulimits=null&version=2 
DEBU[2019-10-02T10:00:04.057974567Z] Calling POST /session                        
INFO[2019-10-02T10:00:04.058123633Z] parsed scheme: ""                             module=grpc
INFO[2019-10-02T10:00:04.058188908Z] scheme "" not registered, fallback to default scheme  module=grpc
INFO[2019-10-02T10:00:04.058205439Z] ccResolverWrapper: sending update to cc: {[{ 0  <nil>}] <nil>}  module=grpc
INFO[2019-10-02T10:00:04.058221209Z] ClientConn switching balancer to "pick_first"  module=grpc
DEBU[2019-10-02T10:00:04.064362881Z] new ref for local: cgemongemlyl2euat0ll24aw0 
DEBU[2019-10-02T10:00:04.065371854Z] new ref for local: t1f14bd9hjku3n6cq0vsi1i5e 
DEBU[2019-10-02T10:00:04.068363033Z] diffcopy took: 2.878106ms                    
DEBU[2019-10-02T10:00:04.070489423Z] diffcopy took: 5.958962ms                    
DEBU[2019-10-02T10:00:04.072071772Z] saved t1f14bd9hjku3n6cq0vsi1i5e as local.sharedKey:context:context-.dockerignore:a20365f530ee14621cbbe5378c5da4849cefbacc02ecd83ffceee57813bd9d64 
DEBU[2019-10-02T10:00:04.074167141Z] saved cgemongemlyl2euat0ll24aw0 as local.sharedKey:dockerfile:dockerfile:a20365f530ee14621cbbe5378c5da4849cefbacc02ecd83ffceee57813bd9d64 
DEBU[2019-10-02T10:00:04.192578772Z] resolving                                    
DEBU[2019-10-02T10:00:04.192665980Z] do request                                    request.headers="map[Accept:[application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.list.v2+json, application/vnd.oci.image.manifest.v1+json, application/vnd.oci.image.index.v1+json, *] User-Agent:[containerd/1.2.0+unknown]]" request.method=HEAD url="https://registry-1.docker.io/v2/library/busybox/manifests/latest"
DEBU[2019-10-02T10:00:04.589878284Z] fetch response received                       response.headers="map[Content-Length:[158] Content-Type:[application/json] Date:[Wed, 02 Oct 2019 10:00:04 GMT] Docker-Distribution-Api-Version:[registry/2.0] Strict-Transport-Security:[max-age=31536000] Www-Authenticate:[Bearer realm=\"https://auth.docker.io/token\",service=\"registry.docker.io\",scope=\"repository:library/busybox:pull\"]]" status="401 Unauthorized" url="https://registry-1.docker.io/v2/library/busybox/manifests/latest"
DEBU[2019-10-02T10:00:04.590023995Z] Unauthorized                                  header="Bearer realm=\"https://auth.docker.io/token\",service=\"registry.docker.io\",scope=\"repository:library/busybox:pull\""
DEBU[2019-10-02T10:00:05.021950236Z] do request                                    request.headers="map[Accept:[application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.list.v2+json, application/vnd.oci.image.manifest.v1+json, application/vnd.oci.image.index.v1+json, *] User-Agent:[containerd/1.2.0+unknown]]" request.method=HEAD url="https://registry-1.docker.io/v2/library/busybox/manifests/latest"
DEBU[2019-10-02T10:00:05.210394954Z] fetch response received                       response.headers="map[Content-Length:[1864] Content-Type:[application/vnd.docker.distribution.manifest.list.v2+json] Date:[Wed, 02 Oct 2019 10:00:05 GMT] Docker-Content-Digest:[sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e] Docker-Distribution-Api-Version:[registry/2.0] Etag:[\"sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e\"] Strict-Transport-Security:[max-age=31536000]]" status="200 OK" url="https://registry-1.docker.io/v2/library/busybox/manifests/latest"
DEBU[2019-10-02T10:00:05.210471908Z] resolved                                      desc.digest="sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e"
DEBU[2019-10-02T10:00:05.210547865Z] fetch                                         digest="sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e" mediatype=application/vnd.docker.distribution.manifest.list.v2+json size=1864
DEBU[2019-10-02T10:00:05.210754696Z] fetch                                         digest="sha256:dd97a3fe6d721c5cf03abac0f50e2848dc583f7c4e41bf39102ceb42edfd1808" mediatype=application/vnd.docker.distribution.manifest.v2+json size=527
DEBU[2019-10-02T10:00:05.210934362Z] fetch                                         digest="sha256:19485c79a9bbdca205fce4f791efeaa2a103e23431434696cc54fdd939e9198d" mediatype=application/vnd.docker.container.image.v1+json size=1497
DEBU[2019-10-02T10:00:05.212106884Z] fetch                                         digest="sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e" mediatype=application/vnd.docker.distribution.manifest.list.v2+json size=1864
DEBU[2019-10-02T10:00:05.212234162Z] fetch                                         digest="sha256:dd97a3fe6d721c5cf03abac0f50e2848dc583f7c4e41bf39102ceb42edfd1808" mediatype=application/vnd.docker.distribution.manifest.v2+json size=527
DEBU[2019-10-02T10:00:05.212409953Z] fetch                                         digest="sha256:19485c79a9bbdca205fce4f791efeaa2a103e23431434696cc54fdd939e9198d" mediatype=application/vnd.docker.container.image.v1+json size=1497
DEBU[2019-10-02T10:00:05.212980235Z] resolving                                    
DEBU[2019-10-02T10:00:05.213036641Z] do request                                    request.headers="map[Accept:[application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.list.v2+json, application/vnd.oci.image.manifest.v1+json, application/vnd.oci.image.index.v1+json, *] Authorization:[Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCIsIng1YyI6WyJNSUlDK2pDQ0FwK2dBd0lCQWdJQkFEQUtCZ2dxaGtqT1BRUURBakJHTVVRd1FnWURWUVFERXpzeVYwNVpPbFZMUzFJNlJFMUVVanBTU1U5Rk9reEhOa0U2UTFWWVZEcE5SbFZNT2tZelNFVTZOVkF5VlRwTFNqTkdPa05CTmxrNlNrbEVVVEFlRncweE9UQXhNVEl3TURJeU5EVmFGdzB5TURBeE1USXdNREl5TkRWYU1FWXhSREJDQmdOVkJBTVRPMUpMTkZNNlMwRkxVVHBEV0RWRk9rRTJSMVE2VTBwTVR6cFFNbEpMT2tOWlZVUTZTMEpEU0RwWFNVeE1Pa3hUU2xrNldscFFVVHBaVWxsRU1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBcjY2bXkveXpHN21VUzF3eFQ3dFplS2pqRzcvNnBwZFNMY3JCcko5VytwcndzMGtIUDVwUHRkMUpkcFdEWU1OZWdqQXhpUWtRUUNvd25IUnN2ODVUalBUdE5wUkdKVTRkeHJkeXBvWGc4TVhYUEUzL2lRbHhPS2VNU0prNlRKbG5wNGFtWVBHQlhuQXRoQzJtTlR5ak1zdFh2ZmNWN3VFYWpRcnlOVUcyUVdXQ1k1Ujl0a2k5ZG54Z3dCSEF6bG8wTzJCczFmcm5JbmJxaCtic3ZSZ1FxU3BrMWhxYnhSU3AyRlNrL2tBL1gyeUFxZzJQSUJxWFFMaTVQQ3krWERYZElJczV6VG9ZbWJUK0pmbnZaMzRLcG5mSkpNalpIRW4xUVJtQldOZXJZcVdtNVhkQVhUMUJrQU9aditMNFVwSTk3NFZFZ2ppY1JINVdBeWV4b1BFclRRSURBUUFCbzRHeU1JR3ZNQTRHQTFVZER3RUIvd1FFQXdJSGdEQVBCZ05WSFNVRUNEQUdCZ1JWSFNVQU1FUUdBMVVkRGdROUJEdFNTelJUT2t0QlMxRTZRMWcxUlRwQk5rZFVPbE5LVEU4NlVESlNTenBEV1ZWRU9rdENRMGc2VjBsTVREcE1VMHBaT2xwYVVGRTZXVkpaUkRCR0JnTlZIU01FUHpBOWdEc3lWMDVaT2xWTFMxSTZSRTFFVWpwU1NVOUZPa3hITmtFNlExVllWRHBOUmxWTU9rWXpTRVU2TlZBeVZUcExTak5HT2tOQk5sazZTa2xFVVRBS0JnZ3Foa2pPUFFRREFnTkpBREJHQWlFQXFOSXEwMFdZTmM5Z2tDZGdSUzRSWUhtNTRZcDBTa05Rd2lyMm5hSWtGd3dDSVFEMjlYdUl5TmpTa1cvWmpQaFlWWFB6QW9TNFVkRXNvUUhyUVZHMDd1N3ZsUT09Il19.eyJhY2Nlc3MiOlt7InR5cGUiOiJyZXBvc2l0b3J5IiwibmFtZSI6ImxpYnJhcnkvYnVzeWJveCIsImFjdGlvbnMiOlsicHVsbCJdfV0sImF1ZCI6InJlZ2lzdHJ5LmRvY2tlci5pbyIsImV4cCI6MTU3MDAxMDcwNCwiaWF0IjoxNTcwMDEwNDA0LCJpc3MiOiJhdXRoLmRvY2tlci5pbyIsImp0aSI6IkRHWHdRZHdURmQyYTZZZVlFbjhuIiwibmJmIjoxNTcwMDEwMTA0LCJzdWIiOiIifQ.j6w_j4ZnzlPrZc9KiPytAKaSqsvDTOR35fyShnAjITSh0jc4qeWE32MZWKWo6kNk0M8wCHVHRWlufsTJOgd-3d8tiWoU_U2DaIqzzUj8zupG-xg5CiB2CGBotbSD_E8VdQPVgUnaZnX18PC2tKJPPvaBEjcY7teMCfyBV0N1xnpHFzeQtWd7pAE2iDFPUG2x0rlXBckB1wObJhJbzqBkaK4Td0eh5IjslEPs4UwXIUHssn5xECzsEbnOn7zTpBq9FOaOuz7BCC09HX57GOfLgyqQfqqgjRpxH18S98JO8_0ZpaD50Iv8dEl84-kQX36z8KO7fd61J221zY5JFiAy8A] User-Agent:[containerd/1.2.0+unknown]]" request.method=HEAD url="https://registry-1.docker.io/v2/library/busybox/manifests/sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e"
DEBU[2019-10-02T10:00:05.409794381Z] fetch response received                       response.headers="map[Content-Length:[1864] Content-Type:[application/vnd.docker.distribution.manifest.list.v2+json] Date:[Wed, 02 Oct 2019 10:00:05 GMT] Docker-Content-Digest:[sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e] Docker-Distribution-Api-Version:[registry/2.0] Etag:[\"sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e\"] Strict-Transport-Security:[max-age=31536000]]" status="200 OK" url="https://registry-1.docker.io/v2/library/busybox/manifests/sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e"
DEBU[2019-10-02T10:00:05.409931590Z] resolved                                      desc.digest="sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e"
DEBU[2019-10-02T10:00:05.410018359Z] fetch                                         digest="sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e" mediatype=application/vnd.docker.distribution.manifest.list.v2+json size=1864
DEBU[2019-10-02T10:00:05.410301213Z] fetch                                         digest="sha256:dd97a3fe6d721c5cf03abac0f50e2848dc583f7c4e41bf39102ceb42edfd1808" mediatype=application/vnd.docker.distribution.manifest.v2+json size=527
DEBU[2019-10-02T10:00:05.410399636Z] fetch                                         digest="sha256:19485c79a9bbdca205fce4f791efeaa2a103e23431434696cc54fdd939e9198d" mediatype=application/vnd.docker.container.image.v1+json size=1497
DEBU[2019-10-02T10:00:05.418619964Z] do request                                    base="https://registry-1.docker.io/v2/library/busybox" digest="sha256:7c9d20b9b6cda1c58bc4f9d6c401386786f584437abbe87e58910f8a9a15386b" request.headers="map[Accept:[application/vnd.docker.image.rootfs.diff.tar.gzip, *]]" request.method=GET url="https://registry-1.docker.io/v2/library/busybox/blobs/sha256:7c9d20b9b6cda1c58bc4f9d6c401386786f584437abbe87e58910f8a9a15386b"
DEBU[2019-10-02T10:00:05.624773480Z] fetch response received                       base="https://registry-1.docker.io/v2/library/busybox" digest="sha256:7c9d20b9b6cda1c58bc4f9d6c401386786f584437abbe87e58910f8a9a15386b" response.headers="map[Accept-Ranges:[bytes] Age:[618217] Cache-Control:[public, max-age=14400] Cf-Cache-Status:[HIT] Cf-Ray:[51f5d3cb0ac7c775-AMS] Content-Length:[760770] Content-Type:[application/octet-stream] Date:[Wed, 02 Oct 2019 10:00:05 GMT] Etag:[\"4166ef0ced6549afb3ac160752b5636d\"] Expect-Ct:[max-age=604800, report-uri=\"https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct\"] Expires:[Wed, 02 Oct 2019 14:00:05 GMT] Last-Modified:[Wed, 04 Sep 2019 19:20:59 GMT] Server:[cloudflare] Set-Cookie:[__cfduid=d8ec370019f68f455aa3ef30944bfd0761570010405; expires=Thu, 01-Oct-20 10:00:05 GMT; path=/; domain=.production.cloudflare.docker.com; HttpOnly; Secure] Vary:[Accept-Encoding] X-Amz-Id-2:[Pmnorq/zIjCh+48lEOUWXI+/UcnyE4/s7TkyZHjdQa4caRdBfxCcsZzzrCoZot1D7RCSFn7/MN8=] X-Amz-Request-Id:[BCDC641C093E62B5] X-Amz-Version-Id:[FWSiFqYMfN_YqV4cGb1Cvkg3XulaRwOo]]" status="200 OK" url="https://registry-1.docker.io/v2/library/busybox/blobs/sha256:7c9d20b9b6cda1c58bc4f9d6c401386786f584437abbe87e58910f8a9a15386b"
DEBU[2019-10-02T10:00:05.757338074Z] Applying tar in /var/lib/docker/overlay2/a16b208e233721206fc40483b845fb142d6b9015307dea00d998674423bfa566/diff  storage-driver=overlay2
DEBU[2019-10-02T10:00:05.921026094Z] Applied tar sha256:6c0ea40aef9d2795f922f4e8642f0cd9ffb9404e6f3214693a1fd45489f38b44 to a16b208e233721206fc40483b845fb142d6b9015307dea00d998674423bfa566, size: 1219782 
DEBU[2019-10-02T10:00:05.945451158Z] Assigning addresses for endpoint unsrd7o9dv1ez1yl20qsdd7yr's interface on network bridge 
DEBU[2019-10-02T10:00:05.945537462Z] RequestAddress(LocalDefault/172.18.0.0/16, <nil>, map[]) 
DEBU[2019-10-02T10:00:05.945581791Z] Request address PoolID:172.18.0.0/16 App: ipam/default/data, ID: LocalDefault/172.18.0.0/16, DBIndex: 0x0, Bits: 65536, Unselected: 65532, Sequence: (0xe0000000, 1)->(0x0, 2046)->(0x1, 1)->end Curr:4 Serial:false PrefAddress:<nil>  
DEBU[2019-10-02T10:00:05.953181299Z] Assigning addresses for endpoint unsrd7o9dv1ez1yl20qsdd7yr's interface on network bridge 
DEBU[2019-10-02T10:00:05.953306432Z] e0cf451d4d0a419a9e1be556dab4358826e33dc5121305ad478be5534fb02add (61a5565).addSvcRecords(unsrd7o9dv1ez1yl20qsdd7yr, 172.18.0.3, <nil>, true) updateSvcRecord sid:e0cf451d4d0a419a9e1be556dab4358826e33dc5121305ad478be5534fb02add 
DEBU[2019-10-02T10:00:05.956655695Z] e0cf451d4d0a419a9e1be556dab4358826e33dc5121305ad478be5534fb02add (61a5565).addSvcRecords(unsrd7o9dv1ez1yl20qsdd7yr, 172.18.0.3, <nil>, true) updateSvcRecord sid:e0cf451d4d0a419a9e1be556dab4358826e33dc5121305ad478be5534fb02add 
DEBU[2019-10-02T10:00:05.958315916Z] Programming external connectivity on endpoint unsrd7o9dv1ez1yl20qsdd7yr (e0cf451d4d0a419a9e1be556dab4358826e33dc5121305ad478be5534fb02add) 
DEBU[2019-10-02T10:00:05.960006365Z] > creating 4oit3gsfctj4u0zv9k3jc8m2o [/bin/sh -c echo foo] 
DEBU[2019-10-02T10:00:06.268309040Z] sandbox set key processing took 202.531481ms for container unsrd7o9dv1ez1yl20qsdd7yr 
DEBU[2019-10-02T10:00:06.680168188Z] Revoking external connectivity on endpoint unsrd7o9dv1ez1yl20qsdd7yr (e0cf451d4d0a419a9e1be556dab4358826e33dc5121305ad478be5534fb02add) 
DEBU[2019-10-02T10:00:06.680945335Z] DeleteConntrackEntries purged ipv4:0, ipv6:0 
DEBU[2019-10-02T10:00:06.690674253Z] could not get checksum for "x128nsj79yzfx4j5h6em2w2on" with tar-split: "no tar-split file" 
DEBU[2019-10-02T10:00:06.690956506Z] Tar with options on /var/lib/docker/overlay2/x128nsj79yzfx4j5h6em2w2on/diff  storage-driver=overlay2
WARN[2019-10-02T10:00:06.717289104Z] grpc: addrConn.createTransport failed to connect to { 0  <nil>}. Err :connection error: desc = "transport: Error while dialing only one connection allowed". Reconnecting...  module=grpc
DEBU[2019-10-02T10:00:06.803332661Z] e0cf451d4d0a419a9e1be556dab4358826e33dc5121305ad478be5534fb02add (61a5565).deleteSvcRecords(unsrd7o9dv1ez1yl20qsdd7yr, 172.18.0.3, <nil>, true) updateSvcRecord sid:e0cf451d4d0a419a9e1be556dab4358826e33dc5121305ad478be5534fb02add  
DEBU[2019-10-02T10:00:06.944088856Z] Releasing addresses for endpoint unsrd7o9dv1ez1yl20qsdd7yr's interface on network bridge 
DEBU[2019-10-02T10:00:06.944149633Z] ReleaseAddress(LocalDefault/172.18.0.0/16, 172.18.0.3) 
DEBU[2019-10-02T10:00:06.944171514Z] Released address PoolID:LocalDefault/172.18.0.0/16, Address:172.18.0.3 Sequence:App: ipam/default/data, ID: LocalDefault/172.18.0.0/16, DBIndex: 0x0, Bits: 65536, Unselected: 65531, Sequence: (0xf0000000, 1)->(0x0, 2046)->(0x1, 1)->end Curr:4 


These entries stood out to me.

Wondered about this warning; is this a configuration issue in our code?

WARN[2019-10-02T10:00:06.717289104Z] grpc: addrConn.createTransport failed to connect to { 0  <nil>}. Err :connection error: desc = "transport: Error while dialing only one connection allowed". Reconnecting...  module=grpc

This one is logged as "DEBUG", so perhaps not important. The "could not get checksum" part stood out to me though, so I'm wondering if it's indeed expected, and if it is, perhaps we should add some extra logging (e.g. "falling back to ...").

DEBU[2019-10-02T10:00:06.690674253Z] could not get checksum for "x128nsj79yzfx4j5h6em2w2on" with tar-split: "no tar-split file" 

Note that the above log entry seems to relate to the RUN step; without the RUN step, the "could not get checksum" doesn't occur;

docker rmi busybox || true
docker system prune -f

DOCKER_BUILDKIT=1 docker build -<<EOF
FROM busybox
EOF


DEBU[2019-10-02T10:09:38.789184587Z] Calling HEAD /_ping                          
DEBU[2019-10-02T10:09:38.791206291Z] Calling POST /session                        
DEBU[2019-10-02T10:09:38.791301150Z] Calling POST /v1.40/build?buildargs=%7B%7D&buildid=c8fbee5fa752172a7bd2f5a8a45a2afdcf169383a944daf338de6561b9fd4e98&cachefrom=%5B%5D&cgroupparent=&cpuperiod=0&cpuquota=0&cpusetcpus=&cpusetmems=&cpushares=0&dockerfile=Dockerfile&labels=%7B%7D&memory=0&memswap=0&networkmode=default&remote=client-session&rm=1&session=pl08wxmjetn49t7es4dkyowzo&shmsize=0&target=&ulimits=null&version=2 
INFO[2019-10-02T10:09:38.791336961Z] parsed scheme: ""                             module=grpc
INFO[2019-10-02T10:09:38.791402489Z] scheme "" not registered, fallback to default scheme  module=grpc
INFO[2019-10-02T10:09:38.791419447Z] ccResolverWrapper: sending update to cc: {[{ 0  <nil>}] <nil>}  module=grpc
INFO[2019-10-02T10:09:38.791447313Z] ClientConn switching balancer to "pick_first"  module=grpc
DEBU[2019-10-02T10:09:38.797677006Z] new ref for local: psx5i2dg0qe9sdq6qbg01u951 
DEBU[2019-10-02T10:09:38.799469722Z] new ref for local: 3cdm3kmp3x1viqtrbwp7thzyz 
DEBU[2019-10-02T10:09:38.803893751Z] diffcopy took: 4.224827ms                    
DEBU[2019-10-02T10:09:38.805578599Z] diffcopy took: 7.686343ms                    
DEBU[2019-10-02T10:09:38.806876249Z] saved 3cdm3kmp3x1viqtrbwp7thzyz as local.sharedKey:dockerfile:dockerfile:a20365f530ee14621cbbe5378c5da4849cefbacc02ecd83ffceee57813bd9d64 
DEBU[2019-10-02T10:09:38.809041305Z] saved psx5i2dg0qe9sdq6qbg01u951 as local.sharedKey:context:context-.dockerignore:a20365f530ee14621cbbe5378c5da4849cefbacc02ecd83ffceee57813bd9d64 
DEBU[2019-10-02T10:09:38.918243562Z] resolving                                    
DEBU[2019-10-02T10:09:38.918396160Z] do request                                    request.headers="map[Accept:[application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.list.v2+json, application/vnd.oci.image.manifest.v1+json, application/vnd.oci.image.index.v1+json, *] User-Agent:[containerd/1.2.0+unknown]]" request.method=HEAD url="https://registry-1.docker.io/v2/library/busybox/manifests/latest"
DEBU[2019-10-02T10:09:39.326196585Z] fetch response received                       response.headers="map[Content-Length:[158] Content-Type:[application/json] Date:[Wed, 02 Oct 2019 10:09:39 GMT] Docker-Distribution-Api-Version:[registry/2.0] Strict-Transport-Security:[max-age=31536000] Www-Authenticate:[Bearer realm=\"https://auth.docker.io/token\",service=\"registry.docker.io\",scope=\"repository:library/busybox:pull\"]]" status="401 Unauthorized" url="https://registry-1.docker.io/v2/library/busybox/manifests/latest"
DEBU[2019-10-02T10:09:39.326269426Z] Unauthorized                                  header="Bearer realm=\"https://auth.docker.io/token\",service=\"registry.docker.io\",scope=\"repository:library/busybox:pull\""
DEBU[2019-10-02T10:09:39.747936008Z] do request                                    request.headers="map[Accept:[application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.list.v2+json, application/vnd.oci.image.manifest.v1+json, application/vnd.oci.image.index.v1+json, *] User-Agent:[containerd/1.2.0+unknown]]" request.method=HEAD url="https://registry-1.docker.io/v2/library/busybox/manifests/latest"
DEBU[2019-10-02T10:09:39.897229617Z] fetch response received                       response.headers="map[Content-Length:[1864] Content-Type:[application/vnd.docker.distribution.manifest.list.v2+json] Date:[Wed, 02 Oct 2019 10:09:39 GMT] Docker-Content-Digest:[sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e] Docker-Distribution-Api-Version:[registry/2.0] Etag:[\"sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e\"] Strict-Transport-Security:[max-age=31536000]]" status="200 OK" url="https://registry-1.docker.io/v2/library/busybox/manifests/latest"
DEBU[2019-10-02T10:09:39.897328087Z] resolved                                      desc.digest="sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e"
DEBU[2019-10-02T10:09:39.897380651Z] fetch                                         digest="sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e" mediatype=application/vnd.docker.distribution.manifest.list.v2+json size=1864
DEBU[2019-10-02T10:09:39.897656009Z] fetch                                         digest="sha256:dd97a3fe6d721c5cf03abac0f50e2848dc583f7c4e41bf39102ceb42edfd1808" mediatype=application/vnd.docker.distribution.manifest.v2+json size=527
DEBU[2019-10-02T10:09:39.897905589Z] fetch                                         digest="sha256:19485c79a9bbdca205fce4f791efeaa2a103e23431434696cc54fdd939e9198d" mediatype=application/vnd.docker.container.image.v1+json size=1497
DEBU[2019-10-02T10:09:39.899053825Z] fetch                                         digest="sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e" mediatype=application/vnd.docker.distribution.manifest.list.v2+json size=1864
DEBU[2019-10-02T10:09:39.899231759Z] fetch                                         digest="sha256:dd97a3fe6d721c5cf03abac0f50e2848dc583f7c4e41bf39102ceb42edfd1808" mediatype=application/vnd.docker.distribution.manifest.v2+json size=527
DEBU[2019-10-02T10:09:39.899355095Z] fetch                                         digest="sha256:19485c79a9bbdca205fce4f791efeaa2a103e23431434696cc54fdd939e9198d" mediatype=application/vnd.docker.container.image.v1+json size=1497
DEBU[2019-10-02T10:09:39.899862619Z] resolving                                    
DEBU[2019-10-02T10:09:39.899921121Z] do request                                    request.headers="map[Accept:[application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.list.v2+json, application/vnd.oci.image.manifest.v1+json, application/vnd.oci.image.index.v1+json, *] Authorization:[Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCIsIng1YyI6WyJNSUlDK2pDQ0FwK2dBd0lCQWdJQkFEQUtCZ2dxaGtqT1BRUURBakJHTVVRd1FnWURWUVFERXpzeVYwNVpPbFZMUzFJNlJFMUVVanBTU1U5Rk9reEhOa0U2UTFWWVZEcE5SbFZNT2tZelNFVTZOVkF5VlRwTFNqTkdPa05CTmxrNlNrbEVVVEFlRncweE9UQXhNVEl3TURJeU5EVmFGdzB5TURBeE1USXdNREl5TkRWYU1FWXhSREJDQmdOVkJBTVRPMUpMTkZNNlMwRkxVVHBEV0RWRk9rRTJSMVE2VTBwTVR6cFFNbEpMT2tOWlZVUTZTMEpEU0RwWFNVeE1Pa3hUU2xrNldscFFVVHBaVWxsRU1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBcjY2bXkveXpHN21VUzF3eFQ3dFplS2pqRzcvNnBwZFNMY3JCcko5VytwcndzMGtIUDVwUHRkMUpkcFdEWU1OZWdqQXhpUWtRUUNvd25IUnN2ODVUalBUdE5wUkdKVTRkeHJkeXBvWGc4TVhYUEUzL2lRbHhPS2VNU0prNlRKbG5wNGFtWVBHQlhuQXRoQzJtTlR5ak1zdFh2ZmNWN3VFYWpRcnlOVUcyUVdXQ1k1Ujl0a2k5ZG54Z3dCSEF6bG8wTzJCczFmcm5JbmJxaCtic3ZSZ1FxU3BrMWhxYnhSU3AyRlNrL2tBL1gyeUFxZzJQSUJxWFFMaTVQQ3krWERYZElJczV6VG9ZbWJUK0pmbnZaMzRLcG5mSkpNalpIRW4xUVJtQldOZXJZcVdtNVhkQVhUMUJrQU9aditMNFVwSTk3NFZFZ2ppY1JINVdBeWV4b1BFclRRSURBUUFCbzRHeU1JR3ZNQTRHQTFVZER3RUIvd1FFQXdJSGdEQVBCZ05WSFNVRUNEQUdCZ1JWSFNVQU1FUUdBMVVkRGdROUJEdFNTelJUT2t0QlMxRTZRMWcxUlRwQk5rZFVPbE5LVEU4NlVESlNTenBEV1ZWRU9rdENRMGc2VjBsTVREcE1VMHBaT2xwYVVGRTZXVkpaUkRCR0JnTlZIU01FUHpBOWdEc3lWMDVaT2xWTFMxSTZSRTFFVWpwU1NVOUZPa3hITmtFNlExVllWRHBOUmxWTU9rWXpTRVU2TlZBeVZUcExTak5HT2tOQk5sazZTa2xFVVRBS0JnZ3Foa2pPUFFRREFnTkpBREJHQWlFQXFOSXEwMFdZTmM5Z2tDZGdSUzRSWUhtNTRZcDBTa05Rd2lyMm5hSWtGd3dDSVFEMjlYdUl5TmpTa1cvWmpQaFlWWFB6QW9TNFVkRXNvUUhyUVZHMDd1N3ZsUT09Il19.eyJhY2Nlc3MiOlt7InR5cGUiOiJyZXBvc2l0b3J5IiwibmFtZSI6ImxpYnJhcnkvYnVzeWJveCIsImFjdGlvbnMiOlsicHVsbCJdfV0sImF1ZCI6InJlZ2lzdHJ5LmRvY2tlci5pbyIsImV4cCI6MTU3MDAxMTI3OSwiaWF0IjoxNTcwMDEwOTc5LCJpc3MiOiJhdXRoLmRvY2tlci5pbyIsImp0aSI6IlA5c3FsYTEtUlRaYl9yWjc0c3RKIiwibmJmIjoxNTcwMDEwNjc5LCJzdWIiOiIifQ.accohigXUIcwdXsGSrdhDEzgA_mViM27H1y7-S08HyZXfO6hlCb-n_Q-hWX6465-AQDq6CYkJvZPlGQweHeTgJ6IFrYE8fcwZP9XDEZVCwk2guHEgXIyW4Ah9hC2xG8yBkDtU6kXALz6wDix4W9v_eoabCMTUGiNWrcIY2dc4Q2W1zDNZr3Oq-sy2JON1p0vBzf74qCkAKXPBXEbseopmVwJTXcyFkYOkilgzRcX-s-dck8G1l769PYZm-X8mVf6WZPhMcW3jACWz6q2LFayUaTnTcUUn_dvFT15x3aXt9skVikTEvYc9FbrFDvPyq_YjLyH4DiYMfISx7lyV5ebuA] User-Agent:[containerd/1.2.0+unknown]]" request.method=HEAD url="https://registry-1.docker.io/v2/library/busybox/manifests/sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e"
DEBU[2019-10-02T10:09:40.064996129Z] fetch response received                       response.headers="map[Content-Length:[1864] Content-Type:[application/vnd.docker.distribution.manifest.list.v2+json] Date:[Wed, 02 Oct 2019 10:09:39 GMT] Docker-Content-Digest:[sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e] Docker-Distribution-Api-Version:[registry/2.0] Etag:[\"sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e\"] Strict-Transport-Security:[max-age=31536000]]" status="200 OK" url="https://registry-1.docker.io/v2/library/busybox/manifests/sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e"
DEBU[2019-10-02T10:09:40.065108676Z] resolved                                      desc.digest="sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e"
DEBU[2019-10-02T10:09:40.065182653Z] fetch                                         digest="sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e" mediatype=application/vnd.docker.distribution.manifest.list.v2+json size=1864
DEBU[2019-10-02T10:09:40.065458168Z] fetch                                         digest="sha256:dd97a3fe6d721c5cf03abac0f50e2848dc583f7c4e41bf39102ceb42edfd1808" mediatype=application/vnd.docker.distribution.manifest.v2+json size=527
DEBU[2019-10-02T10:09:40.065576472Z] fetch                                         digest="sha256:19485c79a9bbdca205fce4f791efeaa2a103e23431434696cc54fdd939e9198d" mediatype=application/vnd.docker.container.image.v1+json size=1497
DEBU[2019-10-02T10:09:40.075623606Z] do request                                    base="https://registry-1.docker.io/v2/library/busybox" digest="sha256:7c9d20b9b6cda1c58bc4f9d6c401386786f584437abbe87e58910f8a9a15386b" request.headers="map[Accept:[application/vnd.docker.image.rootfs.diff.tar.gzip, *]]" request.method=GET url="https://registry-1.docker.io/v2/library/busybox/blobs/sha256:7c9d20b9b6cda1c58bc4f9d6c401386786f584437abbe87e58910f8a9a15386b"
DEBU[2019-10-02T10:09:40.281150925Z] fetch response received                       base="https://registry-1.docker.io/v2/library/busybox" digest="sha256:7c9d20b9b6cda1c58bc4f9d6c401386786f584437abbe87e58910f8a9a15386b" response.headers="map[Accept-Ranges:[bytes] Age:[618792] Cache-Control:[public, max-age=14400] Cf-Cache-Status:[HIT] Cf-Ray:[51f5e1d28fd8d915-AMS] Content-Length:[760770] Content-Type:[application/octet-stream] Date:[Wed, 02 Oct 2019 10:09:40 GMT] Etag:[\"4166ef0ced6549afb3ac160752b5636d\"] Expect-Ct:[max-age=604800, report-uri=\"https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct\"] Expires:[Wed, 02 Oct 2019 14:09:40 GMT] Last-Modified:[Wed, 04 Sep 2019 19:20:59 GMT] Server:[cloudflare] Set-Cookie:[__cfduid=dea69cbf25e3a29d5e5ea13f0d5282c911570010980; expires=Thu, 01-Oct-20 10:09:40 GMT; path=/; domain=.production.cloudflare.docker.com; HttpOnly; Secure] Vary:[Accept-Encoding] X-Amz-Id-2:[Pmnorq/zIjCh+48lEOUWXI+/UcnyE4/s7TkyZHjdQa4caRdBfxCcsZzzrCoZot1D7RCSFn7/MN8=] X-Amz-Request-Id:[BCDC641C093E62B5] X-Amz-Version-Id:[FWSiFqYMfN_YqV4cGb1Cvkg3XulaRwOo]]" status="200 OK" url="https://registry-1.docker.io/v2/library/busybox/blobs/sha256:7c9d20b9b6cda1c58bc4f9d6c401386786f584437abbe87e58910f8a9a15386b"
DEBU[2019-10-02T10:09:40.419049106Z] Applying tar in /var/lib/docker/overlay2/23affe92dc14f25734884f31d0844312f129ca15dfae7a51854c67a709584bae/diff  storage-driver=overlay2
DEBU[2019-10-02T10:09:40.601672430Z] Applied tar sha256:6c0ea40aef9d2795f922f4e8642f0cd9ffb9404e6f3214693a1fd45489f38b44 to 23affe92dc14f25734884f31d0844312f129ca15dfae7a51854c67a709584bae, size: 1219782 
WARN[2019-10-02T10:09:40.631686795Z] grpc: addrConn.createTransport failed to connect to { 0  <nil>}. Err :connection error: desc = "transport: Error while dialing only one connection allowed". Reconnecting...  module=grpc


closed time in 4 days

thaJeztah

issue closed moby/buildkit

Add documentation (/support?) changing dockerfile name with dockerfile frontend

I'm failing to use a differently named dockerfile (other.dockerfile). I tried different flags from reading the code, but all attempts failed:

Command:

buildctl-daemonless.sh build --progress=plain \
      --frontend=dockerfile.v0 ...

--local dockerfile=. --local filename=other.dockerfile
-> error: other.dockerfile not a directory

--local dockerfile=other.dockerfile
-> error: other.dockerfile not a directory

--local dockerfile=. --local dockerfilekey=other.dockerfile
-> error: other.dockerfile not a directory

--local dockerfile=. --local defaultDockerfileName=other.dockerfile
-> error: other.dockerfile not a directory

Using moby/buildkit:latest - buildctl github.com/moby/buildkit v0.6.3 928f3b480d7460aacb401f68610058ffdb549aca
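For reference, the dockerfile frontend takes the file name as a frontend option rather than as a separate local source; a sketch modeled on the buildctl invocation quoted earlier in this feed (paths illustrative):

buildctl-daemonless.sh build --progress=plain \
    --frontend=dockerfile.v0 \
    --local context=. \
    --local dockerfile=. \
    --opt filename=other.dockerfile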

closed time in 4 days

TeNNoX

issue closed moby/buildkit

Are cache mounts shared across base images?

We just ran into an issue that seems to indicate that cache mounts are shared across different base images. Is that the case?

Here's the stacktrace from running a test in a python:3.8.3-slim-buster image:

Traceback (most recent call last):
  File \"myapp.py\", line 27, in main
    config = json.loads(id_, object_hook=my_json.loadTimestamps)    
  File \"/usr/local/lib/python3.8/json/__init__.py\", line 370, in loads
    return cls(**kw).decode(s)
  File \"/usr/local/lib/python3.8/json/decoder.py\", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File \"/usr/local/lib/python3.8/json/decoder.py\", line 355, in raw_decode
    raise JSONDecodeError(\"Expecting value\", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File \"/usr/src/app/myapp.py\", line 96, in _init
    connector.init(source_api,
  File \"/usr/src/app/myapp/remote_sql.py\", line 225, in init
    with RemoteSqlEngine(ext, source_options, source_auth) as engine:
  File \"/usr/src/app/myapp/remote_sql.py\", line 149, in __init__
    engine = create_engine(self.create_conn(ext, source_options, source_auth['password']), connect_args=connect_args,
  File \"/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/__init__.py\", line 479, in create_engine
    return strategy.create(*args, **kwargs)
  File \"/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/strategies.py\", line 87, in create
    dbapi = dialect_cls.dbapi(**dbapi_args)
  File \"/usr/local/lib/python3.8/site-packages/sqlalchemy/dialects/postgresql/psycopg2.py\", line 737, in dbapi
    import psycopg2
  File \"/usr/local/lib/python3.8/site-packages/psycopg2/__init__.py\", line 50, in <module>
    from psycopg2._psycopg import (                     # noqa
ImportError: libc.musl-x86_64.so.1: cannot open shared object file: No such file or directory"}

Notice that it's trying to load musl, but that should only be in the python:3.8.3-alpine3.11 images that we build for another image.
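One likely explanation, sketched below: cache mounts are keyed by their id, which defaults to the target path, so stages (or images) that mount the same target share one cache. Giving each base image family its own id keeps them separate; package and stage names here are only illustrative.

# syntax = docker/dockerfile:experimental
FROM python:3.8.3-slim-buster AS glibc-build
RUN --mount=type=cache,id=pip-debian,target=/root/.cache/pip \
    pip install sqlalchemy

FROM python:3.8.3-alpine3.11 AS musl-build
RUN --mount=type=cache,id=pip-alpine,target=/root/.cache/pip \
    pip install sqlalchemy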

closed time in 4 days

daveisfera

issue closed moby/buildkit

Base image no longer pulled/tagged

With standard docker build, the base image was pulled and tagged, so you could easily see the list of base images and then pull updates on a schedule. With buildkit this doesn't happen, and I believe this is intentional (because intermediate layers aren't tracked anymore), but is there a way to list the base images that are on a machine so they can be pulled to update them on a schedule?

closed time in 4 days

daveisfera

issue comment moby/buildkit

Base image no longer pulled/tagged

So is there a way to get the list of base images that were pulled or that are currently available from buildkit?

No, buildkit does not track/store them as images. It just pulls layers and tracks them as build cache.

daveisfera

comment created time in 4 days

issue closed moby/buildkit

How to use cache mount for running process?

This page documents how to use a cache mount when building an image, but how can you use that same cache mount when running a process?

closed time in 4 days

daveisfera

issue closed moby/buildkit

multiple export point

Hi,

Does buildkit support multiple output points, something like the lines below?

--output type=image,name=prod.docker.io/username/image,push=true \
--output type=image,name=test.docker.io/username/image,push=true

When I try to do add multiple outputs it gives the error.

error: currently only single Exports can be specified

Is there any other way to do that?

--output type=image," name=prod.docker.io/abc:3, name=test.docker.io/abc:3"

exports only prod.docker.io

Thanks

closed time in 4 days

kenotsolutions

issue closed moby/buildkit

Unable to `go get` locally

Hey guys,

I wanted to play around with LLB, BuildKit, and frontends. Unfortunately, I've failed at the very beginning:

$ go mod init my-fancy-project
go: creating new go.mod: module my-fancy-project
$ go get github.com/moby/buildkit/client/llb
go: found github.com/moby/buildkit/client/llb in github.com/moby/buildkit v0.7.1
go get: github.com/moby/buildkit@v0.7.1 requires
	github.com/containerd/containerd@v1.4.0-0: reading github.com/containerd/containerd/go.mod at revision v1.4.0-0: unknown revision v1.4.0-0

Unfortunately, it also fails for any v0.7.x and v0.6.x versions. Fails on master (v0.7.1-0.20200623231744-95010be66d7f), too.

Any ideas how to fix it?
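A common workaround, sketched here under the assumption that the replace rules mirror the ones in buildkit's own go.mod (the containerd version below is illustrative and should be copied from the go.mod of the buildkit tag you depend on):

module my-fancy-project

go 1.14

require github.com/moby/buildkit v0.7.1

// Without a replace, Go tries to resolve the v1.4.0-0 pseudo-version that
// buildkit's go.mod references and fails; replacing containerd wholesale
// sidesteps that lookup.
replace github.com/containerd/containerd => github.com/containerd/containerd v1.3.4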

closed time in 4 days

maciej-gol

pull request comment moby/buildkit

session: track sessions with a group construct

Found an issue with ResolveImageConfig and the Resolver cache that needs some refactoring.

tonistiigi

comment created time in 4 days

pull request comment moby/buildkit

session: track sessions with a group construct

So to be clear, currently when multiple sessions share a vertex, if one session drops the solve is cancelled for everyone?

If a session drops while it is being used and it was the session that was randomly chosen for the op, then the whole op fails. The race window is actually quite small; even my reproducer isn't very effective.

A somewhat more interesting case is also what happens to the puller. We reuse the resolver to avoid making new registry connections, but this means CacheKey and Exec may have a different active session for getting the credentials. So there is a need to "update" the resolver with the new authentication backend.

I looked closely at the changes, and it LGTM. I can see how various functions that depend on sessions can now choose Any of the sessions in the session.Group.

Yes, and Any automatically retries the next session, should the previous one fail.

I didn't see any cases where multiple sessions are added to a group, but I think that's what you alluded to in the PR comment above.

Multiple sessions are added to the group automatically by the jobs mechanism. This existed before. The difference is that when passing to ops, previously the solver would pick a random one and pass it as a string; now it passes a callback (masked in the Group interface) so the op can get access to all the valid sessions when needed.

tonistiigi

comment created time in 4 days

Pull request review comment moby/buildkit

session: track sessions with a group construct

 func (g *cacheRefGetter) getRefCacheDirNoCache(ctx context.Context, key string,
 	return mRef, nil
 }
 
-func (e *execOp) getSSHMountable(ctx context.Context, m *pb.Mount) (cache.Mountable, error) {
-	sessionID := session.FromContext(ctx)
-	if sessionID == "" {
-		return nil, errors.New("could not access local files without session")
-	}
-
-	timeoutCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
-	defer cancel()
-
-	caller, err := e.sm.Get(timeoutCtx, sessionID)
-	if err != nil {
-		return nil, err
-	}
-
-	if err := sshforward.CheckSSHID(ctx, caller, m.SSHOpt.ID); err != nil {
-		if m.SSHOpt.Optional {
-			return nil, nil
-		}
-		if grpcerrors.Code(err) == codes.Unimplemented {
-			return nil, errors.Errorf("no SSH key %q forwarded from the client", m.SSHOpt.ID)
+func (e *execOp) getSSHMountable(ctx context.Context, m *pb.Mount, g session.Group) (cache.Mountable, error) {
+	var caller session.Caller
+	err := e.sm.Any(ctx, g, func(ctx context.Context, _ string, c session.Caller) error {
+		if err := sshforward.CheckSSHID(ctx, c, m.SSHOpt.ID); err != nil {
+			if m.SSHOpt.Optional {
+				return nil
+			}
+			if grpcerrors.Code(err) == codes.Unimplemented {
+				return errors.Errorf("no SSH key %q forwarded from the client", m.SSHOpt.ID)
+			}
+			return err
 		}
+		caller = c
+		return nil
+	})
+	if err != nil {
 		return nil, err
 	}
-
+	// because ssh socket remains active, to actually handle session disconnecting ssh error
+	// should restart the whole exec with new session

If we have 2 builds both running the same exec() with ssh mounted, the ssh socket remains active for the whole duration of the exec. So if one session goes away, there is no way to switch this ssh socket to another session, as it might be in an unknown state. Atm we ignore this and only validate that the session works when the exec starts. But if we wanted to handle this case, then when the ssh connection drops we could just restart the whole exec with the new session. If you look at the "local source" implementation now, this is what I do there: if a transfer fails, it will check whether we can attempt a new transfer from another session before failing the build.

tonistiigi

comment created time in 4 days

Pull request review commentmoby/buildkit

session: track sessions with a group construct

+package session
+
+import (
+	"context"
+	"time"
+
+	"github.com/pkg/errors"
+)
+
+type Group interface {
+	SessionIterator() Iterator
+}
+type Iterator interface {
+	NextSession() string
+}
+
+func NewGroup(ids ...string) Group {

Vertexes use their own implementation of Group, defined in jobs.go. This is the group passed to ops, subbuilds, etc.

tonistiigi

comment created time in 4 days

Pull request review commentmoby/buildkit

session: track sessions with a group construct

 func ResolveCacheImporterFunc(sm *session.Manager) remotecache.ResolveCacheImpor
 	}
 }

-func getContentStore(ctx context.Context, sm *session.Manager, storeID string) (content.Store, error) {
-	sessionID := session.FromContext(ctx)
+func getContentStore(ctx context.Context, sm *session.Manager, g session.Group, storeID string) (content.Store, error) {
+	// TODO: to ensure correct session is detected, new api for finding if storeID is supported is needed

When there are multiple sessions, the daemon could send a "detect" request to find out which one supports the current storeID. If a specific session does not know about the storeID, the next one is tried.

tonistiigi

comment created time in 4 days

issue commentmoby/buildkit

[Bug] --export-cache is generating a malformed v2 schema manifest. Missing platform property

What would setting it to true do in this context?

It would just replace the "docker" string in the mediatype values with "oci". No changes to the actual objects. We don't set it by default so that more registries that don't know about OCI are supported. This would allow us to be compatible with the spec (by switching the spec document). Looks like the spec docs from 2016 do not explicitly mark the platform field as optional, although all the implementations in docker/distribution and Hub have always treated it that way. The pattern of using descriptor lists like this is not unique to buildkit; the same thing is used by cnab-oci, containerd snapshots, etc.
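For reference, the swap is only in the media type strings; the descriptors themselves stay the same. A rough sketch of the Docker-to-OCI correspondence (written from memory of the registered media types, not lifted from the buildkit source):

```go
package main

import "fmt"

// dockerToOCI shows which media type strings an oci-mediatypes=true style
// option would substitute; the referenced objects are unchanged.
var dockerToOCI = map[string]string{
	"application/vnd.docker.distribution.manifest.v2+json":      "application/vnd.oci.image.manifest.v1+json",
	"application/vnd.docker.distribution.manifest.list.v2+json": "application/vnd.oci.image.index.v1+json",
	"application/vnd.docker.container.image.v1+json":            "application/vnd.oci.image.config.v1+json",
	"application/vnd.docker.image.rootfs.diff.tar.gzip":         "application/vnd.oci.image.layer.v1.tar+gzip",
}

func main() {
	for docker, oci := range dockerToOCI {
		fmt.Printf("%s\n  -> %s\n", docker, oci)
	}
}
```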

Craga89

comment created time in 4 days

issue closedmoby/buildkit

docker build regression: GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.freedesktop.secrets was not provided by any .service files

Docker was working perfectly fine, but recently it stopped building the simplest possible Dockerfile, such as this one:

FROM ubuntu:19.04
RUN echo "hello world"

Here is the output:

$ DOCKER_BUILDKIT=1  docker build -t test .
[+] Building 0.6s (4/5)                                                                                                                                         
 => [internal] load build definition from Dockerfile                       0.0s
 => => transferring dockerfile: 37B                                        0.0s
 => [internal] load .dockerignore                                          0.0s
 => => transferring context: 2B                                            0.0s
 => ERROR [internal] load metadata for docker.io/library/ubuntu:19.04      0.4s
 => ERROR [1/2] FROM docker.io/library/ubuntu:19.04                        0.1s
 => => resolve docker.io/library/ubuntu:19.04                              0.1s
------
 > [internal] load metadata for docker.io/library/ubuntu:19.04:
------
------
 > [1/2] FROM docker.io/library/ubuntu:19.04:
------
rpc error: code = Unknown desc = error getting credentials - err: exit status 1, out:
`GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.freedesktop.secrets
was not provided by any .service files`

Do you have any ideas of what might be wrong?

$ docker version
Client:
 Version:           18.09.7
 API version:       1.39
 Go version:        go1.10.8
 Git commit:        2d0083d
 Built:             Thu Jun 27 17:56:23 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.7
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.8
  Git commit:       2d0083d
  Built:            Thu Jun 27 17:23:02 2019
  OS/Arch:          linux/amd64
  Experimental:     false
$ docker info
Containers: 1
 Running: 0
 Paused: 0
 Stopped: 1
Images: 25
Server Version: 18.09.7
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: nvidia runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 894b81a4b802e4eb2a91d1ce216b8817763c29fb
runc version: 425e105d5a03fabd737a126ad93d62a9eeede87f
init version: fec3683
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 5.0.0-20-generic
Operating System: Ubuntu 19.04
OSType: linux
Architecture: x86_64
CPUs: 12
Total Memory: 62.84GiB
Name: Impedance
ID: PAJA:TZMR:JCJS:Y3CO:VWNZ:DDXQ:WHT3:U467:F7S4:BE37:VIH2:ZALQ
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine

WARNING: No swap limit support

closed time in 4 days

DimanNe

pull request commentmoby/moby

Adds a --chmod flag to ADD and COPY

@AkihiroSuda yes

Ahmadposten

comment created time in 4 days

issue commentmoby/buildkit

[Bug] --export-cache is generating a malformed v2 schema manifest. Missing platform property

If you want, we could add the oci-mediatypes=true option for cache export, as we do for images. Not ready to make it the default yet, as older registries don't support it.

Craga89

comment created time in 4 days

pull request commentmoby/buildkit

session: track sessions with a group construct

@hinshun PTAL. This fixes the issue described in https://github.com/moby/buildkit/issues/1432#issuecomment-611117415. I didn't make changes to the way the session interacts with frontends at the moment. I think more changes are needed there to really prepare for better shared session support, but this should get us into a better state for those changes.

tonistiigi

comment created time in 4 days

pull request commentmoby/buildkit

Dockerfile: update binaries

Looks like the error is coming from git. Not sure how it is related, but the same passed in https://travis-ci.org/github/moby/buildkit/builds/703329229

AkihiroSuda

comment created time in 5 days

pull request commentmoby/buildkit

Dockerfile: update binaries

@AkihiroSuda PTAL what happened to the master deploy https://travis-ci.org/github/moby/buildkit/jobs/703785290

AkihiroSuda

comment created time in 5 days

push eventtonistiigi/buildkit

Tonis Tiigi

commit sha 75810bbc7aafb3b4092f60575e0ff06ba2ab8f1d

pull: allow separate sessions for different parts of pull Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

view details

Tonis Tiigi

commit sha 6acceaa04b9aee2196e560d88de5445ab42de2c0

resolver: add credential cache As authenticator is short-lived seems harmless to cache credential values. This would help for remote builders where session roundtrips are not needed. It looks like containerd also asks credentials too aggressively. Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

view details

push time in 5 days

push eventtonistiigi/buildkit

Tonis Tiigi

commit sha 6ed25573def1be881d51359d210b758aee184c9b

session: track sessions with a group construct Avoid hidden session passing and allow one session to drop when multiple builds share a vertex. Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

view details

Tonis Tiigi

commit sha d1effeafe8bb59c9bb3be3870654f1759fe3fac4

pull: allow separate sessions for different parts of pull Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

view details

Tonis Tiigi

commit sha ad9bb6e4a9a623fae4848da98a7d5146ad705893

resolver: add credential cache As authenticator is short-lived seems harmless to cache credential values. This would help for remote builders where session roundtrips are not needed. It looks like containerd also asks credentials too aggressively. Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

view details

push time in 5 days

push eventtonistiigi/buildkit

Tonis Tiigi

commit sha 3e10750541fa1141c985f1f41d3f318ed990a6de

session: track sessions with a group construct Avoid hidden session passing and allow one session to drop when multiple builds share a vertex. Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

view details

Tonis Tiigi

commit sha bc5b3a7edc841cb6f2a74ea23c836f0a9d953c81

pull: allow separate sessions for different parts of pull Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

view details

Tonis Tiigi

commit sha 5ec7998252ceb5c6fd0474ee4183c4f9f5a96a46

resolver: add credential cache As authenticator is short-lived seems harmless to cache credential values. This would help for remote builders where session roundtrips are not needed. It looks like containerd also asks credentials too aggressively. Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

view details

push time in 5 days

push eventtonistiigi/buildkit

Tonis Tiigi

commit sha a9e16f3537d4f61d942adab192e45b68839bf01f

session: track sessions with a group construct Avoid hidden session passing and allow one session to drop when multiple builds share a vertex. Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

view details

Tonis Tiigi

commit sha c81d5584ed569068cc0482dd10e14ce00c925f9b

pull: allow separate sessions for different parts of pull Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

view details

Tonis Tiigi

commit sha 5a2c799da7896fffe3d72e7544dd05c129744c91

resolver: add credential cache As authenticator is short-lived seems harmless to cache credential values. This would help for remote builders where session roundtrips are not needed. It looks like containerd also asks credentials too aggressively. Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

view details

push time in 5 days

issue commentmoby/buildkit

[Bug] --export-cache is generating a malformed v2 schema manifest. Missing platform property

Platform is not a required property https://github.com/opencontainers/image-spec/blob/master/image-index.md#image-index-property-descriptions

platform object

This OPTIONAL property describes the minimum runtime requirements of the image. This property SHOULD be present if its target is platform-specific.
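A quick way to see this with the image-spec Go types (assuming the standard github.com/opencontainers packages): a descriptor with no Platform set simply marshals without a platform field, since the field is omitempty.

```go
package main

import (
	"encoding/json"
	"fmt"

	"github.com/opencontainers/go-digest"
	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)

func main() {
	// A manifest descriptor inside an index with no platform set; the digest
	// below is just the well-known empty-input sha256 used as a placeholder.
	desc := ocispec.Descriptor{
		MediaType: ocispec.MediaTypeImageManifest,
		Digest:    digest.Digest("sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"),
		Size:      1234,
	}
	out, _ := json.MarshalIndent(desc, "", "  ")
	fmt.Println(string(out)) // no "platform" key in the output
}
```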

Craga89

comment created time in 5 days

push eventmoby/buildkit

Akihiro Suda

commit sha ceb41d435098d180e3009708ebf9fc2bf613500f

Dockerfile: update binaries Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>

view details

Tõnis Tiigi

commit sha 7b723809ce0731d616f344931e0028875c293bf0

Merge pull request #1552 from AkihiroSuda/update-20200701 Dockerfile: update binaries

view details

push time in 5 days

PR merged moby/buildkit

Dockerfile: update binaries
+15 -25

0 comment

1 changed file

AkihiroSuda

pr closed time in 5 days

PR opened moby/buildkit

session: track sessions with a group construct

Avoid hidden session passing and allow one session to drop when multiple builds share a vertex.

Signed-off-by: Tonis Tiigi tonistiigi@gmail.com

+495 -402

0 comment

41 changed files

pr created time in 5 days

create branch tonistiigi/buildkit

branch : session-group

created branch time in 5 days

issue commentmoby/buildkit

docker build command outputs general information lines to stderr instead of stdout when buildKit is enabled.

@mdonoughe --iidfile

@bwateratmsft

The build output isn't files, it's images

Build output is whatever you define in --output, e.g. for stdout: docker build -o - . > t.tar

pratiksanglikar

comment created time in 6 days

PR opened moby/buildkit

progressui: fix logs time formatting

The old value ended with a "." when sec > 1000 and was plain wrong when sec > 10000.

Signed-off-by: Tonis Tiigi tonistiigi@gmail.com
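For illustration only: the snippet below is a guess at the failure mode described above (fixed-width truncation of a formatted seconds value), not the actual progressui code.

```go
package main

import "fmt"

func main() {
	// Truncating a formatted duration to a fixed width works for small values
	// but produces "1234." and "12345" style output for larger ones.
	for _, sec := range []float64{3.2, 1234.5, 12345.6} {
		s := fmt.Sprintf("%.1f", sec)
		if len(s) > 5 {
			s = s[:5] // fixed-width truncation loses the decimal part
		}
		fmt.Printf("truncated: %-7q full: %.1fs\n", s, sec)
	}
}
```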

+8 -1

0 comment

1 changed file

pr created time in 6 days

create branch tonistiigi/buildkit

branch : time-formatting

created branch time in 6 days

PR opened moby/buildkit

push: dedupe push handlers

This allows working around the containerd issue with tracking push status when pushing manifest lists.

https://github.com/containerd/containerd/issues/2706

FYI @hairyhenderson
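The general shape of such a workaround, sketched with containerd's images.Handler interface; this is an illustrative dedupe wrapper, not necessarily how the PR implements it.

```go
package push

import (
	"context"
	"sync"

	"github.com/containerd/containerd/images"
	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)

// dedupeHandler wraps an images.Handler so each descriptor digest is handled at
// most once, which is one way to avoid double-tracking push status when the same
// blob is reachable from several manifests in a manifest list.
func dedupeHandler(h images.Handler) images.Handler {
	var mu sync.Mutex
	seen := map[string]struct{}{}
	return images.HandlerFunc(func(ctx context.Context, desc ocispec.Descriptor) ([]ocispec.Descriptor, error) {
		mu.Lock()
		if _, ok := seen[desc.Digest.String()]; ok {
			mu.Unlock()
			return nil, images.ErrSkipDesc // already dispatched for this digest
		}
		seen[desc.Digest.String()] = struct{}{}
		mu.Unlock()
		return h.Handle(ctx, desc)
	})
}
```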

+101 -2

0 comment

2 changed files

pr created time in 6 days

create branch tonistiigi/buildkit

branch : push-fix

created branch time in 6 days

issue commentmoby/buildkit

Login to a private registry using command line arguments

buildkit can access multiple registries in a single build, so if we are talking about CLI flags that would fill in the authprovider, they would need to be a combination of host + user + password/token.

Monnoroch

comment created time in 6 days

delete tag tonistiigi/binfmt

delete tag : test0

delete time in 7 days

push eventtonistiigi/binfmt

Tonis Tiigi

commit sha 8c1c574610f92132ff78e554adb0fda1170ffa8d

github: add deploy target Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

view details

push time in 7 days

push eventtonistiigi/binfmt

Tonis Tiigi

commit sha 4577a35dd61da03e1ba51f27ee7bec9eea0bdc9f

update buildkit-helper source Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

view details

Tonis Tiigi

commit sha 8d4b69a7766952517ef990588adc4b107a2a12e8

add master builds pushing Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

view details

push time in 7 days

push eventtonistiigi/binfmt

Tonis Tiigi

commit sha 585f2e1ce2ad10492427cd3d833ae0c9ffa6beec

ci: move login to beginning This action redefines DOCKER_CONFIG :( Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

view details

push time in 8 days

push eventtonistiigi/binfmt

Tonis Tiigi

commit sha c03d078625b7384f83ee23715eee9683f9b1d265

add master builds pushing Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

view details

Tonis Tiigi

commit sha b95a108c4ee9ea5f2393d1520fa1901b30e1284f

update buildkit-helper source Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

view details

push time in 8 days

push eventtonistiigi/binfmt

Tonis Tiigi

commit sha 4ea5e163bba1030d25990300ebe703b5e8a4f05d

add master builds pushing Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

view details

Tonis Tiigi

commit sha 1c73d3b120c493d67e3eaeb5fbab959275c08e21

update buildkit-helper source Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

view details

push time in 8 days

push eventtonistiigi/binfmt

Tonis Tiigi

commit sha 910f5c0385c76813c710b6cac146ebf0a44bc638

add master builds pushing Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

view details

Tonis Tiigi

commit sha f909f96e2d00e02ab21591152206c652d0451098

update buildkit-helper source Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

view details

push time in 8 days

push eventtonistiigi/binfmt

Tonis Tiigi

commit sha faa3753e94c44102ecf8dffdbff800a3369b685d

add ci workflow Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

view details

push time in 8 days

push eventtonistiigi/binfmt

Tonis Tiigi

commit sha 8dccbd062b5a46b561985899544ef29a13947d3f

add bake build targets Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

view details

Tonis Tiigi

commit sha e52855428be1353d10be0c3aacc85023d57d7002

hack: add test script Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

view details

Tonis Tiigi

commit sha 40d363a21f84e9124d5c31934a2ec6b683310e84

hack: add ci script Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

view details

Tonis Tiigi

commit sha a08717a8faa3da37e099fb70acb9c9b18f618648

add ci workflow Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

view details

push time in 8 days

push eventtonistiigi/binfmt

Tonis Tiigi

commit sha ea5a4a93dbc36786d81cbd76f92c7de398e5ad36

new binfmt installer program Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

view details

Tonis Tiigi

commit sha 43835f6c1340f1f44257c84b3485da09642d04a4

vendor: start vendoring dependencies Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

view details

Tonis Tiigi

commit sha 14db06b2b11c3efc40fe8a1d2ca6c7581ab1738f

move binfmt to project root Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

view details

Tonis Tiigi

commit sha 89246f9e498eb6cf6fbaa741845309f07c42a042

hack: update vendor scripts to buildx only Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

view details

Tonis Tiigi

commit sha 2828ad070dd2278714b2f8c0be84de334545df77

bake: add binaries target Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

view details

Tonis Tiigi

commit sha 4c99bcdf62746e16a5ac6d699f76cea923b7da89

hack: add linter Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

view details

Tonis Tiigi

commit sha c517102be809c2117fc04723159c8990748270e1

add bake build targets Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

view details

Tonis Tiigi

commit sha 02a96ceb2b8adbdcb151a07c7142c701b1f982a2

hack: add test script Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

view details

Tonis Tiigi

commit sha 5062c1b9fde904f55fe5196c2fe8ad7511fa902f

hack: add ci script Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

view details

Tonis Tiigi

commit sha 1bf083eb2f6f760d5105758bcd335978bd8f1f7c

add ci workflow Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

view details

push time in 8 days

issue commentmoby/buildkit

.dockerignore doesn't work if dockerfile path != context path

This is expected; .dockerignore is relative to the context.

mjgallag

comment created time in 10 days

issue commentmoby/buildkit

how do buildkitd gc policies work?

Reference count tracking was changed a lot for v0.7 to fix some edge cases that might be the same ones you're hitting. Also, cache lookups were improved in https://github.com/moby/buildkit/pull/1498 (although that is not the same error you've pasted here).

jsravn

comment created time in 10 days

issue commentmoby/moby

Multistage Dockerfile does not work with WORKDIR

If we can think of some microformat to signify that the path is meant to be relative, we could theoretically add this, but we're way past the point where we could just change the behavior. I think even changing paths that begin with ./ would be risky. The original thinking behind keeping the path absolute-only was probably that when you use COPY without --from, the path doesn't need to start with / either, but is still always absolute to the context.

Miouge1

comment created time in 10 days

push eventmoby/buildkit

Erik Sipsma

commit sha 9e0870415dadafa23e221322ae61369b40fd20e3

Test media type after first push in TestBuildExportWithUncompressed There is a bug in the way images are pushed that results in oci types being used for layers even when docker types should be used, but only in single-layer images. Signed-off-by: Erik Sipsma <erik@sipsma.dev>

view details

Erik Sipsma

commit sha 61bd44337b78b80044c4f8e29d1b55b3c9beead4

image export: Use correct media type when creating new layer blobs. There are a few bugs in the image export related code being fixed here. GetMediaTypeForLayers was iterating over diffPairs in the wrong order, resulting in it always returning nil for images with more than one layer. This actually worked most of the time because it accidentally triggered a separate codepath meant to handle v0.6 migrations where mediatypes left empty get filled in. However, fixing that bug revealed another existing bug where the "oci" parameter in the image exporter was not being honored except when the v0.6 codepath got followed, resulting in images to always have oci layer media types even when docker types are used for the rest of the image descriptors. Due to the interaction between these various bugs, the only practical end effect previously was that single-layer images could use the wrong layer media type. An existing test has been expanded to cover that case in a previous commit. Signed-off-by: Erik Sipsma <erik@sipsma.dev>

view details

Erik Sipsma

commit sha 43b58b2016fc8b2257404143e4fc99920b8d15a0

client test: log skip in checkAllReleasable. Signed-off-by: Erik Sipsma <erik@sipsma.dev>

view details

Tõnis Tiigi

commit sha 17c11d9a97aec64e87e420c6c8f552fdb6164146

Merge pull request #1541 from sipsma/fix-media-type image export: Use correct media type when creating new layer blobs.

view details

push time in 11 days

issue commentmoby/buildkit

Unable to `go get` locally

You need to copy the replace rules from buildkit's go.mod to your project's go.mod
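Roughly, that means mirroring buildkit's replace block in your own module definition. The module paths and versions below are placeholders only; copy the real entries from the go.mod of the buildkit revision you depend on.

```
// go.mod of the project that imports buildkit (illustrative placeholders)
module example.com/yourproject

go 1.13

require github.com/moby/buildkit v0.7.1

replace (
	// buildkit pins forks/specific revisions of some dependencies, so the same
	// replace rules must be repeated here for `go build` to resolve them;
	// the entries below are placeholders, not the actual list
	github.com/example/upstream-a => github.com/example/fork-a v0.0.0-20200101000000-0123456789ab
	github.com/example/upstream-b => github.com/example/fork-b v1.2.3
)
```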

maciej-gol

comment created time in 11 days

Pull request review commentmoby/buildkit

image export: Use correct media type when creating new layer blobs.

 func (ic *ImageWriter) Commit(ctx context.Context, inp exporter.Source, oci bool 	return &idxDesc, nil } -func (ic *ImageWriter) exportLayers(ctx context.Context, compression blobs.CompressionType, refs ...cache.ImmutableRef) ([][]blobs.DiffPair, error) {+func (ic *ImageWriter) exportLayers(ctx context.Context, compression blobs.CompressionType, oci bool, refs ...cache.ImmutableRef) ([][]blobs.DiffPair, error) {

The oci bool is unused in here now.

sipsma

comment created time in 11 days

Pull request review commentmoby/buildkit

image export: Use correct media type when creating new layer blobs.

 func detectCompressionType(cr io.Reader) (CompressionType, error) {
 }

 // GetMediaTypeForLayers retrieves media type for layer from ref information.
+// If there is a mismatch in diff IDs or blobsums between the diffPairs and
+// corresponding ref, the returned slice will have an empty media type for
+// that layer and all parents.
 func GetMediaTypeForLayers(diffPairs []DiffPair, ref cache.ImmutableRef) []string {
-	tref := ref
+	layerTypes := make([]string, len(diffPairs))
+	if ref == nil {
+		return layerTypes
+	}
+
+	tref := ref.Clone()
+	// diffPairs is ordered parent->child, but we iterate over refs from child->parent,
+	// so iterate over diffPairs in reverse
+	for i := range diffPairs {
+		dp := diffPairs[len(diffPairs)-1-i]
-	layerTypes := make([]string, 0, len(diffPairs))
-	for _, dp := range diffPairs {
 		if tref == nil {
-			return nil
+			break
 		}
-
 		info := tref.Info()
 		if !(info.DiffID == dp.DiffID && info.Blob == dp.Blobsum) {
-			return nil
+			tref.Release(context.TODO())
+			break
 		}
+		layerTypes[len(diffPairs)-1-i] = info.MediaType
-		layerTypes = append(layerTypes, info.MediaType)
-		tref = tref.Parent()
+		parent := tref.Parent()
+		tref.Release(context.TODO())
+		tref = parent
 	}

In the odd case where the parent chain length does not match the length of diffPairs (which shouldn't happen), we should make sure we don't leave tref referenced after returning.

sipsma

comment created time in 11 days

Pull request review commentmoby/buildkit

image export: Use correct media type when creating new layer blobs.

 func detectCompressionType(cr io.Reader) (CompressionType, error) {
 }

 // GetMediaTypeForLayers retrieves media type for layer from ref information.
+// If there is a mismatch in diff IDs or blobsums between the diffPairs and
+// corresponding ref, the returned slice will have an empty media type for
+// that layer and all parents.
 func GetMediaTypeForLayers(diffPairs []DiffPair, ref cache.ImmutableRef) []string {
-	tref := ref
+	layerTypes := make([]string, len(diffPairs))
+	if ref == nil {
+		return layerTypes
+	}
+
+	tref := ref.Clone()
+	// diffPairs is ordered parent->child, but we iterate over refs from child->parent,
+	// so iterate over diffPairs in reverse
+	for i := range diffPairs {
+		dp := diffPairs[len(diffPairs)-1-i]
-	layerTypes := make([]string, 0, len(diffPairs))
-	for _, dp := range diffPairs {
 		if tref == nil {
-			return nil
+			break
 		}
-
 		info := tref.Info()
 		if !(info.DiffID == dp.DiffID && info.Blob == dp.Blobsum) {
-			return nil
+			tref.Release(context.TODO())
+			break
 		}
+		layerTypes[len(diffPairs)-1-i] = info.MediaType
-		layerTypes = append(layerTypes, info.MediaType)
-		tref = tref.Parent()
+		parent := tref.Parent()
+		tref.Release(context.TODO())
+		tref = parent
 	}
 	return layerTypes
 }
+
+var toDockerLayerType = map[string]string{
+	ocispec.MediaTypeImageLayer:            images.MediaTypeDockerSchema2Layer,
+	images.MediaTypeDockerSchema2Layer:     images.MediaTypeDockerSchema2Layer,
+	ocispec.MediaTypeImageLayerGzip:        images.MediaTypeDockerSchema2LayerGzip,
+	images.MediaTypeDockerSchema2LayerGzip: images.MediaTypeDockerSchema2LayerGzip,
+}
+
+var toOCILayerType = map[string]string{
+	ocispec.MediaTypeImageLayer:            ocispec.MediaTypeImageLayer,
+	images.MediaTypeDockerSchema2Layer:     ocispec.MediaTypeImageLayer,
+	ocispec.MediaTypeImageLayerGzip:        ocispec.MediaTypeImageLayerGzip,
+	images.MediaTypeDockerSchema2LayerGzip: ocispec.MediaTypeImageLayerGzip,
+}
+
+func ConvertLayerMediaType(mediaType string, oci bool) (converted string, err error) {
+	if oci {
+		converted = toOCILayerType[mediaType]
+	} else {
+		converted = toDockerLayerType[mediaType]
+	}
+	if converted == "" {
+		return "", fmt.Errorf("unhandled layer media type %q", mediaType)

I think ignoring the error (and maybe printing a warning) would be more appropriate, especially because the conversion doesn't currently handle the alternative compressions (zstd, etc.).

sipsma

comment created time in 11 days

issue commentmoby/buildkit

how do buildkitd gc policies work?

This looks like some old version of buildkit. Make sure you're running at least v0.7

jsravn

comment created time in 11 days

issue commentmoby/buildkit

how do buildkitd gc policies work?

Can you post your /var/lib/buildkit/cache.db and /var/lib/buildkit/runc-overlayfs/*.db files?

jsravn

comment created time in 11 days

issue commentmoby/buildkit

grpc: received message larger than max (4564294 vs. 4194304)

Using master buildctl and moby/buildkit:master image should show you a better stacktrace when the error happens.

nlg521

comment created time in 12 days

issue commentmoby/buildkit

grpc: received message larger than max (4564294 vs. 4194304)

Can it be set as a parameter?

Setting it as a parameter seems like the wrong solution. If it is too small, we should set a bigger value. But bigger values are an indicator that something else is wrong.

nlg521

comment created time in 12 days

issue commentmoby/buildkit

grpc: received message larger than max (4564294 vs. 4194304)

Can you test with master buildkit/buildctl, with --debug set in buildctl? You should get a better trace. I wonder if Maven is producing so many logs that it overflows the object. And maybe the logs are not newline-separated, causing them to pile up in a single object. In the first case, we might need to drop logs, as pulling 5MB of logs in this case unnecessarily slows down the build. For the second one, we should have a hard limit on log size.

If that is the case, it looks partly like a Maven error as well. BuildKit is not enabling a tty when calling Maven, so it shouldn't repeatedly spit out the ANSI codes.

nlg521

comment created time in 12 days

pull request commentmoby/buildkit

image export: Use correct media type when creating new layer blobs.

I'll update to do the mediatype conversion on a higher level in the exporter rather than GetDiffPairs, which I imagine is closer to what you were originally suggesting.

You can add a public helper function that takes an array of descriptors and makes sure they are either Docker or OCI. Then if someone calls GetRemote() and wants layers with a specific mediatype (e.g. maybe in the exported cache), they can reuse this.
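A sketch of what such a helper could look like. The names and exact conversion tables here are illustrative; ConvertLayerMediaType from the PR plays the role of the lookup done inline below.

```go
package exporterutil

import (
	"github.com/containerd/containerd/images"
	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)

// Layer media type conversion tables between the Docker and OCI formats.
var toOCILayer = map[string]string{
	images.MediaTypeDockerSchema2Layer:     ocispec.MediaTypeImageLayer,
	images.MediaTypeDockerSchema2LayerGzip: ocispec.MediaTypeImageLayerGzip,
	ocispec.MediaTypeImageLayer:            ocispec.MediaTypeImageLayer,
	ocispec.MediaTypeImageLayerGzip:        ocispec.MediaTypeImageLayerGzip,
}

var toDockerLayer = map[string]string{
	ocispec.MediaTypeImageLayer:            images.MediaTypeDockerSchema2Layer,
	ocispec.MediaTypeImageLayerGzip:        images.MediaTypeDockerSchema2LayerGzip,
	images.MediaTypeDockerSchema2Layer:     images.MediaTypeDockerSchema2Layer,
	images.MediaTypeDockerSchema2LayerGzip: images.MediaTypeDockerSchema2LayerGzip,
}

// NormalizeLayerMediaTypes rewrites the media types of the given descriptors so
// they are consistently OCI (oci=true) or consistently Docker (oci=false).
// Unrecognized media types (e.g. zstd-compressed layers) are left untouched.
func NormalizeLayerMediaTypes(descs []ocispec.Descriptor, oci bool) []ocispec.Descriptor {
	out := make([]ocispec.Descriptor, len(descs))
	for i, d := range descs {
		table := toDockerLayer
		if oci {
			table = toOCILayer
		}
		if mt, ok := table[d.MediaType]; ok {
			d.MediaType = mt
		}
		out[i] = d
	}
	return out
}
```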

sipsma

comment created time in 12 days

issue commentmoby/buildkit

grpc: received message larger than max (4564294 vs. 4194304)

=> => # Progress (5): 0.6/1.7 MB | 216/453 kB | 328/528 kB | 320/749 kB | 27 kB

This output is not from buildkit afaics.

We can increase the grpc message size, but we would need to understand what request is hitting the limit and under what conditions.

nlg521

comment created time in 12 days

issue commentmoby/buildkit

Large images using --cache-from have some layers come through with 0 bytes unexpectedly

I don't think it'll be easy to make a reproducible case, so if anyone has any ideas that can help me work out what parts might be important and what definitely won't be, that would be useful.

I understand, but we do need a test case to understand this. It could be an error in display, size accounting, a corrupt cache graph, etc. I guess it is possible for it to be size-related if some error gets dropped, but that seems unlikely at the moment.

tobymiller1

comment created time in 12 days

Pull request review commentmoby/buildkit

image export: Use correct media type when creating new layer blobs.

 func detectCompressionType(cr io.Reader) (CompressionType, error) {  // GetMediaTypeForLayers retrieves media type for layer from ref information. func GetMediaTypeForLayers(diffPairs []DiffPair, ref cache.ImmutableRef) []string {

Add a comment that, on error, a slice with partial data would be returned. This confused me in the beginning.

sipsma

comment created time in 12 days

pull request commentmoby/buildkit

client test: Fix check for whether sandbox has containerd

Thanks!

sipsma

comment created time in 12 days

push eventmoby/buildkit

Erik Sipsma

commit sha 463ec47ba07f5c20fd4c126362f7818520883bd5

client test: Fix check for whether sandbox has containerd Before this, the check was always returning that containerd wasn't running and thus skipping the rest of several test cases. Signed-off-by: Erik Sipsma <erik@sipsma.dev>

view details

Erik Sipsma

commit sha 83af7c3c94591656457efa8c003e49a17109c65f

client test: Fix TestBuildExportWithUncompressed This test had previously been accidentally not executing past the check for whether containerd was running. A previous commit fixes that check, this commit fixes a few bugs that had been going unnoticed in the containerd-specific part of the test's code. Signed-off-by: Erik Sipsma <erik@sipsma.dev>

view details

Tõnis Tiigi

commit sha 95010be66d7f567ebd8ed101d7be2210b28ae6d8

Merge pull request #1538 from sipsma/integ-fix client test: Fix check for whether sandbox has containerd

view details

push time in 12 days

PR merged moby/buildkit

client test: Fix check for whether sandbox has containerd

Before this, the check was always returning that containerd wasn't running and thus skipping the rest of several test cases.

Signed-off-by: Erik Sipsma erik@sipsma.dev

Here's a go playground w/ a stripped down example that I believe shows the interface{} check wasn't working as expected.

+65 -125

1 comment

6 changed files

sipsma

pr closed time in 12 days

issue commentdocker/buildx

how to buid image using base image on local repo

@manofthelionarmy That seems unrelated. Current versions of Docker have no concept of a local multi-arch image, therefore there is no way to load them, and loading needs to happen one architecture at a time. Described in https://github.com/docker/buildx#docker. But this is unrelated to accessing images as a base image.

ishii1648

comment created time in 14 days

Pull request review commentmoby/buildkit

client test: Fix check for whether sandbox has containerd

 func testFrontendImageNaming(t *testing.T, sb integration.Sandbox) {
 			require.Equal(t, exporterResponse["image.name"], imageName)

 			// check if we can pull (requires containerd)
-			var cdAddress string
-			if cd, ok := sb.(interface {
-				ContainerdAddress() string
-			}); !ok {
+			cdAddress := sb.ContainerdAddress()
+			if cdAddress == "" {
 				return

Should return a skip here so we would notice when something like this happens. Or, if a test has meaningful parts before the return, maybe log that "containerd-specific parts were skipped".

sipsma

comment created time in 14 days

push eventmoby/moby

Julien Pivotto

commit sha 87a7fc1ced93430cd301d55bec4ff5fb353493a5

Enable client on netbsd and dragonfly Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>

view details

Tõnis Tiigi

commit sha 33fba35d42e7ffad4c770391bff568a55abebbc9

Merge pull request #41132 from roidelapluie/bsd Enable client on netbsd and dragonfly

view details

push time in 15 days
