Tõnis Tiigi (tonistiigi), Docker, San Francisco

push event docker/buildx

Brian Goff

commit sha 6634f1e75c3663ec41865611dcbb2a25a68a93b7

Support reading from env on bake --set <t>.args This works just like the `build` command where if you have `--build-arg FOO`, it will read the variable from env and only set a value if the variable is defined. Signed-off-by: Brian Goff <cpuguy83@gmail.com>


Tõnis Tiigi

commit sha f5c267387883d3d4612aa33aefdb5f97c5842948

Merge pull request #184 from cpuguy83/bake_args_from_env Support reading from env on bake --set <t>.args


push time in 18 hours

PR merged docker/buildx

Support reading from env on bake --set <t>.args

This works just like the build command where if you have --build-arg FOO, it will read the variable from env and only set a value if the variable is defined.

+55 -8

2 comments

2 changed files

cpuguy83

pr closed time in 18 hours

issue comment moby/buildkit

Build fails when context retrieves filename with newline

..., but why? 😢

sudo-bmitch

comment created time in a day

pull request comment moby/buildkit

Change result type to array of refs

Should we change the following struct/interfaces in this PR too?

I think it makes sense to do this when we can update the exporters as well. It may even be needed to pass the ability to receive arrays based on the exporter. All current builtin exporters should support it, but the current moby exporter would be problematic (that being said, there is a branch where the moby exporter doesn't exist any more).

If we don't add all that atm, then we should make sure to error whenever an array with more than one item is returned. Also, we wouldn't add the api cap now, but only when it is actually possible to return multiple items.
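A minimal sketch of that interim guard, using hypothetical names rather than buildkit's actual types:

package solver

import "fmt"

// Ref is a hypothetical stand-in for a solver result reference.
type Ref interface{}

// singleRef rejects results carrying more than one ref until the
// exporters can actually consume arrays.
func singleRef(refs []Ref) (Ref, error) {
	if len(refs) > 1 {
		return nil, fmt.Errorf("exporter supports only a single ref, got %d", len(refs))
	}
	if len(refs) == 0 {
		return nil, nil
	}
	return refs[0], nil
}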

envisioning something like this?

Ideally I thought repeated Ref and no repeated id inside the Ref but I guess that would mean yet another type for the map, so not that important.

Where should we write tests?

Tests would come when it is possible to return multiple items. Atm it just needs to keep everything working. CI is not passing atm, btw.

hinshun

comment created time in a day

pull request comment moby/buildkit

Inherit extended agent so we get modern sign hashes

@tlbdk We should probably disable calling ExtendedAgent.Extension() by masking it in readOnlyAgent, right?
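A rough sketch of that masking, against x/crypto's agent package (an assumption of how it could look, not buildkit's actual code):

package agentfwd

import (
	"errors"

	"golang.org/x/crypto/ssh"
	"golang.org/x/crypto/ssh/agent"
)

var errReadOnly = errors.New("agent: read-only")

// readOnlyAgent embeds agent.ExtendedAgent so that SignWithFlags (and with
// it rsa-sha2-* signatures) keeps being forwarded, while every mutating
// call and Extension() are masked.
type readOnlyAgent struct {
	agent.ExtendedAgent
}

func (a readOnlyAgent) Add(agent.AddedKey) error   { return errReadOnly }
func (a readOnlyAgent) Remove(ssh.PublicKey) error { return errReadOnly }
func (a readOnlyAgent) RemoveAll() error           { return errReadOnly }
func (a readOnlyAgent) Lock([]byte) error          { return errReadOnly }
func (a readOnlyAgent) Unlock([]byte) error        { return errReadOnly }

// Extension is masked as discussed above.
func (a readOnlyAgent) Extension(string, []byte) ([]byte, error) {
	return nil, agent.ErrExtensionUnsupported
}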

tlbdk

comment created time in a day

issue comment moby/buildkit

SSH agent forward downgrades signing algorithm to sha-rsa (from rsa-sha2-512)

See https://github.com/moby/buildkit/blob/master/.github/CONTRIBUTING.md: all the dev/test flows are containerized, so as long as you have Docker you should be good to go.

tlbdk

comment created time in 2 days

issue comment moby/buildkit

SSH agent forward downgrades signing algorithm to sha-rsa (from rsa-sha2-512)

I'm guessing, but could it be that it needs to be ExtendedAgent here:

That seems right. Can you do a PR?

tlbdk

comment created time in 2 days

issue comment moby/buildkit

BuildKit builds (via Docker) are broken if /etc/hosts or /etc/resolv.conf is replaced

What version of docker is this?

rassie

comment created time in 2 days

created tag moby/buildkit

tag dockerfile/1.1.4

concurrent, cache-efficient, and Dockerfile-agnostic builder toolkit

created time in 7 days

created tag moby/buildkit

tag dockerfile/1.1.4-experimental

concurrent, cache-efficient, and Dockerfile-agnostic builder toolkit

created time in 7 days

delete tag moby/buildkit

delete tag: dockerfile/1.1.4

delete time in 7 days

delete tag moby/buildkit

delete tag: dockerfile/1.1.4-experimental

delete time in 7 days

created tag moby/buildkit

tag dockerfile/1.1.4-experimental

concurrent, cache-efficient, and Dockerfile-agnostic builder toolkit

created time in 7 days

created tag moby/buildkit

tag dockerfile/1.1.4

concurrent, cache-efficient, and Dockerfile-agnostic builder toolkit

created time in 7 days

created tag moby/buildkit

tag v0.6.3

concurrent, cache-efficient, and Dockerfile-agnostic builder toolkit

created time in 8 days

push event moby/buildkit

Akihiro Suda

commit sha f2c90ce0a05b0c6c5f27e8c2c136074b6bc8c018

CONTRIBUTING.md: fix broken link Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>


Tõnis Tiigi

commit sha 5c9365b6f4c2232a2d743a5792a2e708d7790fc4

Merge pull request #1259 from AkihiroSuda/contributing-md-fix-link CONTRIBUTING.md: fix broken link


push time in 8 days

PR merged moby/buildkit

CONTRIBUTING.md: fix broken link

Signed-off-by: Akihiro Suda akihiro.suda.cz@hco.ntt.co.jp

+2 -2

0 comments

1 changed file

AkihiroSuda

pr closed time in 8 days

PR opened docker/engine

[19.03] vendor: update buildkit to 928f3b48

Brings in: https://github.com/moby/buildkit/pull/1257 cache: fix possible concurrent maps write on parent release

Signed-off-by: Tonis Tiigi tonistiigi@gmail.com

+5 -7

0 comments

2 changed files

pr created time in 8 days

create branch tonistiigi/docker

branch: 1903-update-buildkit

created branch time in 8 days

issue closed moby/buildkit

how to share host system's composer cache docker build through dockerfile?

I've been searching and trying for over an hour and I'm about to give up, so I thought I would ask here.

I'm willing to use the latest, experimental build, and enable buildkit via DOCKER_BUILDKIT=1

I want to do something that should be simple yet seems impossible

this is my dockerfile

FROM composer:1.8 as vendor

COPY database/ database/

COPY composer.json composer.json
COPY composer.lock composer.lock

RUN composer install \
    --ignore-platform-reqs \
    --no-interaction \
    --no-plugins \
    --no-scripts \
    --prefer-dist

This is a rather large composer.json, so every time I build, it redownloads every package and takes close to a minute just for this part. When I run the same command on my host everything is already cached, so it takes around 5 seconds.

I just want to share my host's ~/.composer/cache folder with the /tmp/cache folder on the image so that my builds run a lot faster and Composer can use a cache

I've tried using VOLUME but then found out this is not the intended use of VOLUME. I tried using ADD/COPY but found out that you can't ADD/COPY files that are outside the relative path of the folder housing the Dockerfile.

Finally I found a thread on SO that claims that BuildKit is the solution, using the --mount switch with the RUN command. But I still can't figure out how to do it; it feels like this only allows sharing a cache between build stages but doesn't help with sharing a cache from the host system.

If I can't share the cache from the host system, I'd be happy if at least the cache persisted between builds (I don't mean between build stages inside the Dockerfile but I mean, if I run docker build multiple times, that it won't re-download half the internet every single time)

Hoping someone can clear this up for me.. many thanks

closed time in 8 days

vesper8

issue comment moby/buildkit

Cache pushed from one machine can not be reused on another machine

@kindritskyiMax And that is even if you remove your local cache to make sure the remote cache is used? In https://gitlab.com/kindritskiy.m/docker-cache-issue/-/jobs/348012533 I also see that the remote cache was used, so is it that cache exported in CI only works when importing to CI machines (with fresh state)?

Or does it have something to do with exporting cache that has already been imported, as happens in https://gitlab.com/kindritskiy.m/docker-cache-issue/-/jobs/348012533?

kindritskyiMax

comment created time in 9 days

issue comment moby/buildkit

Cache pushed from one machine can not be reused on another machine

@kindritskyiMax Yes, it should work if you switch machines. Did you try whether my cache works for you? So are you saying that --cache-to works for you (even when switching machines) but does not work if you export from a specific machine?

kindritskyiMax

comment created time in 9 days

issue comment moby/buildkit

Cache pushed from one machine can not be reused on another machine

I can reproduce your case, but if I push my own image with docker buildx build --cache-to type=registry,ref=tonistiigi/build-cache-issue:latest,mode=max . then running docker buildx build --cache-from tonistiigi/build-cache-issue:latest . seems to work fine for the whole build.

kindritskyiMax

comment created time in 9 days

push event tonistiigi/buildkit

Tonis Tiigi

commit sha a393a767f8d114e0044c7efb3a9168ca28950286

cache: fix possible concurrent maps write on parent release Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>


push time in 9 days

push event tonistiigi/buildkit

Tonis Tiigi

commit sha 19558904457962c46fc9ae0fc066f903222e112d

cache: fix possible concurrent maps write on parent release Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>


push time in 9 days

PR opened moby/buildkit

cache: fix possible concurrent maps write on parent release

19.03 version of https://github.com/moby/buildkit/pull/1256

@tiborvass @andrewhsu

Signed-off-by: Tonis Tiigi tonistiigi@gmail.com

+6 -6

0 comments

2 changed files

pr created time in 9 days

create branch tonistiigi/buildkit

branch: 1903-fix-parent-release

created branch time in 9 days

PR opened moby/buildkit

cache: fix possible concurrent maps write on parent release

fixes https://github.com/moby/buildkit/issues/1250

Signed-off-by: Tonis Tiigi tonistiigi@gmail.com

@bpaquet @tiborvass

+6 -1

0 comments

2 changed files

pr created time in 9 days

create branch tonistiigi/buildkit

branch: fix-parent-release

created branch time in 9 days

push event moby/buildkit

Tonis Tiigi

commit sha 565deba34208f779e8c99432d7a73b86722b2c6d

blobs: allow alternative compare-with-parent diff Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>


Tõnis Tiigi

commit sha 2ceaa119e4fcc6c5529d4db813650996da334bef

Merge pull request #1248 from tonistiigi/parent-diff blobs: allow alternative compare-with-parent diff


push time in 9 days

PR merged moby/buildkit

blobs: allow alternative compare-with-parent diff

This adds an alternative differ method that can be used instead of regular containerd Compare. We will use this in docker where storage/diff is managed by moby layerstore.

Signed-off-by: Tonis Tiigi tonistiigi@gmail.com

+40 -26

0 comments

1 changed file

tonistiigi

pr closed time in 9 days

push event moby/buildkit

Tonis Tiigi

commit sha 044271e0adf40556077d27c789c5a6777eb5178f

exporter: add canonical and dangling image naming Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>


Tonis Tiigi

commit sha 6c70bacf8e9ddfd8c9088a7f3a48c26d2f8299ff

readme: document available options for image output Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>


Tõnis Tiigi

commit sha e486c1193f105b73113024b7d88cf8533233094d

Merge pull request #1247 from tonistiigi/dangling-naming exporter: add canonical and dangling image naming


push time in 9 days

PR merged moby/buildkit

exporter: add canonical and dangling image naming

New exporter attrs that allow naming dangling images (without name, only prefix) and canonical references that also contain the image digest.

Signed-off-by: Tonis Tiigi tonistiigi@gmail.com

+61 -23

2 comments

2 changed files

tonistiigi

pr closed time in 9 days

issue comment moby/buildkit

Cache pushed from one machine can not be reused on another machine

Do you mean these two lines

You can't use the same ref on --cache-* and -t because they are different objects and pushed separately. (The exception here would be inline cache, which would not push a separate object but append metadata to the image config.)

kindritskyiMax

comment created time in 9 days

push event moby/buildkit

Akihiro Suda

commit sha 14d5f06ed28d24b1c941e14dd28f5ca7ee0fee57

examples/kubernetes: use Parallel mode for StatefulSet Parallel mode relaxes the pod creation order constraint. https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#parallel-pod-management Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>


Tõnis Tiigi

commit sha 5afa48a5a6bf2c72ae9d2e93efdc9ff0b6d8c42d

Merge pull request #1255 from AkihiroSuda/statefulset-parallel examples/kubernetes: use Parallel mode for StatefulSet


push time in 9 days

PR merged moby/buildkit

examples/kubernetes: use Parallel mode for StatefulSet

Parallel mode relaxes the pod creation order constraint.

https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#parallel-pod-management

Signed-off-by: Akihiro Suda akihiro.suda.cz@hco.ntt.co.jp

+3 -1

0 comments

3 changed files

AkihiroSuda

pr closed time in 9 days

push event moby/buildkit

Pablo Chico de Guzman

commit sha 7a9f5e7696850de8a5817e2b01d6c2f28738a878

Used by Okteto Cloud Signed-off-by: Pablo Chico de Guzman <pchico83@gmail.com>


Tõnis Tiigi

commit sha 1642b9ce917cb648ad37d19271c42f9910a6c458

Merge pull request #1253 from pchico83/okteto Used by Okteto Cloud


push time in 9 days

PR merged moby/buildkit

Used by Okteto Cloud
+1 -0

1 comment

1 changed file

pchico83

pr closed time in 9 days

issue comment moby/buildkit

Cache pushed from one machine can not be reused on another machine

Please post a reproducible testcase that we could run to figure this out.

One thing I noticed is that you are using the same reference for the cache and your image, so unless this is just a mistake in the report, this is definitely wrong. Also, until recently the github registry didn't support the manifest lists that are used in the external cache format and multi-platform images, so I'm surprised you made it that far.

kindritskyiMax

comment created time in 10 days

issue comment moby/buildkit

fatal error: concurrent map writes

Thanks for the report. I had a quick look and noticed one possible case (https://github.com/moby/buildkit/blob/v0.6.2/cache/manager.go#L159) that may cause this, but this stacktrace is a bit weird and doesn't fully confirm it. If you have more stacktraces for this error, please post them as well.

bpaquet

comment created time in 10 days

Pull request review comment moby/buildkit

Run integration tests against dockerd

 ADD %s /dest/
 }
 
 func testDockerfileAddArchive(t *testing.T, sb integration.Sandbox) {
+	skipDockerd(t, sb)

We should make buildctl available for these tests.

SamWhited

comment created time in 13 days

issue comment moby/moby

--cache-from and Multi Stage: Pre-Stages are not cached

@glensc https://github.com/moby/buildkit/pull/777 https://github.com/moby/moby/pull/38882

Schnitzel

comment created time in 13 days

Pull request review comment moby/buildkit

Run integration tests against dockerd

+package integration
+
+import (
+	"bytes"
+	"context"
+	"fmt"
+	"io"
+	"io/ioutil"
+	"net"
+	"time"
+
+	"github.com/docker/docker/testutil/daemon"
+)
+
+const dockerdBinary = "dockerd"
+
+type logTAdapter struct {
+	Name string
+	Logs map[string]*bytes.Buffer
+}
+
+func (l logTAdapter) Logf(format string, v ...interface{}) {
+	if buf, ok := l.Logs[l.Name]; !ok || buf == nil {
+		l.Logs[l.Name] = &bytes.Buffer{}
+	}
+	fmt.Fprintf(l.Logs[l.Name], format, v...)
+}
+
+// InitDockerdWorker registers a dockerd worker with the global registry.
+func InitDockerdWorker() {
+	Register(&dockerd{})
+}
+
+type dockerd struct{}
+
+func (c dockerd) Name() string {
+	return dockerdBinary
+}
+
+func (c dockerd) New(cfg *BackendConfig) (b Backend, cl func() error, err error) {
+	if err := requireRoot(); err != nil {
+		return nil, nil, err
+	}
+
+	deferF := &multiCloser{}
+	cl = deferF.F()
+
+	defer func() {
+		if err != nil {
+			deferF.F()()
+			cl = nil
+		}
+	}()
+
+	workDir, err := ioutil.TempDir("", "integration")
+	if err != nil {
+		return nil, nil, err
+	}
+
+	cmd, err := daemon.NewDaemon(
+		workDir,
+		daemon.WithTestLogger(logTAdapter{
+			Name: "creatingDaemon",
+			Logs: cfg.Logs,
+		}),
+		daemon.WithContainerdSocket(""),
+	)
+	if err != nil {
+		return nil, nil, fmt.Errorf("new daemon error: %q, %s", err, formatLogs(cfg.Logs))
+	}
+
+	err = cmd.StartWithError()
+	if err != nil {
+		return nil, nil, err
+	}
+	deferF.append(cmd.StopWithError)
+
+	logs := map[string]*bytes.Buffer{}
+	if err := waitUnix(cmd.Sock(), 5*time.Second); err != nil {
+		return nil, nil, fmt.Errorf("dockerd did not start up: %q, %s", err, formatLogs(logs))
+	}
+
+	ctx, cancel := context.WithCancel(context.Background())
+	deferF.append(func() error { cancel(); return nil })
+
+	dockerAPI, err := cmd.NewClient()
+	if err != nil {
+		return nil, nil, err
+	}
+	deferF.append(dockerAPI.Close)
+
+	listener, err := net.Listen("tcp", ":0")
+	if err != nil {
+		return nil, nil, err
+	}
+	deferF.append(listener.Close)
+
+	go func() {
+		for {
+			tmpConn, err := listener.Accept()
+			if err != nil {
+				return
+			}
+			conn, err := dockerAPI.DialHijack(ctx, "/grpc", "h2c", nil)
+			if err != nil {
+				return
+			}
+
+			go func() {

actually, not sure if errgroup is needed but logging on errors would be nice

SamWhited

comment created time in 14 days

Pull request review comment moby/buildkit

Run integration tests against dockerd

(quotes the same testDockerfileAddArchive hunk as above, adding skipDockerd(t, sb))

curious what was the error for this one?

SamWhited

comment created time in 14 days

Pull request review comment moby/buildkit

Run integration tests against dockerd

(same dockerd worker hunk as quoted above, ending at:)

+	listener, err := net.Listen("tcp", ":0")

nit: this could be unix as well; that might be a bit cleaner

SamWhited

comment created time in 14 days

Pull request review comment moby/buildkit

Run integration tests against dockerd

 func testExporterTargetExists(t *testing.T, sb integration.Sandbox) {
 }
 
 func testTarExporterWithSocket(t *testing.T, sb integration.Sandbox) {
+	if os.Getenv("TEST_DOCKERD") == "1" {

curious what was the error for this one?

SamWhited

comment created time in 14 days

Pull request review comment moby/buildkit

Run integration tests against dockerd

(same dockerd worker hunk as quoted above, ending at the proxy go func() loop)

try to switch these to https://godoc.org/golang.org/x/sync/errgroup (or https://golang.org/pkg/sync/#WaitGroup)
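For reference, the two copy goroutines rewritten with errgroup might look roughly like this (a sketch with assumed names, not the actual patch):

package proxyutil

import (
	"io"
	"log"
	"net"

	"golang.org/x/sync/errgroup"
)

// pipe copies both directions between the accepted connection and the
// hijacked gRPC connection, logging copy errors instead of dropping them.
func pipe(a, b net.Conn) {
	var g errgroup.Group
	g.Go(func() error {
		// closing both ends unblocks the opposite io.Copy when one side finishes
		defer a.Close()
		defer b.Close()
		_, err := io.Copy(a, b)
		return err
	})
	g.Go(func() error {
		defer a.Close()
		defer b.Close()
		_, err := io.Copy(b, a)
		return err
	})
	if err := g.Wait(); err != nil {
		log.Printf("grpc proxy: %v", err)
	}
}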

SamWhited

comment created time in 14 days

Pull request review comment moby/buildkit

Run integration tests against dockerd

 func testStdinClosed(t *testing.T, sb integration.Sandbox) {
 }
 
 func testSSHMount(t *testing.T, sb integration.Sandbox) {
+	skipDockerd(t, sb)

curious what was the error for this one?

SamWhited

comment created time in 14 days

issue closed docker/buildx

Import multi-arch tarball for tag and push

In my workflow I usually have a step between build and push, in which I run some tests on the image (often in the form of a tarball or local image). When I use 'standard docker' I just load the image locally and test it that way, but when doing multi-platform builds that is a bit more of an issue, seeing I can't import multi-arch images(?).

What I figured would be the best way to do this is to export the image to tar or OCI-tar and then run the tests on that, and when tests are done, import the tarball and push it to docker. Here comes my issue: I can't for the life of me figure out how to load the tarball and then tag and push it to my registries...

Is there any documentation on this? Is it even possible?
I know that the docker image load and docker image import commands exist, but neither of those seem to support multi-arch images as of right now.

closed time in 14 days

Johannestegner

Pull request review comment moby/buildkit

client: add context to some test failures

 func testFileOpRmWildcard(t *testing.T, sb integration.Sandbox) {
 	require.Equal(t, true, fi.IsDir())
 
 	_, err = os.Stat(filepath.Join(destDir, "foo/target"))
-	require.Equal(t, true, os.IsNotExist(err))
+	if !os.IsNotExist(err) {
+		t.Errorf("expected %s/foo/target to not exist, got error %v", destDir, err)

There are more complicated cases in this repo where the comparison and the amount of extra code isn't so obvious to defend. I have no specific love for testify/require; I think it is fine, and tests read better and provide better information with it than without, just as go-check was fine (in moby) before it was replaced because someone had a different taste. The first project to use testify/require was swarmkit, from where it went to containerd and now here. I think the multiple switches of test frameworks in moby have been a mistake and I don't want to repeat that.

SamWhited

comment created time in 14 days

push event moby/moby

lzhfromustc

commit sha 49fbb9c9854ff18ad9304f435c7c6722b0b4cfdb

registry: add a critical section to protect authTransport.modReq Signed-off-by: Ziheng Liu <lzhfromustc@gmail.com>


Tõnis Tiigi

commit sha fee149e723dff096cb77cfa28f0eabc7b3830990

Merge pull request #40143 from lzhfromustc/IFP_modReq registry: add a critical section to protect authTransport.modReq


push time in 14 days

PR merged moby/moby

registry: add a critical section to protect authTransport.modReq (labels: area/distribution, kind/bugfix, status/2-code-review)

closes #39502

- What I did: I made sure that there would be no data race on tr.modReq. tr.modReq is a map. Among 5 usages of this field, 4 are protected by critical sections, but 1 delete operation is not. This is dangerous, because a data race on a map will crash all running goroutines.

- How I did it: Added tr.mu.Lock() and tr.mu.Unlock() to protect delete(tr.modReq, orig).

- How to verify it: All other 4 usages of tr.modReq are protected by tr.mu.Lock().

- Description for the changelog: NONE

+2 -0

2 comments

1 changed file

lzhfromustc

pr closed time in 14 days

issue closed moby/moby

registry: a delete operation should be in critical section

Description: authTransport.modReq is a map. Among its 5 usages, 4 are protected by authTransport.mu, but 1 is not protected.

Source code: registry/session.go, lines 102-164

func (tr *authTransport) RoundTrip(orig *http.Request) (*http.Response, error) {
  ...
  tr.mu.Lock()
  tr.modReq[orig] = req  //PROTECTED
  tr.mu.Unlock()
  ...
  resp, err := tr.RoundTripper.RoundTrip(req)
  if err != nil {
     delete(tr.modReq, orig)  //NOT PROTECTED
     return nil, err
  }
   if len(resp.Header["X-Docker-Token"]) > 0 {
     tr.token = resp.Header["X-Docker-Token"]
  }
  resp.Body = &ioutils.OnEOFReader{
     Rc: resp.Body,
     Fn: func() {
        tr.mu.Lock()
        delete(tr.modReq, orig)  //PROTECTED
        tr.mu.Unlock()
     },
  }
  return resp, nil
}

// CancelRequest cancels an in-flight request by closing its connection.
func (tr *authTransport) CancelRequest(req *http.Request) {
  type canceler interface {
     CancelRequest(*http.Request)
  }
  if cr, ok := tr.RoundTripper.(canceler); ok {
     tr.mu.Lock()
     modReq := tr.modReq[req]  //PROTECTED
     delete(tr.modReq, req)  //PROTECTED
     tr.mu.Unlock()
     cr.CancelRequest(modReq)
  }
}

Suggested Fix: Add tr.mu.Lock() and tr.mu.Unlock() to protect delete(tr.modReq, orig). I am requesting an unrelated PR now. After that one is closed, I can open a PR for this fix.
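In code, the suggested fix is just the missing critical section on the error path. A trimmed, self-contained sketch following the issue's own names (roundTrip here is an illustrative helper, not the real method):

package registry

import (
	"net/http"
	"sync"
)

type authTransport struct {
	http.RoundTripper
	mu     sync.Mutex
	modReq map[*http.Request]*http.Request
}

func (tr *authTransport) roundTrip(orig, req *http.Request) (*http.Response, error) {
	tr.mu.Lock()
	tr.modReq[orig] = req // PROTECTED
	tr.mu.Unlock()

	resp, err := tr.RoundTripper.RoundTrip(req)
	if err != nil {
		tr.mu.Lock()
		delete(tr.modReq, orig) // previously NOT PROTECTED; now under tr.mu like the other 4 usages
		tr.mu.Unlock()
		return nil, err
	}
	return resp, nil
}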

closed time in 14 days

lzhfromustc

Pull request review comment moby/buildkit

client: add context to some test failures

(quotes the same testFileOpRmWildcard hunk as above)

It's better because it is consistent with the tests in the repo. No other reason.

SamWhited

comment created time in 14 days

issue comment docker/buildx

Import multi-arch tarball for tag and push

Yes, https://github.com/moby/moby/pull/38738 https://gist.github.com/tonistiigi/5c86c720d196ce74d989ea37b325a621

Johannestegner

comment created time in 14 days

Pull request review comment moby/buildkit

client: add context to some test failures

(quotes the same testFileOpRmWildcard hunk as above)

You don't need Equal like the example above.

SamWhited

comment created time in 14 days

Pull request review comment moby/buildkit

client: add context to some test failures

(quotes the same testFileOpRmWildcard hunk as above)
require.True(t, os.IsNotExist(err), "expected %s/foo/target to not exist, got error %v", destDir, err)
// require.Equal(t, true, os.IsNotExist(err), "expected %s/foo/target to not exist, got error %v", destDir, err)
SamWhited

comment created time in 14 days

issue comment docker/buildx

docker buildx bake --print does not output default targets

Yes, this was not the expected behavior of --print, which is used to show the current active configuration, not to generate a build file. To make the JSON file usable you would need to add "group": {"default": ["addon", "aws"]} to it.

This is an interesting use-case though. It might make sense for us to implicitly create a default group containing all targets if none was specified.

Puneeth-n

comment created time in 14 days

issue comment docker/buildx

Import multi-arch tarball for tag and push

Yes, in 19.03 docker load does not support loading multi-arch images. You either need to use a local registry, or build a single-arch image for your local steps that you can later merge into a multi-arch image in the registry with buildx imagetools create.

Johannestegner

comment created time in 14 days

push event docker/buildx

Solomon Hykes

commit sha d7adb9ef6e8d89e4f2e4214609acd1859141eb38

Clarify documentation structure Move a paragraph in README to clarify where it fits in the structure. - Before the move, the paragraph seems to apply to the `--output=local` section when in fact it applies to the entire `--output` section. This is especially confusing for the sentence "if just the path is specified as a value, `buildx` will use the local exporter with this path as the destination". - After the move, it is clear that the paragraph applies to `--output`


Tõnis Tiigi

commit sha 8e92bfc8f0485d27c2d10582fb5377599fc621ad

Merge pull request #188 from shykes/patch-1 Clarify documentation structure


push time in 14 days

PR merged docker/buildx

Clarify documentation structure

Move a paragraph in README to clarify where it fits in the structure. I got confused by this, so I took 5 minutes to propose a version that would not have confused me.

  • Before the move, the paragraph seems to apply to the --output=local section when in fact it applies to the entire --output section. This is especially confusing for the sentence "if just the path is specified as a value, buildx will use the local exporter with this path as the destination".

  • After the move, it is clear that the paragraph applies to --output

+16 -17

0 comments

1 changed file

shykes

pr closed time in 14 days

push event moby/buildkit

Edgar Lee

commit sha 7846d924ff4020b19e420a1a032e78855711a7ec

Improve solver type godocs Signed-off-by: Edgar Lee <edgarl@netflix.com>


Edgar Lee

commit sha e8326b213b209e933d00e34b0ecf211a4717e50c

Fixup doc strings for solver types Signed-off-by: Edgar Lee <edgarl@netflix.com>


Tõnis Tiigi

commit sha 18f2c62285efc138f6cf86ba8626e6a4a46613fe

Merge pull request #1244 from hinshun/doc-solver-types Improve solver type godocs


push time in 14 days

PR merged moby/buildkit

Improve solver type godocs

I wanted to improve the godocs of the solver types based on:

  • https://github.com/moby/buildkit/blob/master/docs/solver.md
  • https://dockercommunity.slack.com/archives/C7S7A40MP/p1572918118341900
+57 -18

0 comments

1 changed file

hinshun

pr closed time in 14 days

push event tonistiigi/buildkit

Tonis Tiigi

commit sha 6c70bacf8e9ddfd8c9088a7f3a48c26d2f8299ff

readme: document available options for image output Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>


push time in 14 days

Pull request review comment moby/buildkit

exporter: add canonical and dangling image naming

 buildctl build ...\
   --import-cache type=registry,ref=docker.io/username/image
 ```
 
+Keys supported by image output:
+* `name=[value]`: image name
+* `push=true`: push after creating the image
+* `push-by-digest=true`: push unnamed image
+* `registry.insecure=true`: push to insecure HTTP registry

should normalize this to dashes

tonistiigi

comment created time in 14 days

pull request comment moby/buildkit

exporter: add canonical and dangling image naming

@AkihiroSuda done

tonistiigi

comment created time in 14 days

push event tonistiigi/buildkit

Tonis Tiigi

commit sha 3519bb8868f3c348341845f83989a86e40d34290

readme: document available options for image output Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>


push time in 14 days

Pull request review comment moby/buildkit

Improve solver type godocs

 type CacheLink struct {
 	Selector digest.Digest `json:",omitempty"`
 }
 
-// Op is an implementation for running a vertex
+// Op defines how the solver can evaluate the properties of a vertex operation.
+// An op is executed in the worker, and is retrieved from the vertex by the
+// value of `vertex.Sys()`. The solver is configured with a resolve function to
+// convert a `vertex.Sys()` into an `Op`.
 type Op interface {
 	// CacheMap returns structure describing how the operation is cached.
 	// Currently only roots are allowed to return multiple cache maps per op.
 	CacheMap(context.Context, int) (*CacheMap, bool, error)
+
 	// Exec runs an operation given results from previous operations.
 	Exec(ctx context.Context, inputs []Result) (outputs []Result, err error)
 }
 
 type ResultBasedCacheFunc func(context.Context, Result) (digest.Digest, error)
 
+// CacheMap is a description for calculating the cache key of an operation.
 type CacheMap struct {
-	// Digest is a base digest for operation that needs to be combined with
-	// inputs cache or selectors for dependencies.
+	// Digest returns a checksum for the operation. The operation result can be
+	// cached by a checksum that combines this digest and the cache keys of the
+	// operation's inputs.
+	//
+	// For example, in LLB this digest is a manifest digest for OCI images, or
+	// commit SHA for git sources.
 	Digest digest.Digest
-	Deps   []struct {
-		// Optional digest that is merged with the cache key of the input
+
+	// Deps contain optional selectors or content-based cache functions for its
+	// inputs.
+	Deps []struct {
+		// Selector is a digest that is merged with the cache key of the input.

You can add to make it more clear: "Selectors are not merged with the result of the ComputeDigestFunc for this input."

hinshun

comment created time in 14 days

Pull request review comment moby/buildkit

Improve solver type godocs

 type CacheRecord struct {
 	key          *CacheKey
 }
 
-// CacheManager implements build cache backend
+// CacheManager determine if there is a result that matches the cache keys

determines

hinshun

comment created time in 14 days

PR opened moby/buildkit

blobs: allow alternative compare-with-parent diff

This adds an alternative differ method that can be used instead of regular containerd Compare. We will use this in docker where storage/diff is managed by moby layerstore.

Signed-off-by: Tonis Tiigi tonistiigi@gmail.com

+40 -26

0 comments

1 changed file

pr created time in 14 days

create branch tonistiigi/buildkit

branch: parent-diff

created branch time in 14 days

PR opened moby/buildkit

exporter: add canonical and dangling image naming

New exporter attrs that allow naming dangling images (without name, only prefix) and canonical references that also contain the image digest.

Signed-off-by: Tonis Tiigi tonistiigi@gmail.com

+48 -21

0 comments

1 changed file

pr created time in 14 days

create branch tonistiigi/buildkit

branch: dangling-naming

created branch time in 14 days

PR opened moby/buildkit

exporter: keep blob refs on images

These were erroneously removed in https://github.com/moby/buildkit/pull/1176

Signed-off-by: Tonis Tiigi tonistiigi@gmail.com

+13 -4

0 comments

1 changed file

pr created time in 14 days

create branch tonistiigi/buildkit

branch: restore-ref-labels

created branch time in 14 days

push event tonistiigi/docker

Tonis Tiigi

commit sha bc4a5f1a88e1633b992564c1205024a69dbb76be

builder-next: sync ensurelayer with flightcontrol Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>


push time in 14 days

issue closed moby/buildkit

Docker compose support

Is there any work done to support docker-compose build?

closed time in 14 days

FernandoMiguel

issue comment moby/buildkit

Docker compose support

https://github.com/docker/compose/pull/6865 is merged

Building compose with buildkit is also available with docker buildx bake https://github.com/docker/buildx#buildx-bake-options-target

FernandoMiguel

comment created time in 14 days

issue closed moby/moby

--cache-from and Multi Stage: Pre-Stages are not cached

Description

If you want to use a Multi Stage Build together with --cache-from it's very hard and complicated to load the Cache of Pre Stages, as --cache-from disables the lookup in the local cache (see https://github.com/moby/moby/issues/32612). The only way is to tag the pre-stage images as well and add them to the --cache-from, which is very complicated.

Steps to reproduce the issue:

  1. Assuming we have a Multistage Dockerfile like:
FROM busybox as builder
RUN echo "hello" > test

FROM busybox
COPY --from=builder test test
RUN echo test
  2. Building it the first time:
$ docker build -t test:latest .
Sending build context to Docker daemon  2.048kB
Step 1/5 : FROM busybox as builder
 ---> d20ae45477cb
Step 2/5 : RUN echo "hello" > test
 ---> Running in b5e871ebd251
 ---> 6889762613a0
Removing intermediate container b5e871ebd251
Step 3/5 : FROM busybox
 ---> d20ae45477cb
Step 4/5 : COPY --from=builder test test
 ---> f9ee9cc534a7
Removing intermediate container 8d76fd7eb6be
Step 5/5 : RUN echo test
 ---> Running in 5b768ed39212
test
 ---> b4a81a0e7c96
Removing intermediate container 5b768ed39212
Successfully built b4a81a0e7c96
Successfully tagged test:latest

So far all good.

  3. Now running it a second time, see how all layers are fully cached:

$ docker build -t test:latest .
Sending build context to Docker daemon  2.048kB
Step 1/5 : FROM busybox as builder
 ---> d20ae45477cb
Step 2/5 : RUN echo "hello" > test
 ---> Using cache
 ---> 6889762613a0
Step 3/5 : FROM busybox
 ---> d20ae45477cb
Step 4/5 : COPY --from=builder test test
 ---> Using cache
 ---> f9ee9cc534a7
Step 5/5 : RUN echo test
 ---> Using cache
 ---> b4a81a0e7c96
Successfully built b4a81a0e7c96
Successfully tagged test:latest
  4. Now running it with --cache-from test:latest:
$ docker build -t test:latest --cache-from test:latest .
Sending build context to Docker daemon  2.048kB
Step 1/5 : FROM busybox as builder
 ---> d20ae45477cb
Step 2/5 : RUN echo "hello" > test
 ---> Running in 89d43713b017
 ---> 18e01d7690cb
Removing intermediate container 89d43713b017
Step 3/5 : FROM busybox
 ---> d20ae45477cb
Step 4/5 : COPY --from=builder test test
 ---> Using cache
 ---> f9ee9cc534a7
Step 5/5 : RUN echo test
 ---> Using cache
 ---> b4a81a0e7c96
Successfully built b4a81a0e7c96
Successfully tagged test:latest

See how Step 2/5 : RUN echo "hello" > test is not using any cache. Interestingly, Step 4 is using the cache again, as it finds that cache within the test:latest image. So it actually builds the first-stage image but never uses it. A lot of the time the first stages are very heavy computations, like installing packages, building stuff etc. So we almost lose the niceness of Multi Stage Builds.

There is a way to fix this, with tagging the first stage image via --target builder:

$ docker build -t test-builder:latest --target builder .
Sending build context to Docker daemon  2.048kB
Step 1/5 : FROM busybox as builder
 ---> d20ae45477cb
Step 2/5 : RUN echo "hello" > test
 ---> Using cache
 ---> 18e01d7690cb
Successfully built 18e01d7690cb
Successfully tagged test-builder:latest

and then using both images for --cache-from:

$ docker build -t test:latest --cache-from test:latest --cache-from test-builder:latest .
Sending build context to Docker daemon  2.048kB
Step 1/5 : FROM busybox as builder
 ---> d20ae45477cb
Step 2/5 : RUN echo "hello" > test
 ---> Using cache
 ---> 18e01d7690cb
Step 3/5 : FROM busybox
 ---> d20ae45477cb
Step 4/5 : COPY --from=builder test test
 ---> Using cache
 ---> f9ee9cc534a7
Step 5/5 : RUN echo test
 ---> Using cache
 ---> b4a81a0e7c96
Successfully built b4a81a0e7c96
Successfully tagged test:latest

but IMHO that is super complicated and confusing.

I'm not 100% sure how we could fix this. Implementing --cache-from to also use the local cache as a secondary cache lookup would solve the problem (see https://github.com/moby/moby/issues/32612)

Output of docker version:

$ docker version
Client:
 Version:      17.06.1-ce
 API version:  1.30
 Go version:   go1.8.3
 Git commit:   874a737
 Built:        Thu Aug 17 22:53:38 2017
 OS/Arch:      darwin/amd64

Server:
 Version:      17.06.1-ce
 API version:  1.30 (minimum version 1.12)
 Go version:   go1.8.3
 Git commit:   874a737
 Built:        Thu Aug 17 22:54:55 2017
 OS/Arch:      linux/amd64
 Experimental: true

Output of docker info:

$ docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 1059
Server Version: 17.06.1-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host ipvlan macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 6e23458c129b551d5c9871e5174f6b1b7f6d1170
runc version: 810190ceaa507aa2727d7ae6f4790c76ec150bd2
init version: 949e6fa
Security Options:
 seccomp
  Profile: default
Kernel Version: 4.9.41-moby
Operating System: Alpine Linux v3.5
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 1.952GiB
Name: moby
ID: FHCJ:CF22:VRF6:Y4HR:BM3W:ATJ3:3QGW:AGO5:OTKL:W2ES:OM6Q:WZ5Y
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
 File Descriptors: 18
 Goroutines: 31
 System Time: 2017-09-03T19:59:13.529192672Z
 EventsListeners: 1
No Proxy: *.local, 169.254/16
Registry: https://index.docker.io/v1/
Experimental: true
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

closed time in 14 days

Schnitzel

issue comment moby/moby

--cache-from and Multi Stage: Pre-Stages are not cached

This has been addressed by buildkit. DOCKER_BUILDKIT=1 docker build --build-arg BUILDKIT_INLINE_CACHE=1 .

Schnitzel

comment created time in 14 days

issue opened dmcgowan/docker

Latest tag handling not backwards compatible

root@dev3:/tmp/foo# docker rmi foo
Error: No such image: foo
root@dev3:/tmp/foo# docker rmi foo:latest
Untagged: foo:latest
Untagged: foo:latest@sha256:a13aeea8b0193bc748eb87895204c257f51efca0a336ec01bf360df403c31660
Deleted: sha256:a13aeea8b0193bc748eb87895204c257f51efca0a336ec01bf360df403c31660

created time in 15 days

issue opened dmcgowan/docker

Storage stack should use platforms matcher, not a fixed platform

On building arm64 image on x86: Failed to solve with frontend dockerfile.v0: failed to build LLB: failed to get layerstore for {arm64 linux [] }: no layer storage backend configured for linux

Most platforms use the same storage that is needed for building/running non-native architecture images. In reality, only windows and linux have different storage, and even then linux storage may be used for building windows images in some cases.

Temporary workaround: https://github.com/moby/moby/commit/f6cea13da214fa32b99477b4912fb315ffeb2fc7#diff-1a1f3e7ad9b1d7584e2d3e7d0c4c3db9R969

created time in 15 days

issue opened dmcgowan/docker

Issues on matching images by prefix

root@dev3:/tmp/foo# docker rmi 9e4
Error: No such image: 9e4
root@dev3:/tmp/foo# docker rmi 9e43
Deleted: sha256:9e4313589c6c8805197ddfb517993466422b803c3645ad4d4c671fec0ff71eec

created time in 15 days

issue opened dmcgowan/docker

Dangling images handling not backwards compatible

 => => naming to <build>@sha256:9e4313589c6c8805197ddfb517993466422b803c3645ad4d4c671fec0ff71eec                                              0.0s
root@dev3:/tmp/foo# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
root@dev3:/tmp/foo#

docker images -a would show the 9e4313589c6c8805197d but that is not the original behavior.

created time in 15 days

issue opened dmcgowan/docker

Parallel migration can result in locked errors

Places like https://github.com/dmcgowan/docker/blob/containerd-integration/daemon/images/image_commit.go#L344 seem to not have protection for parallel requests that share a layer. This may cause failures from an already-locked reference. I haven't gone through all of the possible places where this can exist.

created time in 15 days

push event tonistiigi/docker

Tonis Tiigi

commit sha 483cf2992c8fe5c3ed52c8a5701361cdfa230b3f

vendor-dirty: handle canonical/dangling images Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>


Tonis Tiigi

commit sha 1960722c4e59e812e3d8ebf0ce2861e0b623a806

vendor-dirty: bring back layer refs Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>


Tonis Tiigi

commit sha f6cea13da214fa32b99477b4912fb315ffeb2fc7

builder-next: fix tagging and platform passing Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>


Tonis Tiigi

commit sha 24ff96afc241faaa368ca263236a089785d52079

daemon: support platform for container create Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>


Tonis Tiigi

commit sha 73dcbbca9f76b2c48a7ce8bda97a97aa1a21aa2e

builder-next: index config requests by platform Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>


push time in 15 days

push event tonistiigi/docker

Ziheng Liu

commit sha d7bc994a08a5dc13547b0aaf756d012e1dbba722

awslogs & archive: prevent 2 goroutine leaks in test functions Signed-off-by: Ziheng Liu <lzhfromustc@gmail.com>


Sebastiaan van Stijn

commit sha a0a5ec4c6d18debd1ef94fbcc8096dfb8c561a7b

Integration: skip TestInfoDebug on Windows The test starts a new daemon, but attempts to run it with overlay2, and using a unix:// socket, which doesn't really work on Windows. ``` 00:14:14.623 --- FAIL: TestInfoDebug (0.01s) 00:14:14.623 info_test.go:75: [dbe75bf7729f3] failed to start daemon with arguments [--containerd /var/run/docker/containerd/containerd.sock --data-root D:\gopath\src\github.com\docker\docker\bundles\tmp\TestInfoDebug\dbe75bf7729f3\root --exec-root C:\windows\TEMP\dxr\dbe75bf7729f3 --pidfile D:\gopath\src\github.com\docker\docker\bundles\tmp\TestInfoDebug\dbe75bf7729f3\docker.pid --userland-proxy=true --containerd-namespace dbe75bf7729f3 --containerd-plugins-namespace dbe75bf7729f3p --host unix://C:\windows\TEMP\docker-integration\dbe75bf7729f3.sock --storage-driver overlay2 --debug] : protocol not available 00:14:14.623 === RUN TestInfoInsecureRegistries 00:14:14.623 --- FAIL: TestInfoInsecureRegistries (0.00s) 00:14:14.623 info_test.go:100: [d3c745c16a39c] failed to start daemon with arguments [--containerd /var/run/docker/containerd/containerd.sock --data-root D:\gopath\src\github.com\docker\docker\bundles\tmp\TestInfoInsecureRegistries\d3c745c16a39c\root --exec-root C:\windows\TEMP\dxr\d3c745c16a39c --pidfile D:\gopath\src\github.com\docker\docker\bundles\tmp\TestInfoInsecureRegistries\d3c745c16a39c\docker.pid --userland-proxy=true --containerd-namespace d3c745c16a39c --containerd-plugins-namespace d3c745c16a39cp --host unix://C:\windows\TEMP\docker-integration\d3c745c16a39c.sock --debug --storage-driver overlay2 --insecure-registry=192.168.1.0/24 --insecure-registry=insecurehost.com:5000] : protocol not available 00:14:14.623 === RUN TestInfoRegistryMirrors 00:14:14.623 --- FAIL: TestInfoRegistryMirrors (0.01s) 00:14:14.623 info_test.go:124: [d277126ad0515] failed to start daemon with arguments [--containerd /var/run/docker/containerd/containerd.sock --data-root D:\gopath\src\github.com\docker\docker\bundles\tmp\TestInfoRegistryMirrors\d277126ad0515\root --exec-root C:\windows\TEMP\dxr\d277126ad0515 --pidfile D:\gopath\src\github.com\docker\docker\bundles\tmp\TestInfoRegistryMirrors\d277126ad0515\docker.pid --userland-proxy=true --containerd-namespace d277126ad0515 --containerd-plugins-namespace d277126ad0515p --host unix://C:\windows\TEMP\docker-integration\d277126ad0515.sock --debug --storage-driver overlay2 --registry-mirror=https://192.168.1.2 --registry-mirror=http://registry.mirror.com:5000] : protocol not available ``` Signed-off-by: Sebastiaan van Stijn <github@gone.nl>


Brian Goff

commit sha e7d2d853f6ec333061282183a8a58af47de1888a

Make binary output targets use own build cmd The binary targets now use buildkit to build/output binaries instead of doing it in a DOCKER_RUN_DOCKER container. That change caused issues when trying to call multiple make targets such as `make binary cross` since those targets are updating the variables (with conflicting data) used by the shared `build` prerequisite. This change has those binary output targets call `docker build` (or `buildx build`) directly since that is the action they are performing, and they no longer have any pre-reqs. Signed-off-by: Brian Goff <cpuguy83@gmail.com>


Brian Goff

commit sha c057825cf56850ffb97cae532d0bfa261b4b9a53

Pass VERSION variable to binary build targets. Signed-off-by: Brian Goff <cpuguy83@gmail.com>


Tibor Vass

commit sha 7cb46617fcc1071963ab59d81082eab3e3ef8f9d

Merge pull request #40155 from thaJeztah/skip_testinfodebug Integration: skip TestInfoDebug on Windows


Sebastiaan van Stijn

commit sha 27552ceb15bca544820229e574427d4c1d6ef585

bump containerd/cgroups 5fbad35c2a7e855762d3c60f2e474ffcad0d470a full diff: https://github.com/containerd/cgroups/compare/c4b9ac5c7601384c965b9646fc515884e091ebb9...5fbad35c2a7e855762d3c60f2e474ffcad0d470a - containerd/cgroups#82 Add go module support - containerd/cgroups#96 Move metrics proto package to stats/v1 - containerd/cgroups#97 Allow overriding the default /proc folder in blkioController - containerd/cgroups#98 Allows ignoring memory modules - containerd/cgroups#99 Add Go 1.13 to Travis - containerd/cgroups#100 stats/v1: export per-cgroup stats Signed-off-by: Sebastiaan van Stijn <github@gone.nl>


Sam Whited

commit sha d6a91ca71c655f71c171e375b787c9c8b361c19e

Rename DCO check param in Jenkinsfile Previously it was a negative parameter for skipping the DCO check, but this is different from other checks. It was requested that I change this in #40023 but I'm factoring it out as an unrelated change. Signed-off-by: Sam Whited <sam@samwhited.com>


Tibor Vass

commit sha 9232e1096cbcadc18363fedae1230bb740ef93ca

Merge pull request #40154 from thaJeztah/bump_cgroups bump containerd/cgroups 5fbad35c2a7e855762d3c60f2e474ffcad0d470a


Tõnis Tiigi

commit sha 64fd3dc0d5e0b15246dcf8d2a58baf202cc179bc

Merge pull request #40157 from lzhfromustc/GL_2test awslogs & archive: prevent 2 goroutine leaks in test functions


Sebastiaan van Stijn

commit sha 9a7e96b5b7e97e034ce7bb0f1e7788d1bd881c7f

Rename "v1" to "statsV1" follow-up to 27552ceb15bca544820229e574427d4c1d6ef585, where this was left as a review comment, but the PR was already merged. Signed-off-by: Sebastiaan van Stijn <github@gone.nl>


Sebastiaan van Stijn

commit sha ac7306503d237d548e376a89ab0b899ea1a245b0

Merge pull request #40091 from cpuguy83/40088_explicit_build Make binary output targets use own build cmd


Kirill Kolyshkin

commit sha 7cde98488c2cfd7c3bc5a4a9044047cdab596663

Merge pull request #40159 from SamWhited/jenkins_dco_var_name Rename DCO check param in Jenkinsfile


Kirill Kolyshkin

commit sha 76dbd884d3f1a02dc193305d2ac5824bcd3e4f0f

Merge pull request #40167 from thaJeztah/stats_alias Rename "v1" to "statsV1"


Tonis Tiigi

commit sha fb1601d5ab0abd0456b737c2dabd74503cf35de8

vendor: update buildkit to leases support Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>


Tonis Tiigi

commit sha fe16d95dcd5c5332b55054f2d7aaac08ea9f795f

builder-next: update adapters to new buildkit interfaces Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>


Tonis Tiigi

commit sha f14c9d4df5f572745aee16ad55c385b5d7712de8

builder-next: track layers and graphdrivers with leases Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>


Tonis Tiigi

commit sha f632e2d8d3f9fe11e9bb04d7df1ba3a510d8d648

vendor: update containerd to acdcf13d5eaf0dfe0eaeabe7194a82535549bc2b Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>


Tonis Tiigi

commit sha 21dfcc730b4c06cd7e070f93e0bda250427dc9fb

builder-next: clear temp leases on startup Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>


push time in 16 days

PR opened docker/engine

[19.03] vendor: update buildkit to ff93519ee

https://github.com/moby/buildkit/pull/1243

Signed-off-by: Tonis Tiigi tonistiigi@gmail.com

+10 -24

0 comments

4 changed files

pr created time in 17 days

create branch tonistiigi/docker

branch: 1903-buildkit-update

created branch time in 17 days

started mitchellh/gon

started time in 17 days


issue comment moby/moby

`Copy`ing a directory with symlinks does not follow them, only adds them, leading to broken links.

perhaps you recall the reason we do resolve the final target before copying?

It is resolved if you are copying a symlink, not if you copy a directory containing a symlink. This is just the common behavior of anything copying/extracting files, like cp, rsync, tar etc. If this was not done, it would mutate the original files so that some of the previous cases might not work anymore, or, in a simple case where you have 10 symlinks pointing to the same file, increase your storage requirement 10x. And if you are thinking about just copying the symlink target to its matching location, it wouldn't be possible in a lot of cases just because the matching path can't be constructed in a different destination root, and it would be a security risk for copy to write anything outside the destination directory specified in the command.

kairichard

comment created time in 17 days

PR opened moby/buildkit

[19.03] bugfixes cherry-pick

#1231 #1228

+50 -23

0 comments

4 changed files

pr created time in 20 days

create branch tonistiigi/buildkit

branch: 1903-update

created branch time in 20 days

push event moby/buildkit

Edgar Lee

commit sha bb0ed031112e755b002184e0f829d84c1fe7c2bf

Fix update generated files via docker buildkit Signed-off-by: Edgar Lee <edgarl@netflix.com>


Tõnis Tiigi

commit sha 638df98898a79d32f66c462c4b6854fbf5257417

Merge pull request #1238 from hinshun/docker-gen Fix update generated files via docker buildkit


push time in 20 days

PR merged moby/buildkit

Fix update generated files via docker buildkit

There's a typo in ./hack/update-generated-files which causes the error:

$ make generated-files
./hack/update-generated-files
+ :
+ progressFlag=
+ '[' '' == true ']'
++ awk '$1 == "github.com/gogo/protobuf" { print $2 }' go.mod
+ gogo_version=v1.2.0
+ case $buildmode in
+ echo 'Unsupported build mode: docker-buildkit'
Unsupported build mode: docker-buildkit
+ exit 1
Makefile:35: recipe for target 'generated-files' failed
make: *** [generated-files] Error 1
+1 -1

2 comments

1 changed file

hinshun

pr closed time in 20 days

issue comment moby/buildkit

RUN --mount=type=cache should inherit ownership/permissions from mountpoint

I read it wrong but the issues remain. Both in the sense of multiple mounts colliding and mounts being prepared independently.

Isn't that an existing issue?

Yes, I believe this is the case. That's why it's better for it to be explicit, so the user can set a different id in this case. Actually, it would probably be safer to enforce this by default, so that in the example above the cache would not be shared.

thaJeztah

comment created time in 20 days

issue comment moby/buildkit

[Question] understand of "--platform=$BUILDPLATFORM" in dockerfile

Is there any case the build stage is not fixed to work architecture without specifying the

Yes. In case you want to run binaries for your main architecture even if the user has requested an image for another architecture as a result of the build.

from --platform=$TARGETPLATFORM alpine is always the same as from alpine

from --platform=$BUILDPLATFORM alpine is only the same as from alpine if no --platform is specified for the build or if --platform value matches the native platform of the node.

Note that specifying build --platform does not mean that the build is guaranteed to return an image for that platform. The only thing it does is change the TARGETPLATFORM value, which automatically has an effect on all the FROM commands that do not set the --platform flag themselves.

Eg. if you have a dockerfile:

from --platform=linux/arm64 alpine
copy foo .

building this dockerfile will always result in an arm64 image, even if you build it with docker build --platform=linux/amd64, because it returns an image based on arm64 alpine. (Therefore the example is bad and you should never write a dockerfile like this.) But if you use TARGETPLATFORM/the default there, then the matching alpine version is pulled and your resulting image will match the platform of the build --platform value.

chendave

comment created time in 20 days

push event moby/buildkit

Akihiro Suda

commit sha 6e090f58202d32d4b99314a6dcc4df4e4d305f69

README.md: fix description about local cache Fix #1234 This has been already implemented. Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>


Tõnis Tiigi

commit sha d8a369733fe4c1c5dbf703f5e08ec39988f21726

Merge pull request #1239 from AkihiroSuda/doc-fix-local-cache README.md: fix description about local cache


push time in 20 days

PR merged moby/buildkit

README.md: fix description about local cache

Fix #1234

This has already been implemented.

Signed-off-by: Akihiro Suda akihiro.suda.cz@hco.ntt.co.jp

+3 -5

0 comments

1 changed file

AkihiroSuda

pr closed time in 20 days
