Moby (moby), https://mobyproject.org/: An open framework to assemble specialized container systems without reinventing the wheel.

moby/moby 57156

Moby Project - a collaborative project for the container ecosystem to assemble container-based systems

moby/buildkit 2584

concurrent, cache-efficient, and Dockerfile-agnostic builder toolkit

moby/hyperkit 2556

A toolkit for embedding hypervisor capabilities in your application

moby/libnetwork 1651

networking for containers

moby/datakit 910

Connect processes into powerful data pipelines with a simple git-like filesystem interface

moby/vpnkit 725

A toolkit for embedding VPN capabilities in your application

moby/tool 71

Temporary repository for the moby assembly tool used by the Moby project

moby/libentitlement 60

Entitlements library for high level control of container permissions

moby/ipvs 27

IPVS networking for containers (package derived from moby/libnetwork)

started moby/buildkit

started time in 23 minutes

Pull request review comment moby/moby

refactor some CPU RT and CFS code

 func WithCgroups(daemon *Daemon, c *container.Container) coci.SpecOpts {
 			parentPath = filepath.Clean("/" + parentPath)
 		}
-		if err := daemon.initCgroupsPath(parentPath); err != nil {
-			return fmt.Errorf("linux init cgroups path: %v", err)
+		mnt, root, err := cgroups.FindCgroupMountpointAndRoot("", "cpu")
+		if err != nil {
+			return errors.Wrap(err, "unable to init CPU RT controller")
+		}
+		// When docker is run inside docker, the root is based of the host cgroup.
+		// Should this be handled in runc/libcontainer/cgroups ?
+		if strings.HasPrefix(root, "/docker/") {
+			root = "/"

Here's a similar code in runc: https://github.com/opencontainers/runc/blob/7673bee6bfbc28c8cfbc64165eea29b83e9957f0/libcontainer/cgroups/utils.go#L338-L352

The code initially comes from https://github.com/opencontainers/runc/pull/135, which explains that it is needed for the docker-in-docker case.

In this case, I think, we won't be able to apply CPU RT settings as we don't have access to a parent cgroup, so it does not make sense here.

kolyshkin

comment created time in 27 minutes

fork fanzeyi/moby

Moby Project - a collaborative project for the container ecosystem to assemble container-based systems

https://mobyproject.org/

fork in an hour

started moby/moby

started time in an hour

started moby/moby

started time in 3 hours

pull request comment moby/moby

refactor some CPU RT and CFS code

Oh, perhaps I'm recalling #29846, where I was looking at that 

I think I'm not breaking what was fixed in there, I'm just moving this check earlier in the code.

kolyshkin

comment created time in 3 hours

Pull request review comment moby/moby

refactor some CPU RT and CFS code

 func applyCPUCgroupInfo(info *SysInfo, cgMounts map[string]string) []string {
 
 	info.CPUShares = cgroupEnabled(mountPoint, "cpu.shares")
 	if !info.CPUShares {
-		warnings = append(warnings, "Your kernel does not support cgroup cpu shares")
+		warnings = append(warnings, "Your kernel does not support CPU shares")
 	}
 
-	info.CPUCfsPeriod = cgroupEnabled(mountPoint, "cpu.cfs_period_us")
-	if !info.CPUCfsPeriod {
-		warnings = append(warnings, "Your kernel does not support cgroup cfs period")
+	info.CPUCfs = cgroupEnabled(mountPoint, "cpu.cfs_quota_us")
+	if !info.CPUCfs {
+		warnings = append(warnings, "Your kernel does not support CPU CFS scheduler")
 	}
 
-	info.CPUCfsQuota = cgroupEnabled(mountPoint, "cpu.cfs_quota_us")
-	if !info.CPUCfsQuota {
-		warnings = append(warnings, "Your kernel does not support cgroup cfs quotas")
-	}
-
-	info.CPURealtimePeriod = cgroupEnabled(mountPoint, "cpu.rt_period_us")
-	if !info.CPURealtimePeriod {
-		warnings = append(warnings, "Your kernel does not support cgroup rt period")
-	}
-
-	info.CPURealtimeRuntime = cgroupEnabled(mountPoint, "cpu.rt_runtime_us")

You mean, outside of this repo? I am not sure how to find this out.

If there are external users, that's a bummer, and we have two options

  1. change it, and have the external users adapt their code to the new thing
  2. keep it there for backward compatibility

I would go bold and do 1, since the adaptation seems trivial.

kolyshkin

comment created time in 3 hours

Pull request review comment moby/moby

refactor some CPU RT and CFS code

 func WithCgroups(daemon *Daemon, c *container.Container) coci.SpecOpts {
 			parentPath = filepath.Clean("/" + parentPath)
 		}
-		if err := daemon.initCgroupsPath(parentPath); err != nil {
-			return fmt.Errorf("linux init cgroups path: %v", err)
+		mnt, root, err := cgroups.FindCgroupMountpointAndRoot("", "cpu")
+		if err != nil {
+			return errors.Wrap(err, "unable to init CPU RT controller")
+		}
+		// When docker is run inside docker, the root is based of the host cgroup.
+		// Should this be handled in runc/libcontainer/cgroups ?
+		if strings.HasPrefix(root, "/docker/") {
+			root = "/"

This is a dirty hack in here, and I didn't want to mess with it as it's easy to break things.

Most probably yes, it won't work since the prefix would be system.slice or so.

To be honest, even after looking at all this code for a few months, I still can't figure out why mountinfo's root field is needed anywhere :( and this missing knowledge prevents me from optimizing runc :(

An alternative to the above would be to not use the root field from mountinfo, i.e.

               mnt, err := cgroups.FindCgroupMountpoint("", "cpu")
               if err != nil {
                       return errors.Wrap(err, "unable to init CPU RT controller")
               }
               if err := daemon.initCpuRtController(mnt, parentPath); err != nil {
                       return errors.Wrap(err, "unable to init CPU RT controller")

but I am hesitant to remove it as I don't fully understand why it's needed here.

@AkihiroSuda maybe you can shed some light on it?

kolyshkin

comment created time in 3 hours

pull request comment moby/moby

[19.03] Fix criu armhf build on armv8 agents

Relevant CRIU commit: https://github.com/checkpoint-restore/criu/commit/075f1beaf7d36cb9ea5030e1faab9661c33290ab

I'm not sure what the difference is between armv7-a (in the above commit) and armv7l (here). Otherwise LGTM.

StefanScherer

comment created time in 3 hours

started moby/moby

started time in 3 hours

pull request comment moby/moby

pkg/sysinfo.applyPIDSCgroupInfo: optimize

@kolyshkin do you know how much time we save with this? Wondering if we should backport this (given that the fix itself is tiny and trivial), or not worth the effort?

Up to 0.01s in busy scenarios, much less so on a relatively idle system. If it's a straightforward backport, I'd do it

kolyshkin

comment created time in 3 hours

fork Tomasz69/moby

Moby Project - a collaborative project for the container ecosystem to assemble container-based systems

https://mobyproject.org/

fork in 4 hours

started moby/libnetwork

started time in 5 hours

issue comment moby/moby

Error response from daemon: Get https://registry-1.docker.io/v2/

Resolved by restarting the docker service:

systemctl restart docker

Not a foolproof solution, but it works anyway :yum:

c0nscience

comment created time in 6 hours

started moby/moby

started time in 7 hours

fork alexismarinoiu/moby

Moby Project - a collaborative project for the container ecosystem to assemble container-based systems

https://mobyproject.org/

fork in 8 hours

pull request comment moby/buildkit

Add support for lazily-pulled blobs in cache manager.

For now, I'll probably just have it use whichever helper it happens to find first.

Yes, that should be fine. Digests should provide stability here. E.g. if you have two different image references in the same build today that point to the same digest, the pull also happens only once, and there is no predefined order defining which ref was used.
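A minimal, illustrative sketch (not BuildKit's actual cache-manager code) of the point above: if pulls are keyed by content digest, two references that resolve to the same digest trigger only one pull, regardless of which ref happens to be processed first.

package main

import "fmt"

// puller deduplicates pulls by content digest: refs that resolve to the same
// digest share a single pull. Types and names here are illustrative.
type puller struct {
	pulled map[string]bool // digest -> already pulled
}

func (p *puller) pull(ref, digest string) {
	if p.pulled[digest] {
		fmt.Printf("%s: digest %s already pulled, skipping\n", ref, digest)
		return
	}
	fmt.Printf("%s: pulling digest %s\n", ref, digest)
	p.pulled[digest] = true
}

func main() {
	p := &puller{pulled: map[string]bool{}}
	// Two different refs pointing at the same digest: only one pull happens.
	p.pull("docker.io/library/alpine:3.11", "sha256:abc")
	p.pull("docker.io/library/alpine:latest", "sha256:abc")
}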

sipsma

comment created time in 8 hours

fork rstraining4/moby

Moby Project - a collaborative project for the container ecosystem to assemble container-based systems

https://mobyproject.org/

fork in 9 hours

PR opened moby/moby

daemon.allocateNetwork: include original error in logs area/networking status/2-code-review

When failing to destroy a stale sandbox, we logged that the removal failed, but omitted the original error message.
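A hedged sketch of the change described above (function name and call site are illustrative assumptions, not the daemon's exact code): the point is simply to pass the underlying error to the logger instead of discarding it.

package main

import (
	"errors"

	"github.com/sirupsen/logrus"
)

// removeStaleSandbox illustrates including the original error when logging a
// failed stale-sandbox removal; it is not the actual allocateNetwork code.
func removeStaleSandbox(id string, remove func() error) {
	if err := remove(); err != nil {
		logrus.WithError(err).Warnf("failed to remove stale sandbox %s", id)
	}
}

func main() {
	removeStaleSandbox("abc123", func() error { return errors.New("device or resource busy") })
}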

+1 -1

0 comment

1 changed file

pr created time in 9 hours

PR opened moby/moby

Better selection of DNS server area/networking impact/changelog status/2-code-review
  • fixes / addresses https://github.com/moby/moby/issues/38243 18.09 breaks containers name resolution for non default networks on systems with systemd-resolved
  • fixes / addresses https://github.com/moby/moby/issues/39978 Wrong resolv.conf used on Ubuntu 19 (systemd-resolved enabled)
  • fixes / addresses https://github.com/docker/for-linux/issues/889 Container /etc/resolv.conf does not update when /run/systemd/resolve/resolv.conf changes
  • relates to https://github.com/moby/libnetwork/pull/2385#issuecomment-498326101
  • relates to https://github.com/kubernetes-sigs/kind/issues/1594#issuecomment-629483100

Commit e353e7e3f0ce8eceeff657393cba2876375403fa (https://github.com/moby/moby/pull/37485) updated the selection of the resolv.conf file to use in situations where systemd-resolved is used as a resolver.

If a host uses systemd-resolved, the system's /etc/resolv.conf file is updated to set 127.0.0.53 as DNS, which is the local IP address for systemd-resolved. The DNS servers that are configured by the user will now be stored in /run/systemd/resolve/resolv.conf, and systemd-resolved acts as a forwarding DNS for those.

Originally, Docker copied the DNS servers as configured in /etc/resolv.conf as default DNS servers in containers, which failed to work if systemd-resolved is used (as 127.0.0.53 is not available inside the container's networking namespace). To resolve this, e353e7e3f0ce8eceeff657393cba2876375403fa instead detected if systemd-resolved is in use, and in that case copied the "upstream" DNS servers from the /run/systemd/resolve/resolv.conf configuration.

While this worked for most situations, it had some downsides, among which:

  • we're skipping systemd-resolved altogether, which means that we cannot take advantage of additional functionality it provides (such as per-interface DNS servers)
  • when updating DNS servers in the system's configuration, those changes were not reflected in the container configuration, which could be problematic in "developer" scenarios, when switching between networks.

This patch changes the way we select which resolv.conf to use as a template for the container's resolv.conf (see the sketch after this list):

  • in situations where a custom network is attached to the container, and the embedded DNS is available, we use /etc/resolv.conf unconditionally. If systemd-resolved is used, the embedded DNS forwards external DNS lookups to systemd-resolved, which in turn is responsible for forwarding requests to the external DNS servers configured by the user.
  • if the container is running in "host mode" networking, we also use the DNS servers that are configured in /etc/resolv.conf. In this situation, no embedded DNS server is available, but the container runs in the host's networking namespace and can use the same DNS servers as the host (which could be systemd-resolved or dnsmasq).
  • if the container uses the default (bridge) network, no embedded DNS is available, and the container has its own networking namespace. In this situation we check if systemd-resolved is used, in which case we skip systemd-resolved and configure the upstream DNS servers as DNS for the container. This situation is the same as the current behavior, which means that dynamically switching DNS servers won't be supported for these containers.
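A hypothetical sketch of the selection logic described in the list above; the function and helper names are illustrative assumptions, not moby's actual API.

package main

import (
	"fmt"
	"os"
)

const (
	etcResolvConf     = "/etc/resolv.conf"
	systemdResolvConf = "/run/systemd/resolve/resolv.conf"
)

// usesSystemdResolved approximates the detection step: systemd-resolved keeps
// the upstream servers in /run/systemd/resolve/resolv.conf when it is active.
func usesSystemdResolved() bool {
	_, err := os.Stat(systemdResolvConf)
	return err == nil
}

// selectResolvConf picks the template for the container's resolv.conf.
func selectResolvConf(customNetwork, hostNetwork bool) string {
	switch {
	case customNetwork:
		// Embedded DNS is available; it forwards external lookups to the
		// host's resolver (which may be systemd-resolved).
		return etcResolvConf
	case hostNetwork:
		// The container shares the host's network namespace, so 127.0.0.53
		// is reachable and the host's resolv.conf can be used as-is.
		return etcResolvConf
	default:
		// Default bridge network: 127.0.0.53 is not reachable from the
		// container's namespace, so fall back to the upstream servers.
		if usesSystemdResolved() {
			return systemdResolvConf
		}
		return etcResolvConf
	}
}

func main() {
	fmt.Println(selectResolvConf(false, false))
}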

Signed-off-by: Sebastiaan van Stijn github@gone.nl


+60 -6

0 comment

1 changed file

pr created time in 10 hours

Pull request review comment moby/buildkit

Add support for lazily-pulled blobs in cache manager.

 type ImageWriter struct {
 }
 
 func (ic *ImageWriter) Commit(ctx context.Context, inp exporter.Source, oci bool, compression blobs.CompressionType) (*ocispec.Descriptor, error) {
+	// Un-lazy refs if needed, but don't extract them.
+	// TODO Just do a Copy from one remote provider to another and/or just check
+	// if the remote provider you are pushing to already has the ref and skip
+	// pushing the layer. This can be done by some minor refactoring to only
+	// use ImmutableRef.Info to get data needed to generate diffpairs and then
+	// providing a special ContentStore implementation that uses the local
+	// store if a blob is present but falls back to remote providers set on the
+	// ref otherwise.
+	if inp.Ref != nil {
+		err := inp.Ref.EnsureContentExists(ctx)

Ah okay, totally misunderstood that, will fix. I'll try to add a test to catch that case too.

sipsma

comment created time in 10 hours

PR opened moby/moby

[19.03] Fix criu armhf build on armv8 agents

Signed-off-by: Stefan Scherer stefan.scherer@docker.com

- What I did

This PR fixes the build error in the criu stage when we build static armhf binaries on ARMv8 build machines with 32-bit dockerd/containerd installed.

We normally see an error like this

#39 44.89 include/common/asm/atomic.h:61:2: error: #error ARM architecture version (CONFIG_ARMV*) not set or unsupported.
#39 44.89  #error ARM architecture version (CONFIG_ARMV*) not set or unsupported.
#39 44.89   ^~~~~

This is because the criu build also runs uname -m inside its Makefile, which returns armv8l.

- How I did it

Overwrite the Makefile variable UNAME-M with armv7l to make the arm build happy on these machines.

- How to verify it

  • Use an ARMv8 machine, install and run docker-ce, docker-ce-cli and containerd.io armhf deb packages and run docker and containerd services with setarch linux32 to have an arm32 build machine.
  • Run make binary

- Description for the changelog

- A picture of a cute animal (not mandatory but encouraged)

Screen Shot 2020-05-25 at 5 47 02 PM

+7 -4

0 comment

1 changed file

pr created time in 10 hours

pull request comment moby/moby

refactor some CPU RT and CFS code

Oh, perhaps I'm recalling https://github.com/moby/moby/pull/29846, where I was looking at that 🤔

kolyshkin

comment created time in 10 hours

Pull request review comment moby/moby

refactor some CPU RT and CFS code

 func applyCPUCgroupInfo(info *SysInfo, cgMounts map[string]string) []string {
 
 	info.CPUShares = cgroupEnabled(mountPoint, "cpu.shares")
 	if !info.CPUShares {
-		warnings = append(warnings, "Your kernel does not support cgroup cpu shares")
+		warnings = append(warnings, "Your kernel does not support CPU shares")
 	}
 
-	info.CPUCfsPeriod = cgroupEnabled(mountPoint, "cpu.cfs_period_us")
-	if !info.CPUCfsPeriod {
-		warnings = append(warnings, "Your kernel does not support cgroup cfs period")
+	info.CPUCfs = cgroupEnabled(mountPoint, "cpu.cfs_quota_us")
+	if !info.CPUCfs {
+		warnings = append(warnings, "Your kernel does not support CPU CFS scheduler")
 	}
 
-	info.CPUCfsQuota = cgroupEnabled(mountPoint, "cpu.cfs_quota_us")
-	if !info.CPUCfsQuota {
-		warnings = append(warnings, "Your kernel does not support cgroup cfs quotas")
-	}
-
-	info.CPURealtimePeriod = cgroupEnabled(mountPoint, "cpu.rt_period_us")
-	if !info.CPURealtimePeriod {
-		warnings = append(warnings, "Your kernel does not support cgroup rt period")
-	}
-
-	info.CPURealtimeRuntime = cgroupEnabled(mountPoint, "cpu.rt_runtime_us")

Would there be users using these?

kolyshkin

comment created time in 10 hours

pull request comment moby/moby

refactor some CPU RT and CFS code

I seem to recall there was some "setup" needed for this functionality in https://github.com/moby/moby/pull/23430 (the parent cgroup had to be created so that child containers got a share of it?); I don't recall exactly.
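A hedged sketch of the kind of setup being recalled here, assuming the classic cgroup v1 layout: each level of the parent cgroup path needs a cpu.rt_period_us/cpu.rt_runtime_us budget, because a child cgroup cannot be granted more realtime runtime than its parent. Paths and names are illustrative, not the daemon's exact initCgroupsPath code.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
	"strings"
)

// initCPURealtime walks the parent cgroup path level by level, creating each
// directory and writing the realtime period and runtime budgets into it.
func initCPURealtime(mnt, parentPath string, periodUs, runtimeUs int64) error {
	cur := mnt
	for _, elem := range strings.Split(filepath.Clean(parentPath), "/") {
		if elem == "" {
			continue
		}
		cur = filepath.Join(cur, elem)
		if err := os.MkdirAll(cur, 0o755); err != nil {
			return err
		}
		if err := os.WriteFile(filepath.Join(cur, "cpu.rt_period_us"),
			[]byte(strconv.FormatInt(periodUs, 10)), 0o644); err != nil {
			return err
		}
		if err := os.WriteFile(filepath.Join(cur, "cpu.rt_runtime_us"),
			[]byte(strconv.FormatInt(runtimeUs, 10)), 0o644); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	// Exercise the walk against a throwaway directory instead of a real
	// cgroup mount, just to show the hierarchy being created level by level.
	tmp, _ := os.MkdirTemp("", "cpurt")
	defer os.RemoveAll(tmp)
	fmt.Println(initCPURealtime(tmp, "/docker/rt-test", 1000000, 950000))
}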

kolyshkin

comment created time in 10 hours

Pull request review comment moby/moby

allocateNetwork: fix network sandbox not cleaned up on failure

 func (daemon *Daemon) allocateNetwork(container *container.Container) error {
 			}
 			updateSandboxNetworkSettings(container, sb)
 			defer func() {

Yes, it's very easy to overlook the problem. My eye just fell on it, and I thought: "that doesn't look right!"

thaJeztah

comment created time in 10 hours

started moby/moby

started time in 10 hours

issue comment moby/moby

Dockerfile CMD doesn't understand ENV variables

Write the command without using the args array, i.e. CMD gunicorn --bind 0.0.0.0:$PORT wsgi:app

mikea

comment created time in 11 hours

started moby/moby

started time in 11 hours

fork Tomasz69/buildkit

concurrent, cache-efficient, and Dockerfile-agnostic builder toolkit

https://github.com/moby/moby/issues/34227

fork in 11 hours

started moby/moby

started time in 11 hours

issue comment moby/moby

99% performance loss when using encrypted overlay network in swarm mode

I found the problem and I deleted my comment so that it does not affect this thread.

Updating packages using the default mirrorlist:

  • User-defined network (overlay): ~600 KiB/s
  • User-defined network (bridge): ~400 KiB/s
  • Default bridge/ingress network: ~3600 KiB/s (standalone or service container)

User-defined networks seem to be slower (~600 KiB/s vs ~3600 KiB/s).

Updating packages using a custom mirrorlist: the speed improves to ~91 MiB/s in some cases.

The big problem was the default mirrorlist. I think it is better not to put these details here to keep this thread clean. Yesterday I did some iperf tests on bouygues.iperf.fr and the results were similar. It is strange because today it works well in all networks: 380-390 MiB/s (user-defined and default networks).

At the moment I consider it solved until I see any incident. When you read this message you can delete it so as not to interfere in the thread with absurd comments.

Thank you.

mortensteenrasmussen

comment created time in 11 hours

pull request comment moby/moby

pkg/sysinfo.applyPIDSCgroupInfo: optimize

@kolyshkin do you know how much time we save with this? Wondering if we should backport this (given that the fix itself is tiny and trivial), or not worth the effort?

kolyshkin

comment created time in 12 hours

Pull request review comment moby/moby

remove group name from identity mapping

 func setupRemappedRoot(config *config.Config) (*idtools.IdentityMapping, error)
 		// update remapped root setting now that we have resolved them to actual names
 		config.RemappedRoot = fmt.Sprintf("%s:%s", username, groupname)
 
-		// try with username:groupname, uid:groupname, username:gid, uid:gid,
+		// try with username and uid,
 		// but keep the original error message (err)
-		mappings, err := idtools.NewIdentityMapping(username, groupname)
+		mappings, err := idtools.NewIdentityMapping(username)
 		if err == nil {
 			return mappings, nil

In that case, I think the first method you suggested will be better, i.e. NewIdentityMapping(username) will be called only with a username. The method is guaranteed to return all mappings, irrespective of whether the system uses a uid, a username, or both.
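A minimal sketch (not moby's idtools implementation) of the behaviour described above, assuming the usual name-or-uid:start:count format of /etc/subuid: collect every range whose first field matches either the username or the numeric uid, since the subordinate files may be keyed by either one, or contain lines for both.

package main

import (
	"bufio"
	"fmt"
	"strconv"
	"strings"
)

// subIDRange mirrors one /etc/subuid (or /etc/subgid) entry: start and count.
type subIDRange struct {
	Start, Count int
}

// parseSubIDRanges returns every range keyed by either the username or the uid.
func parseSubIDRanges(contents, username string, uid int) []subIDRange {
	var ranges []subIDRange
	scanner := bufio.NewScanner(strings.NewReader(contents))
	for scanner.Scan() {
		fields := strings.Split(strings.TrimSpace(scanner.Text()), ":")
		if len(fields) != 3 {
			continue
		}
		if fields[0] != username && fields[0] != strconv.Itoa(uid) {
			continue
		}
		start, err1 := strconv.Atoi(fields[1])
		count, err2 := strconv.Atoi(fields[2])
		if err1 != nil || err2 != nil {
			continue
		}
		ranges = append(ranges, subIDRange{Start: start, Count: count})
	}
	return ranges
}

func main() {
	// Hypothetical /etc/subuid with one entry keyed by name and one by uid.
	const subuid = "user1:100000:65536\n1001:200000:65536\n"
	fmt.Println(parseSubIDRanges(subuid, "user1", 1001))
}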

akhilerm

comment created time in 12 hours

issue closed moby/buildkit

Inconsistent caching behavior in rootless Docker

I want to use rootless buildkitd in K8s to build some Dockerfiles, but I am seeing that the caching behavior is inconsistent: when I build the same Dockerfile twice, it will sometimes cache some layers, sometimes cache others, and sometimes cache none.

Here's my pod definition:

apiVersion: v1
kind: Pod
metadata:
  name: tmp
  annotations:
    container.apparmor.security.beta.kubernetes.io/buildkitd: unconfined
    container.seccomp.security.alpha.kubernetes.io/buildkitd: unconfined
spec:
  containers:
    - image: "moby/buildkit:v0.7.1"
      name: client
      command: [""]
      args: [ "sleep", "3000" ]
      resources:
        requests:
          memory: 50Mi
        limits:
          memory: 500Mi
      volumeMounts:
        - name: config-algorithm-docker
          mountPath: /opt/conf/docker/Dockerfile.algorithm
          subPath: Dockerfile.algorithm
        - name: secrets-algorithm-docker
          mountPath: /opt/conf/docker/rclone.conf
          subPath: rclone.conf
        - name: buildkit-socket-volume
          mountPath: /run/buildkit/
    - name: buildkitd
      image: "moby/buildkit:v0.7.1-rootless"
      resources:
        requests:
          memory: 1Gi
        limits:
          memory: 1.5Gi
      args:
      - --oci-worker-no-process-sandbox
      readinessProbe:
        exec:
          command:
          - buildctl 
          - debug 
          - workers
        initialDelaySeconds: 5
        periodSeconds: 30
      livenessProbe:
        exec:
          command:
          - buildctl 
          - debug 
          - workers
        initialDelaySeconds: 5
        periodSeconds: 30
      securityContext:
  # To change UID/GID, you need to rebuild the image
        runAsUser: 1000
        runAsGroup: 1000
      volumeMounts:
        - name: buildkit-cache
          mountPath: /home/user/.local/share/buildkit/
        - name: buildkit-socket-volume
          mountPath: /run/user/1000/buildkit/
  volumes:
    - name: buildkit-cache
      emptyDir: {}
    - name: buildkit-socket-volume
      emptyDir: {}
    - name: secrets-algorithm-docker
      secret:
        secretName: algorithm-manager
        items:
          - key: rclone.conf
            path: rclone.conf
            mode: 0555
    - name: config-algorithm-docker
      configMap:
        name: algorithm-manager
        items:
          - key: Dockerfile.algorithm
            path: Dockerfile.algorithm
            mode: 0555

Dockerfile.algorithm contents:

    ##### JAVA DEPS
    FROM maven:3.6.0-jdk-11 AS build

    ARG remote_repositories
    ARG jar_dependencies
    ARG AWS_ACCESS_KEY_ID
    ARG AWS_SECRET_ACCESS_KEY
    ARG AWS_REGION

    ENV JARS_DIR /dependecies/jars
    ENV POMS_DIR /dependecies/poms

    ENV S3_EXTENSION '<extensions xmlns="http://maven.apache.org/EXTENSIONS/1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" \n\
                                       xsi:schemaLocation="http://maven.apache.org/EXTENSIONS/1.0.0 http://maven.apache.org/xsd/core-extensions-1.0.0.xsd"> \n\
                                         <!-- Repository on AMAZON S3 --> \n\
                                         <extension> \n\
                                             <groupId>com.gkatzioura.maven.cloud</groupId> \n\
                                             <artifactId>s3-storage-wagon</artifactId> \n\
                                             <version>2.3</version> \n\
                                         </extension> \n\
                                     </extensions>'

    RUN mkdir -p $JARS_DIR
    # XXX: the COPY command needs at least one file when using wildcards
    RUN touch $JARS_DIR/.success

    # This is so Maven can download from S3
    RUN if [ -n "$AWS_ACCESS_KEY_ID" ]; then mkdir .mvn ; fi
    RUN if [ -n "$AWS_ACCESS_KEY_ID" ]; then echo $S3_EXTENSION > .mvn/extensions.xml ; fi

    # Download dependencies
    # Plugin doc: https://maven.apache.org/plugins-archives/maven-dependency-plugin-2.8/get-mojo.html
    RUN for artifact in $(echo $jar_dependencies | tr "," "\n"); do \
      mvn -e dependency:get \
      -DremoteRepositories="$remote_repositories" \
      -Dartifact="$artifact" \
      -Ddest=$JARS_DIR \
      ; done

    # Download pom's dependencies & resolve transitive dependencies
    RUN mkdir -p $POMS_DIR
    RUN if [ -n "$AWS_ACCESS_KEY_ID" ]; then mkdir -p $POMS_DIR/.mvn ; fi
    RUN if [ -n "$AWS_ACCESS_KEY_ID" ]; then echo $S3_EXTENSION > $POMS_DIR/.mvn/extensions.xml ; fi
    RUN for artifact in $(echo $jar_dependencies | tr "," "\n"); do \
      mvn -e dependency:get \
      -DremoteRepositories="$remote_repositories" \
      -Dartifact="$artifact:pom" \
      -Ddest=$POMS_DIR \
      ; done

    # Plugin doc: https://maven.apache.org/plugins-archives/maven-dependency-plugin-2.8/copy-dependencies-mojo.html
    RUN if [ ! -z "$(ls -A $POMS_DIR)" ]; then for pomfile in $POMS_DIR/*.pom; do \
      mvn -e dependency:copy-dependencies \
      -f $pomfile -DoutputDirectory=$JARS_DIR \
      -DincludeScope=runtime \
      ; done \
      ; fi

    ##### ALGORITHM FILES
    FROM telefonica/rclone:1.48.0 AS algorithm-files

    COPY rclone.conf /root/.config/rclone/rclone.conf

    ARG path_to_algorithm
    ENV ALGORITHM_DIR  /algorithm/$path_to_algorithm

    RUN mkdir -p $ALGORITHM_DIR
    ARG NOCACHE_LAYER
    RUN echo $NOCACHE_LAYER && rclone copy algorithm-storage:$path_to_algorithm $ALGORITHM_DIR

    ##### ALGORITHM IMG
    FROM telefonica/spark-py:2.4.4-hadoop2.9.1-S3A-WASB-3

    ARG py_dependencies

    RUN for dependency in $(echo $py_dependencies | tr "," "\n"); do \
      python3 -m pip install $dependency --user \
      ; done

    ENV PYTHONPATH $PYTHONPATH:/opt/spark/algorithm

    COPY --from=build --chown=spark:spark /dependecies/jars/* /opt/spark/jars/
    COPY --from=algorithm-files --chown=spark:spark /algorithm /opt/spark/algorithm

Command I run:

buildctl build --frontend=dockerfile.v0 --local context=/opt/conf/docker/ --opt filename=/opt/conf/docker/Dockerfile.algorithm --local dockerfile=. --opt build-arg:NOCACHE_LAYER=0 --opt build-arg:remote_repositories="" --opt build-arg:jar_dependencies="" --opt build-arg:AWS_ACCESS_KEY_ID="" --opt build-arg:AWS_SECRET_ACCESS_KEY="" --opt build-arg:AWS_REGION="" --opt build-arg:path_to_algorithm="" --opt build-arg:NOCACHE_LAYER="" --opt build-arg:py_dependencies=""

Output on first run (nothing is cached, as expected):

[+] Building 340.9s (25/25) FINISHED
 => [internal] load .dockerignore                                                                  0.7s
 => => transferring context: 2B                                                                    0.0s
 => [internal] load build definition from /opt/conf/docker/Dockerfile.algorithm                    0.9s
 => => transferring dockerfile: 3.30kB                                                             0.0s
 => [internal] load metadata for docker.io/telefonica/spark-py:2.4.4-hadoop2.9.1-S3A-WASB-3        1.3s
 => [internal] load metadata for docker.io/library/maven:3.6.0-jdk-11                              1.6s
 => [internal] load metadata for docker.io/telefonica/rclone:1.48.0                                1.2s
 => [algorithm-files 1/4] FROM docker.io/telefonica/rclone:1.48.0@sha256:3b85f5ddf11202b3a20ce1d  47.6s
 => => resolve docker.io/telefonica/rclone:1.48.0@sha256:3b85f5ddf11202b3a20ce1d0374be0e67b6535b8  0.0s
 => => sha256:3b85f5ddf11202b3a20ce1d0374be0e67b6535b89324a5314c73f14ed6cba503 950B / 950B         0.0s
 => => sha256:56006a50897441593e8f2e59d0026a7a0b472c5d14817e91f879b5281e7c3007 1.91kB / 1.91kB     0.0s
 => => sha256:f5bdd32999b54f71d56d38d3447f87706cf4008355abe107761575648b73679e 10.70MB / 10.70MB   3.3s
 => => sha256:5d20c808ce198565ff70b3ed23a991dd49afac45dece63474b27ce6ed036adc6 2.11MB / 2.11MB     3.2s
 => => sha256:d04e7dd7887df9c2269b4cf15ff6ad4bad87485d96f01895cfcd28a847c61b8 301.73kB / 301.73kB  3.4s
 => => sha256:d04e7dd7887df9c2269b4cf15ff6ad4bad87485d96f01895cfcd28a847c61b8 301.73kB / 301.73kB  3.4s
 => => unpacking docker.io/telefonica/rclone:1.48.0@sha256:3b85f5ddf11202b3a20ce1d0374be0e67b653  32.9s
 => [build  1/11] FROM docker.io/library/maven:3.6.0-jdk-11@sha256:6a0430ded2cfaba7e16080f4cc09  183.4s
 => => resolve docker.io/library/maven:3.6.0-jdk-11@sha256:6a0430ded2cfaba7e16080f4cc097c9c65d140  0.0s
 => => sha256:6a0430ded2cfaba7e16080f4cc097c9c65d1406b3b235d0fcfcfd84c354c4177 2.37kB / 2.37kB     0.0s
 => => sha256:4fe5c6d53794c2135ae0ba9d564f1ab50451df64c9b29e45432be81e49f98bc4 3.04kB / 3.04kB     0.0s
 => => sha256:592298276402a8feafb127b57464c4e723f1c8bced39186db4af3e6b90458a50 8.72kB / 8.72kB     0.0s
 => => sha256:a3d516f512e0d5c064be962d0f4ee49d9b8675b5a5acd3f70527ea40d60d766f 247B / 247B         1.6s
 => => sha256:2b5aab290196cd06ffa3c0f2c9de69924abc7789c063daf2d4662bafa0b4481f 358B / 358B         1.8s
 => => sha256:818845f741c457054b344e3108bd8e119c3f1f4d8f48b515a610f797b9a20cb9 131B / 131B         1.7s
 => => sha256:99a9d43f5b1c3593b7eb98ccf10e74bc0390cb227befbef3bf13f27d2a79f7f9 222B / 222B         1.8s
 => => sha256:30b0085be4e7fa5ad8798eecfe41be21d601526fbc1c3130670f7a82debdc7c4 222B / 222B         1.9s
 => => sha256:f05ab768584560f8a900c1864422922f4929ab2563c83cd9200b2e18fff389 318.89MB / 318.89MB  12.9s
 => => sha256:1a97c78dad716eca1d273d3f7f5661d3fa2dcbbefeab64f5690b285bf395d16 892.37kB / 892.37kB  1.9s
 => => sha256:1b2a72d4e03052566e99130108071fc4eca4942c62923e3e5cf19666a23088ef 4.34MB / 4.34MB     2.3s
 => => sha256:d4b7902036fe0cefdfe9ccf0404fe13322ecbd552f132be73d3e840f95538838 10.78MB / 10.78MB   2.1s
 => => sha256:d54db43011fd116b8cb6d9e49e268cee1fa6212f152b30cbfa7f3c4c684427c3 50.07MB / 50.07MB   3.2s
 => => sha256:e79bb959ec00faf01da52437df4fad4537ec669f60455a38ad583ec2b8f00498 45.34MB / 45.34MB   3.7s
 => => sha256:10426a27e05380ee3eb231f53b51bab0c670470ba865a73fb34e0f5c614b27b6 9.09MB / 9.09MB     2.4s
 => => sha256:914cd0a490d88cacd8482b39440469945ba4d703366f46fbea77809a5ad943f6 751B / 751B         2.6s
 => => sha256:a3d516f512e0d5c064be962d0f4ee49d9b8675b5a5acd3f70527ea40d60d766f 247B / 247B         1.6s
 => => sha256:1a97c78dad716eca1d273d3f7f5661d3fa2dcbbefeab64f5690b285bf395d16 892.37kB / 892.37kB  1.9s
 => => sha256:f05ab768584560f8a900c1864422922f4929ab2563c83cd9200b2e18fff389 318.89MB / 318.89MB  12.9s
 => => unpacking docker.io/library/maven:3.6.0-jdk-11@sha256:6a0430ded2cfaba7e16080f4cc097c9c65  148.3s
 => [stage-2 1/4] FROM docker.io/telefonica/spark-py:2.4.4-hadoop2.9.1-S3A-WASB-3@sha256:f53857  243.9s
 => => resolve docker.io/telefonica/spark-py:2.4.4-hadoop2.9.1-S3A-WASB-3@sha256:f538575065f2dcb9  0.0s
 => => sha256:f538575065f2dcb9117bb800e3d27443cdc7303c3bc7a267c6fec6757898793e 6.20kB / 6.20kB     0.0s
 => => sha256:b13819d6db98a8f7b65ce4e94caa00d8bf1744ea67c51df7f39e1cf4a06798b8 11.96kB / 11.96kB   0.0s
 => => sha256:10fdc906c07dc25f1f27a01302d10fa07f119191105204fd16f32e22e07f62 159.28MB / 159.28MB  10.0s
 => => sha256:695a7f1ecd283a4bc9be33abe529aacf4d5bc411b43dfb23d73548792505420 628.01kB / 628.01kB  3.5s
 => => sha256:d4126858affd8eabc5eadf6159824a06a177688c79cd94f3d1b0ff97fa81acad 25.13MB / 25.13MB   4.0s
 => => sha256:73fdf47615a112037a42f679f927976fc32df5d0fb2d56ef153f43b28acc99fd 8.84kB / 8.84kB     3.6s
 => => sha256:f92d8eadf3b05f770e8752c83199922e314baaf847dfa26dde9918e92566992 317.14kB / 317.14kB  3.6s
 => => sha256:ec0d7aac5a1b195dbe835ab1bc426c9d58cb2eb0d5f0d4c9cce31e5e6e7c31e1 6.11kB / 6.11kB     3.7s
 => => sha256:531ece523dff17d99733f33e8b4757b2215b9da0c5fda0558b8a4fbffcaae19a 9.50MB / 9.50MB     4.3s
 => => sha256:f910a506b6cb1dbec766725d70356f695ae2bf2bea6224dbe8c7c6ad4f3664a2 238B / 238B         3.8s
 => => sha256:66711a62b261de9e05c5f44ba4f9bc15ed7e958af374aabd2b2bf2a9fb89034 558.66kB / 558.66kB  4.2s
 => => sha256:1110c8fe40351359da178f83d17b277a17602d835186e16bddca826f0c15df45 1.69kB / 1.69kB     4.4s
 => => sha256:3618c6d6ba335577044eef50533e011f56f7a0cdc103aa9ab1ffdc79aab9cca 801.47kB / 801.47kB  4.5s
 => => sha256:55ca0f37c1aa8f9fa90416e887287bc82e24d92d2b3431cffe087e13f9c7507b 144B / 144B         4.6s
 => => sha256:84e67a9b7798e68443d21f6d578a0306169d233ed024f69f91a00987938696f4 1.65MB / 1.65MB     4.7s
 => => sha256:f80a845ee741f5dbad60e12cfd205100acf9a24c5a063095653f9eeb6220b7d7 1.39kB / 1.39kB     4.8s
 => => sha256:c2274a1a0e2786ee9101b08f76111f9ab8019e368dce1e325d3c284a0ca33397 70.73MB / 70.73MB  12.7s
 => => sha256:a8643f14d652aceacc2d4da79a395de7409c09636f7507b9b8a89189c600d0e 354.40kB / 354.40kB  4.9s
 => => sha256:5f307ac011d29b4e84cb6b09a4cd5671986620bcb95761d6e74ccedfc3d4aa72 52.62MB / 52.62MB  13.5s
 => => sha256:e7c96db7181be991f19a9fb6975cdbbd73c65f4a2681348e63a141a2192a5f10 2.76MB / 2.76MB    10.2s
 => => sha256:a9f3cd3a9512b28022bd142915390b4f7fc53ebd2af101dbe380e5815a7f942f 291B / 291B        10.1s
 => => sha256:81bffe94a638d5d46384c40e6cd23f6a322137ba0f8f7b070f1ac9c36eef04 266.16kB / 266.16kB  10.3s
 => => sha256:c000da1d096e28317d88e328439dd40267357763bdc83077def1a3937c63c6 152.37MB / 152.37MB  17.3s
 => => sha256:297b14375685e29875e85f5d0a199b01dd356e02d63983334bc2727463abbc50 145B / 145B        12.8s
 => => sha256:2a58c2926f02f2f667d8f96bccaf988c33c534b6c8778616b3398013ef5ef9 808.09kB / 808.09kB  13.6s
 => => sha256:327ab1d3222c7328cc34eaae8f440f5927ddfa7db12ef04a85b8cf46453f3e 212.24kB / 212.24kB  13.7s
 => => sha256:5a63a215004637ebba2e2054161ed6c5046487b330c39b091bef3ddd8c68f2bd 95.79kB / 95.79kB  13.8s
 => => sha256:cb39bff6f1ad1fce7b3ff6d0fd04f3629d1f2787c335637b85893c97c57e5e 171.63MB / 171.63MB  20.3s
 => => sha256:4b9558f79e2a2b27b39ab5c5ae33117962523f03cb54b97c9fb50423e6c608 728.72kB / 728.72kB  14.0s
 => => sha256:898c5f337769b382ac6ea094ba3c822d3ee62d8b51b8ae1d36c5d506a9be31 673.63kB / 673.63kB  14.1s
 => => sha256:695a7f1ecd283a4bc9be33abe529aacf4d5bc411b43dfb23d73548792505420 628.01kB / 628.01kB  3.5s
 => => sha256:f92d8eadf3b05f770e8752c83199922e314baaf847dfa26dde9918e92566992 317.14kB / 317.14kB  3.6s
 => => sha256:1110c8fe40351359da178f83d17b277a17602d835186e16bddca826f0c15df45 1.69kB / 1.69kB     4.4s
 => => sha256:e7c96db7181be991f19a9fb6975cdbbd73c65f4a2681348e63a141a2192a5f10 2.76MB / 2.76MB    10.2s
 => => sha256:cb39bff6f1ad1fce7b3ff6d0fd04f3629d1f2787c335637b85893c97c57e5e 171.63MB / 171.63MB  20.3s
 => => unpacking docker.io/telefonica/spark-py:2.4.4-hadoop2.9.1-S3A-WASB-3@sha256:f538575065f2  221.8s
 => [internal] load build context                                                                  0.5s
 => => transferring context: 299B                                                                  0.0s
 => [algorithm-files 2/4] COPY rclone.conf /root/.config/rclone/rclone.conf                        3.5s
 => [algorithm-files 3/4] RUN mkdir -p /algorithm/                                                 2.6s
 => [algorithm-files 4/4] RUN echo  && rclone copy algorithm-storage: /algorithm/                 54.5s
 => [build  2/11] RUN mkdir -p /dependecies/jars                                                   1.7s
 => [build  3/11] RUN touch /dependecies/jars/.success                                             2.4s
 => [build  4/11] RUN if [ -n "" ]; then mkdir .mvn ; fi                                           2.2s
 => [build  5/11] RUN if [ -n "" ]; then echo <extensions xmlns="http://maven.apache.org/EXTENSIO  2.4s
 => [build  6/11] RUN for artifact in $(echo  | tr "," "\n"); do   mvn -e dependency:get   -Dremo  2.4s
 => [build  7/11] RUN mkdir -p /dependecies/poms                                                   7.1s
 => [build  8/11] RUN if [ -n "" ]; then mkdir -p /dependecies/poms/.mvn ; fi                      6.2s
 => [build  9/11] RUN if [ -n "" ]; then echo <extensions xmlns="http://maven.apache.org/EXTENSIO  2.3s
 => [build 10/11] RUN for artifact in $(echo  | tr "," "\n"); do   mvn -e dependency:get   -Dremo  2.5s
 => [build 11/11] RUN if [ ! -z "$(ls -A /dependecies/poms)" ]; then for pomfile in /dependecies/  5.1s
 => [stage-2 2/4] RUN for dependency in $(echo  | tr "," "\n"); do   python3 -m pip install $depe  1.2s
 => [stage-2 3/4] COPY --from=build --chown=spark:spark /dependecies/jars/* /opt/spark/jars/       1.1s
 => [stage-2 4/4] COPY --from=algorithm-files --chown=spark:spark /algorithm /opt/spark/algorith  56.4s

Output on second run:

[+] Building 215.7s (25/25) FINISHED
 => [internal] load .dockerignore                                                                  0.2s
 => => transferring context: 2B                                                                    0.0s
 => [internal] load build definition from /opt/conf/docker/Dockerfile.algorithm                    0.5s
 => => transferring dockerfile: 3.30kB                                                             0.0s
 => [internal] load metadata for docker.io/telefonica/spark-py:2.4.4-hadoop2.9.1-S3A-WASB-3        1.6s
 => [internal] load metadata for docker.io/library/maven:3.6.0-jdk-11                              1.3s
 => [internal] load metadata for docker.io/telefonica/rclone:1.48.0                                0.5s
 => [algorithm-files 1/4] FROM docker.io/telefonica/rclone:1.48.0@sha256:3b85f5ddf11202b3a20ce1d0  0.0s
 => => resolve docker.io/telefonica/rclone:1.48.0@sha256:3b85f5ddf11202b3a20ce1d0374be0e67b6535b8  0.0s
 => [build  1/11] FROM docker.io/library/maven:3.6.0-jdk-11@sha256:6a0430ded2cfaba7e16080f4cc097  40.7s
 => => resolve docker.io/library/maven:3.6.0-jdk-11@sha256:6a0430ded2cfaba7e16080f4cc097c9c65d140  0.0s
 => => sha256:6a0430ded2cfaba7e16080f4cc097c9c65d1406b3b235d0fcfcfd84c354c4177 2.37kB / 2.37kB     0.0s
 => => sha256:4fe5c6d53794c2135ae0ba9d564f1ab50451df64c9b29e45432be81e49f98bc4 3.04kB / 3.04kB     0.0s
 => => sha256:d4b7902036fe0cefdfe9ccf0404fe13322ecbd552f132be73d3e840f95538838 10.78MB / 10.78MB   0.0s
 => => sha256:818845f741c457054b344e3108bd8e119c3f1f4d8f48b515a610f797b9a20cb9 131B / 131B         0.0s
 => => sha256:1a97c78dad716eca1d273d3f7f5661d3fa2dcbbefeab64f5690b285bf395d16 892.37kB / 892.37kB  0.0s
 => => sha256:d54db43011fd116b8cb6d9e49e268cee1fa6212f152b30cbfa7f3c4c684427c3 50.07MB / 50.07MB   0.0s
 => => sha256:1b2a72d4e03052566e99130108071fc4eca4942c62923e3e5cf19666a23088ef 4.34MB / 4.34MB     0.0s
 => => sha256:99a9d43f5b1c3593b7eb98ccf10e74bc0390cb227befbef3bf13f27d2a79f7f9 222B / 222B         0.0s
 => => sha256:2b5aab290196cd06ffa3c0f2c9de69924abc7789c063daf2d4662bafa0b4481f 358B / 358B         0.9s
 => => sha256:592298276402a8feafb127b57464c4e723f1c8bced39186db4af3e6b90458a50 8.72kB / 8.72kB     0.0s
 => => sha256:e79bb959ec00faf01da52437df4fad4537ec669f60455a38ad583ec2b8f00498 45.34MB / 45.34MB   0.0s
 => => sha256:a3d516f512e0d5c064be962d0f4ee49d9b8675b5a5acd3f70527ea40d60d766f 247B / 247B         0.0s
 => => sha256:30b0085be4e7fa5ad8798eecfe41be21d601526fbc1c3130670f7a82debdc7c4 222B / 222B         1.1s
 => => sha256:10426a27e05380ee3eb231f53b51bab0c670470ba865a73fb34e0f5c614b27b6 9.09MB / 9.09MB     1.2s
 => => sha256:914cd0a490d88cacd8482b39440469945ba4d703366f46fbea77809a5ad943f6 751B / 751B         1.3s
 => => sha256:f05ab768584560f8a900c1864422922f4929ab2563c83cd9200b2e18fff389 318.89MB / 318.89MB  10.4s
 => => sha256:30b0085be4e7fa5ad8798eecfe41be21d601526fbc1c3130670f7a82debdc7c4 222B / 222B         1.1s
 => => sha256:f05ab768584560f8a900c1864422922f4929ab2563c83cd9200b2e18fff389 318.89MB / 318.89MB  10.4s
 => => unpacking docker.io/library/maven:3.6.0-jdk-11@sha256:6a0430ded2cfaba7e16080f4cc097c9c65d  29.7s
 => [internal] load build context                                                                  0.4s
 => => transferring context: 299B                                                                  0.0s
 => [stage-2 1/4] FROM docker.io/telefonica/spark-py:2.4.4-hadoop2.9.1-S3A-WASB-3@sha256:f53857  112.1s
 => => resolve docker.io/telefonica/spark-py:2.4.4-hadoop2.9.1-S3A-WASB-3@sha256:f538575065f2dcb9  0.0s
 => => sha256:f538575065f2dcb9117bb800e3d27443cdc7303c3bc7a267c6fec6757898793e 6.20kB / 6.20kB     0.0s
 => => sha256:b13819d6db98a8f7b65ce4e94caa00d8bf1744ea67c51df7f39e1cf4a06798b8 11.96kB / 11.96kB   0.0s
 => => sha256:10fdc906c07dc25f1f27a01302d10fa07f119191105204fd16f32e22e07f62 159.28MB / 159.28MB  10.2s
 => => sha256:3618c6d6ba335577044eef50533e011f56f7a0cdc103aa9ab1ffdc79aab9cca 801.47kB / 801.47kB  2.8s
 => => sha256:e7c96db7181be991f19a9fb6975cdbbd73c65f4a2681348e63a141a2192a5f10 2.76MB / 2.76MB     3.0s
 => => sha256:55ca0f37c1aa8f9fa90416e887287bc82e24d92d2b3431cffe087e13f9c7507b 144B / 144B         3.1s
 => => sha256:d4126858affd8eabc5eadf6159824a06a177688c79cd94f3d1b0ff97fa81acad 25.13MB / 25.13MB   3.8s
 => => sha256:f910a506b6cb1dbec766725d70356f695ae2bf2bea6224dbe8c7c6ad4f3664a2 238B / 238B         3.2s
 => => sha256:f80a845ee741f5dbad60e12cfd205100acf9a24c5a063095653f9eeb6220b7d7 1.39kB / 1.39kB     3.3s
 => => sha256:695a7f1ecd283a4bc9be33abe529aacf4d5bc411b43dfb23d73548792505420 628.01kB / 628.01kB  3.4s
 => => sha256:66711a62b261de9e05c5f44ba4f9bc15ed7e958af374aabd2b2bf2a9fb89034 558.66kB / 558.66kB  4.0s
 => => sha256:f92d8eadf3b05f770e8752c83199922e314baaf847dfa26dde9918e92566992 317.14kB / 317.14kB  4.0s
 => => sha256:2a58c2926f02f2f667d8f96bccaf988c33c534b6c8778616b3398013ef5ef9a 808.09kB / 808.09kB  4.2s
 => => sha256:1110c8fe40351359da178f83d17b277a17602d835186e16bddca826f0c15df45 1.69kB / 1.69kB     4.1s
 => => sha256:898c5f337769b382ac6ea094ba3c822d3ee62d8b51b8ae1d36c5d506a9be31f 673.63kB / 673.63kB  4.3s
 => => sha256:c000da1d096e28317d88e328439dd40267357763bdc83077def1a3937c63c6 152.37MB / 152.37MB  16.9s
 => => sha256:531ece523dff17d99733f33e8b4757b2215b9da0c5fda0558b8a4fbffcaae19a 9.50MB / 9.50MB    10.5s
 => => sha256:84e67a9b7798e68443d21f6d578a0306169d233ed024f69f91a00987938696f4 1.65MB / 1.65MB    10.6s
 => => sha256:c2274a1a0e2786ee9101b08f76111f9ab8019e368dce1e325d3c284a0ca33397 70.73MB / 70.73MB  13.9s
 => => sha256:4b9558f79e2a2b27b39ab5c5ae33117962523f03cb54b97c9fb50423e6c608 728.72kB / 728.72kB  10.7s
 => => sha256:a8643f14d652aceacc2d4da79a395de7409c09636f7507b9b8a89189c600d0 354.40kB / 354.40kB  10.8s
 => => sha256:5a63a215004637ebba2e2054161ed6c5046487b330c39b091bef3ddd8c68f2bd 95.79kB / 95.79kB  10.9s
 => => sha256:81bffe94a638d5d46384c40e6cd23f6a322137ba0f8f7b070f1ac9c36eef04 266.16kB / 266.16kB  11.0s
 => => sha256:73fdf47615a112037a42f679f927976fc32df5d0fb2d56ef153f43b28acc99fd 8.84kB / 8.84kB    10.9s
 => => sha256:327ab1d3222c7328cc34eaae8f440f5927ddfa7db12ef04a85b8cf46453f3e 212.24kB / 212.24kB  11.1s
 => => sha256:a9f3cd3a9512b28022bd142915390b4f7fc53ebd2af101dbe380e5815a7f942f 291B / 291B        11.1s
 => => sha256:5f307ac011d29b4e84cb6b09a4cd5671986620bcb95761d6e74ccedfc3d4aa72 52.62MB / 52.62MB  17.6s
 => => sha256:ec0d7aac5a1b195dbe835ab1bc426c9d58cb2eb0d5f0d4c9cce31e5e6e7c31e1 6.11kB / 6.11kB    14.0s
 => => sha256:297b14375685e29875e85f5d0a199b01dd356e02d63983334bc2727463abbc50 145B / 145B        17.0s
 => => sha256:cb39bff6f1ad1fce7b3ff6d0fd04f3629d1f2787c335637b85893c97c57e5e 171.63MB / 171.63MB  21.4s
 => => sha256:2a58c2926f02f2f667d8f96bccaf988c33c534b6c8778616b3398013ef5ef9a 808.09kB / 808.09kB  4.2s
 => => sha256:10fdc906c07dc25f1f27a01302d10fa07f119191105204fd16f32e22e07f62 159.28MB / 159.28MB  10.2s
 => => unpacking docker.io/telefonica/spark-py:2.4.4-hadoop2.9.1-S3A-WASB-3@sha256:f538575065f2d  89.3s
 => CACHED [algorithm-files 2/4] COPY rclone.conf /root/.config/rclone/rclone.conf                 0.0s
 => CACHED [algorithm-files 3/4] RUN mkdir -p /algorithm/                                          0.0s
 => CACHED [algorithm-files 4/4] RUN echo  && rclone copy algorithm-storage: /algorithm/           0.0s
 => [build  2/11] RUN mkdir -p /dependecies/jars                                                   1.4s
 => [build  3/11] RUN touch /dependecies/jars/.success                                             0.9s
 => [build  4/11] RUN if [ -n "" ]; then mkdir .mvn ; fi                                           0.9s
 => [build  5/11] RUN if [ -n "" ]; then echo <extensions xmlns="http://maven.apache.org/EXTENSIO  0.9s
 => [build  6/11] RUN for artifact in $(echo  | tr "," "\n"); do   mvn -e dependency:get   -Dremo  0.9s
 => [build  7/11] RUN mkdir -p /dependecies/poms                                                   1.0s
 => [build  8/11] RUN if [ -n "" ]; then mkdir -p /dependecies/poms/.mvn ; fi                      0.9s
 => [build  9/11] RUN if [ -n "" ]; then echo <extensions xmlns="http://maven.apache.org/EXTENSIO  1.0s
 => [build 10/11] RUN for artifact in $(echo  | tr "," "\n"); do   mvn -e dependency:get   -Dremo  1.0s
 => [build 11/11] RUN if [ ! -z "$(ls -A /dependecies/poms)" ]; then for pomfile in /dependecies/  0.9s
 => [stage-2 2/4] RUN for dependency in $(echo  | tr "," "\n"); do   python3 -m pip install $depe  0.9s
 => [stage-2 3/4] COPY --from=build --chown=spark:spark /dependecies/jars/* /opt/spark/jars/       0.9s
 => [stage-2 4/4] COPY --from=algorithm-files --chown=spark:spark /algorithm /opt/spark/algorith  46.8s

closed time in 12 hours

edrevo

issue comment moby/buildkit

Inconsistent caching behavior in rootless Docker

Nevermind, it turned out to be a cache size issue. Adding - --oci-worker-gc-keepstorage=20000 fixed it.

It would be nice to have buildkitd emit some kind of log if there are images that don't fit in the cache.

I'll go ahead and close the issue. Sorry for the false alarm!

edrevo

comment created time in 12 hours

Pull request review comment moby/moby

allocateNetwork: fix network sandbox not cleaned up on failure

 func (daemon *Daemon) allocateNetwork(container *container.Container) error {
 			}
 			updateSandboxNetworkSettings(container, sb)
 			defer func() {

The purpose of defer here is to handle the err in the following container.WriteHostConfig(). You are right, @thaJeztah.

thaJeztah

comment created time in 12 hours

push event moby/moby

Kir Kolyshkin

commit sha f02a53d6b9808c87b05f93b586fc5a1441bd64cb

pkg/sysinfo.applyPIDSCgroupInfo: optimize

For some reason, commit 69cf03700fed7 chose not to use information already fetched, and called cgroups.FindCgroupMountpoint() instead. This is not a cheap call, as it has to parse the whole nine yards of /proc/self/mountinfo, and the info which it tries to get (whether the pids controller is present) is already available from cgMounts map.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>

view details

Kir Kolyshkin

commit sha d5da7e53303dc1dd5d7d9062bf04318b129fc383

pkg/sysinfo/sysinfo_linux.go: fix some comments

Some were misleading or vague, some were plain wrong.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>

view details

Sebastiaan van Stijn

commit sha 55f0acd772f8aee62090256553a9e5a3c674d538

Merge pull request #41014 from kolyshkin/sysinfo

pkg/sysinfo.applyPIDSCgroupInfo: optimize

view details

push time in 12 hours

PR merged moby/moby

pkg/sysinfo.applyPIDSCgroupInfo: optimize

For some reason, commit 69cf03700fed7 (PR #18697) chose not to use information already fetched, and called cgroups.FindCgroupMountpoint() instead.

This is not a cheap call, as it has to parse the whole nine yards of /proc/self/mountinfo, and the info which it tries to get (whether the pids controller is present) is already available from cgMounts map.
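A minimal sketch of the optimization described above (field and parameter names follow the PR text, but this is not the exact moby code): check the already-built cgMounts map for the pids controller instead of re-parsing /proc/self/mountinfo.

package main

import "fmt"

// SysInfo is trimmed down to the single field this sketch needs.
type SysInfo struct {
	PidsLimit bool
}

// applyPIDSCgroupInfo consults the cgMounts map the caller already built to
// see whether the pids controller is mounted, avoiding another mountinfo parse.
func applyPIDSCgroupInfo(info *SysInfo, cgMounts map[string]string) []string {
	var warnings []string
	if _, ok := cgMounts["pids"]; !ok {
		warnings = append(warnings, "Unable to find pids cgroup in mounts")
		return warnings
	}
	info.PidsLimit = true
	return warnings
}

func main() {
	info := &SysInfo{}
	warnings := applyPIDSCgroupInfo(info, map[string]string{"pids": "/sys/fs/cgroup/pids"})
	fmt.Println(info.PidsLimit, warnings)
}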

+12 -12

2 comments

1 changed file

kolyshkin

pr closed time in 12 hours

started moby/moby

started time in 12 hours

push event moby/buildkit

Tonis Tiigi

commit sha 1f9599aba3bd5adecba0d112b9845f2943c8408a

llb: move source mapping to llb metadata

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

view details

Tonis Tiigi

commit sha 6073e6cff3775966bc4c96305147be5947af8df1

llb: enable source tracking

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

view details

Tonis Tiigi

commit sha e536302180ab69722632a1c7d1c16d82dbc41741

dockerfile: keep mapping on #syntax error

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

view details

Tonis Tiigi

commit sha 75d64ffb4a02b0655ccc31ab7e4393dec720d146

fix proto indentions

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

view details

Tonis Tiigi

commit sha 90c5e674962c6ed723231bde431926d5d09cc847

client: add source mapping tests

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

view details

Tonis Tiigi

commit sha 6dee7ee0fc323ba460170a18bec718290a4d46d8

dockerfile: add source mapping tests

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

view details

Tonis Tiigi

commit sha d173e3dca8ea2325bbd41a4c5b1877cebb5c8f22

pb: add more comments

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

view details

Edgar Lee

commit sha 7a90a36b4631e47a75dd45f57e68752a9dcfe652

Support multiple source maps and locations for source-level stacktraces

Signed-off-by: Edgar Lee <edgarl@netflix.com>

view details

Edgar Lee

commit sha 59fa9fc9a0957aca156a985107277c18fa5c2ae7

Allow multiple source locations to be added as a constraint

Signed-off-by: Edgar Lee <edgarl@netflix.com>

view details

Edgar Lee

commit sha 7c81e16b8af4175859a4f79997d1ba9e9f6c23f0

Fix duplicate source maps and fix issue preventing multiple locations per source map

Signed-off-by: Edgar Lee <edgarl@netflix.com>

view details

Edgar Lee

commit sha fbee6cccbd1e4aa5c343cf8c97ed5f9835f73cc4

Fix source map test in client_test

Signed-off-by: Edgar Lee <edgarl@netflix.com>

view details

Akihiro Suda

commit sha d6f5e972def2243620d03b37cd5a500eb8849efc

Merge pull request #1494 from tonistiigi/errdefs2

llb: add source tracking support

view details

push time in 12 hours

PR merged moby/buildkit

llb: add source tracking support

Adds the ability to store the original source location in the llb graph. If an error occurs during a build, the source location can be accessed from the error.

@hinshun This uses the nested Definition approach we discussed on slack. PTAL. One of the unexpected side-effects is that ops.proto, where Definition is defined, was using gogo, while grpc.Status uses plain protobuf types. It turns out that if I mix these types with imports, the unmarshaler breaks down on the map keys. So I needed to change errdefs to use gogo as well and add a bunch of hacks to make it work with grpc. Kind of regretting ever using gogo, but there is no way to change it anymore.

@thaJeztah @tiborvass This PR introduces backward-incompatible changes to protobuf definitions. Previous changes that will not work anymore were only merged in master and not under v0.7. So this should be harmless but we need to make sure we don't do moby releases with current master vendored and this PR not vendored.

+2506 -506

6 comments

28 changed files

tonistiigi

pr closed time in 12 hours

issue comment moby/buildkit

Inconsistent caching behavior in rootless Docker

Is this specific to rootless?

edrevo

comment created time in 12 hours

Pull request review comment moby/moby

remove group name from identity mapping

 func setupRemappedRoot(config *config.Config) (*idtools.IdentityMapping, error)
 		// update remapped root setting now that we have resolved them to actual names
 		config.RemappedRoot = fmt.Sprintf("%s:%s", username, groupname)
 
-		// try with username:groupname, uid:groupname, username:gid, uid:gid,
+		// try with username and uid,
 		// but keep the original error message (err)
-		mappings, err := idtools.NewIdentityMapping(username, groupname)
+		mappings, err := idtools.NewIdentityMapping(username)
 		if err == nil {
 			return mappings, nil

nevermind, it can happen. will do the concatenation

akhilerm

comment created time in 12 hours

issue comment moby/moby

99% performance loss when using encrypted overlay network in swarm mode

@mvasi90 I see you're using the Arch Linux build of Docker, which is built by the Arch package maintainers. It's possible that your issue is specific to those builds; could you report it in the Arch Linux issue tracker first?

mortensteenrasmussen

comment created time in 12 hours

Pull request review comment moby/moby

remove group name from identity mapping

 func setupRemappedRoot(config *config.Config) (*idtools.IdentityMapping, error)
 		// update remapped root setting now that we have resolved them to actual names
 		config.RemappedRoot = fmt.Sprintf("%s:%s", username, groupname)
 
-		// try with username:groupname, uid:groupname, username:gid, uid:gid,
+		// try with username and uid,
 		// but keep the original error message (err)
-		mappings, err := idtools.NewIdentityMapping(username, groupname)
+		mappings, err := idtools.NewIdentityMapping(username)
 		if err == nil {
 			return mappings, nil

Suppose user1 (uid: 1001). Can there be entries like:

user1:100000:65536
1001:200000:65536

akhilerm

comment created time in 12 hours

Pull request review comment moby/moby

remove group name from identity mapping

 func setupRemappedRoot(config *config.Config) (*idtools.IdentityMapping, error)
 		// update remapped root setting now that we have resolved them to actual names
 		config.RemappedRoot = fmt.Sprintf("%s:%s", username, groupname)
 
-		// try with username:groupname, uid:groupname, username:gid, uid:gid,
+		// try with username and uid,
 		// but keep the original error message (err)
-		mappings, err := idtools.NewIdentityMapping(username, groupname)
+		mappings, err := idtools.NewIdentityMapping(username)
 		if err == nil {
 			return mappings, nil

there can be multiple lines

akhilerm

comment created time in 12 hours

Pull request review comment moby/moby

remove group name from identity mapping

 func setupRemappedRoot(config *config.Config) (*idtools.IdentityMapping, error)
 		// update remapped root setting now that we have resolved them to actual names
 		config.RemappedRoot = fmt.Sprintf("%s:%s", username, groupname)
 
-		// try with username:groupname, uid:groupname, username:gid, uid:gid,
+		// try with username and uid,
 		// but keep the original error message (err)
-		mappings, err := idtools.NewIdentityMapping(username, groupname)
+		mappings, err := idtools.NewIdentityMapping(username)
 		if err == nil {
 			return mappings, nil

There is no need to concatenate, right? Because there will be only one line corresponding to a username or uid. So if the search by username gives an error (which means there is no entry in the subordinate file), then either the uid will be used or no mapping will be present. There cannot be a case where both username and uid are used together, right?

akhilerm

comment created time in 12 hours

issue opened moby/buildkit

Inconsistent caching behavior in rootless Docker

I want to use rootless buildkitd in K8s to build some Dockerfiles, but I am seeing that the caching behavior is inconsistent: when I build the same Dockerfile twice, it will something cache some layers, and something it will cache others, and sometimes it will cache none.

Here's my pod definition:

apiVersion: v1
kind: Pod
metadata:
  name: tmp
  annotations:
    container.apparmor.security.beta.kubernetes.io/buildkitd: unconfined
    container.seccomp.security.alpha.kubernetes.io/buildkitd: unconfined
spec:
  containers:
    - image: "moby/buildkit:v0.7.1"
      name: client
      command: [""]
      args: [ "sleep", "3000" ]
      resources:
        requests:
          memory: 50Mi
        limits:
          memory: 500Mi
      volumeMounts:
        - name: config-algorithm-docker
          mountPath: /opt/conf/docker/Dockerfile.algorithm
          subPath: Dockerfile.algorithm
        - name: secrets-algorithm-docker
          mountPath: /opt/conf/docker/rclone.conf
          subPath: rclone.conf
        - name: buildkit-socket-volume
          mountPath: /run/buildkit/
    - name: buildkitd
      image: "moby/buildkit:v0.7.1-rootless"
      resources:
        requests:
          memory: 1Gi
        limits:
          memory: 1.5Gi
      args:
      - --oci-worker-no-process-sandbox
      readinessProbe:
        exec:
          command:
          - buildctl 
          - debug 
          - workers
        initialDelaySeconds: 5
        periodSeconds: 30
      livenessProbe:
        exec:
          command:
          - buildctl 
          - debug 
          - workers
        initialDelaySeconds: 5
        periodSeconds: 30
      securityContext:
  # To change UID/GID, you need to rebuild the image
        runAsUser: 1000
        runAsGroup: 1000
      volumeMounts:
        - name: buildkit-cache
          mountPath: /home/user/.local/share/buildkit/
        - name: buildkit-socket-volume
          mountPath: /run/user/1000/buildkit/
  volumes:
    - name: buildkit-cache
      emptyDir: {}
    - name: buildkit-socket-volume
      emptyDir: {}
    - name: secrets-algorithm-docker
      secret:
        secretName: algorithm-manager
        items:
          - key: rclone.conf
            path: rclone.conf
            mode: 0555
    - name: config-algorithm-docker
      configMap:
        name: algorithm-manager
        items:
          - key: Dockerfile.algorithm
            path: Dockerfile.algorithm
            mode: 0555

Dockerfile.algorithm contents:

    ##### JAVA DEPS
    FROM maven:3.6.0-jdk-11 AS build

    ARG remote_repositories
    ARG jar_dependencies
    ARG AWS_ACCESS_KEY_ID
    ARG AWS_SECRET_ACCESS_KEY
    ARG AWS_REGION

    ENV JARS_DIR /dependecies/jars
    ENV POMS_DIR /dependecies/poms

    ENV S3_EXTENSION '<extensions xmlns="http://maven.apache.org/EXTENSIONS/1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" \n\
                                       xsi:schemaLocation="http://maven.apache.org/EXTENSIONS/1.0.0 http://maven.apache.org/xsd/core-extensions-1.0.0.xsd"> \n\
                                         <!-- Repository on AMAZON S3 --> \n\
                                         <extension> \n\
                                             <groupId>com.gkatzioura.maven.cloud</groupId> \n\
                                             <artifactId>s3-storage-wagon</artifactId> \n\
                                             <version>2.3</version> \n\
                                         </extension> \n\
                                     </extensions>'

    RUN mkdir -p $JARS_DIR
    # XXX: copy command needs at least one file using wildcards
    RUN touch $JARS_DIR/.success

    # This is so Maven can download from S3
    RUN if [ -n "$AWS_ACCESS_KEY_ID" ]; then mkdir .mvn ; fi
    RUN if [ -n "$AWS_ACCESS_KEY_ID" ]; then echo $S3_EXTENSION > .mvn/extensions.xml ; fi

    # Download dependencies
    # Plugin doc: https://maven.apache.org/plugins-archives/maven-dependency-plugin-2.8/get-mojo.html
    RUN for artifact in $(echo $jar_dependencies | tr "," "\n"); do \
      mvn -e dependency:get \
      -DremoteRepositories="$remote_repositories" \
      -Dartifact="$artifact" \
      -Ddest=$JARS_DIR \
      ; done

    # Download pom's dependencies & resolve transitive dependencies
    RUN mkdir -p $POMS_DIR
    RUN if [ -n "$AWS_ACCESS_KEY_ID" ]; then mkdir -p $POMS_DIR/.mvn ; fi
    RUN if [ -n "$AWS_ACCESS_KEY_ID" ]; then echo $S3_EXTENSION > $POMS_DIR/.mvn/extensions.xml ; fi
    RUN for artifact in $(echo $jar_dependencies | tr "," "\n"); do \
      mvn -e dependency:get \
      -DremoteRepositories="$remote_repositories" \
      -Dartifact="$artifact:pom" \
      -Ddest=$POMS_DIR \
      ; done

    # Plugin doc: https://maven.apache.org/plugins-archives/maven-dependency-plugin-2.8/copy-dependencies-mojo.html
    RUN if [ ! -z "$(ls -A $POMS_DIR)" ]; then for pomfile in $POMS_DIR/*.pom; do \
      mvn -e dependency:copy-dependencies \
      -f $pomfile -DoutputDirectory=$JARS_DIR \
      -DincludeScope=runtime \
      ; done \
      ; fi

    ##### ALGORITHM FILES
    FROM telefonica/rclone:1.48.0 AS algorithm-files

    COPY rclone.conf /root/.config/rclone/rclone.conf

    ARG path_to_algorithm
    ENV ALGORITHM_DIR  /algorithm/$path_to_algorithm

    RUN mkdir -p $ALGORITHM_DIR
    ARG NOCACHE_LAYER
    RUN echo $NOCACHE_LAYER && rclone copy algorithm-storage:$path_to_algorithm $ALGORITHM_DIR

    ##### ALGORITHM IMG
    FROM telefonica/spark-py:2.4.4-hadoop2.9.1-S3A-WASB-3

    ARG py_dependencies

    RUN for dependency in $(echo $py_dependencies | tr "," "\n"); do \
      python3 -m pip install $dependency --user \
      ; done

    ENV PYTHONPATH $PYTHONPATH:/opt/spark/algorithm

    COPY --from=build --chown=spark:spark /dependecies/jars/* /opt/spark/jars/
    COPY --from=algorithm-files --chown=spark:spark /algorithm /opt/spark/algorithm

Command I run:

buildctl build --frontend=dockerfile.v0 --local context=/opt/conf/docker/ --opt filename=/opt/conf/docker/Dockerfile.algorithm --local dockerfile=. --opt build-arg:NOCACHE_LAYER=0 --opt build-arg:remote_repositories="" --opt build-arg:jar_dependencies="" --opt build-arg:AWS_ACCESS_KEY_ID="" --opt build-arg:AWS_SECRET_ACCESS_KEY="" --opt build-arg:AWS_REGION="" --opt build-arg:path_to_algorithm="" --opt build-arg:NOCACHE_LAYER="" --opt build-arg:py_dependencies=""

Output on first run (nothing is cached, as expected):

[+] Building 340.9s (25/25) FINISHED
 => [internal] load .dockerignore                                                                  0.7s
 => => transferring context: 2B                                                                    0.0s
 => [internal] load build definition from /opt/conf/docker/Dockerfile.algorithm                    0.9s
 => => transferring dockerfile: 3.30kB                                                             0.0s
 => [internal] load metadata for docker.io/telefonica/spark-py:2.4.4-hadoop2.9.1-S3A-WASB-3        1.3s
 => [internal] load metadata for docker.io/library/maven:3.6.0-jdk-11                              1.6s
 => [internal] load metadata for docker.io/telefonica/rclone:1.48.0                                1.2s
 => [algorithm-files 1/4] FROM docker.io/telefonica/rclone:1.48.0@sha256:3b85f5ddf11202b3a20ce1d  47.6s
 => => resolve docker.io/telefonica/rclone:1.48.0@sha256:3b85f5ddf11202b3a20ce1d0374be0e67b6535b8  0.0s
 => => sha256:3b85f5ddf11202b3a20ce1d0374be0e67b6535b89324a5314c73f14ed6cba503 950B / 950B         0.0s
 => => sha256:56006a50897441593e8f2e59d0026a7a0b472c5d14817e91f879b5281e7c3007 1.91kB / 1.91kB     0.0s
 => => sha256:f5bdd32999b54f71d56d38d3447f87706cf4008355abe107761575648b73679e 10.70MB / 10.70MB   3.3s
 => => sha256:5d20c808ce198565ff70b3ed23a991dd49afac45dece63474b27ce6ed036adc6 2.11MB / 2.11MB     3.2s
 => => sha256:d04e7dd7887df9c2269b4cf15ff6ad4bad87485d96f01895cfcd28a847c61b8 301.73kB / 301.73kB  3.4s
 => => sha256:d04e7dd7887df9c2269b4cf15ff6ad4bad87485d96f01895cfcd28a847c61b8 301.73kB / 301.73kB  3.4s
 => => unpacking docker.io/telefonica/rclone:1.48.0@sha256:3b85f5ddf11202b3a20ce1d0374be0e67b653  32.9s
 => [build  1/11] FROM docker.io/library/maven:3.6.0-jdk-11@sha256:6a0430ded2cfaba7e16080f4cc09  183.4s
 => => resolve docker.io/library/maven:3.6.0-jdk-11@sha256:6a0430ded2cfaba7e16080f4cc097c9c65d140  0.0s
 => => sha256:6a0430ded2cfaba7e16080f4cc097c9c65d1406b3b235d0fcfcfd84c354c4177 2.37kB / 2.37kB     0.0s
 => => sha256:4fe5c6d53794c2135ae0ba9d564f1ab50451df64c9b29e45432be81e49f98bc4 3.04kB / 3.04kB     0.0s
 => => sha256:592298276402a8feafb127b57464c4e723f1c8bced39186db4af3e6b90458a50 8.72kB / 8.72kB     0.0s
 => => sha256:a3d516f512e0d5c064be962d0f4ee49d9b8675b5a5acd3f70527ea40d60d766f 247B / 247B         1.6s
 => => sha256:2b5aab290196cd06ffa3c0f2c9de69924abc7789c063daf2d4662bafa0b4481f 358B / 358B         1.8s
 => => sha256:818845f741c457054b344e3108bd8e119c3f1f4d8f48b515a610f797b9a20cb9 131B / 131B         1.7s
 => => sha256:99a9d43f5b1c3593b7eb98ccf10e74bc0390cb227befbef3bf13f27d2a79f7f9 222B / 222B         1.8s
 => => sha256:30b0085be4e7fa5ad8798eecfe41be21d601526fbc1c3130670f7a82debdc7c4 222B / 222B         1.9s
 => => sha256:f05ab768584560f8a900c1864422922f4929ab2563c83cd9200b2e18fff389 318.89MB / 318.89MB  12.9s
 => => sha256:1a97c78dad716eca1d273d3f7f5661d3fa2dcbbefeab64f5690b285bf395d16 892.37kB / 892.37kB  1.9s
 => => sha256:1b2a72d4e03052566e99130108071fc4eca4942c62923e3e5cf19666a23088ef 4.34MB / 4.34MB     2.3s
 => => sha256:d4b7902036fe0cefdfe9ccf0404fe13322ecbd552f132be73d3e840f95538838 10.78MB / 10.78MB   2.1s
 => => sha256:d54db43011fd116b8cb6d9e49e268cee1fa6212f152b30cbfa7f3c4c684427c3 50.07MB / 50.07MB   3.2s
 => => sha256:e79bb959ec00faf01da52437df4fad4537ec669f60455a38ad583ec2b8f00498 45.34MB / 45.34MB   3.7s
 => => sha256:10426a27e05380ee3eb231f53b51bab0c670470ba865a73fb34e0f5c614b27b6 9.09MB / 9.09MB     2.4s
 => => sha256:914cd0a490d88cacd8482b39440469945ba4d703366f46fbea77809a5ad943f6 751B / 751B         2.6s
 => => sha256:a3d516f512e0d5c064be962d0f4ee49d9b8675b5a5acd3f70527ea40d60d766f 247B / 247B         1.6s
 => => sha256:1a97c78dad716eca1d273d3f7f5661d3fa2dcbbefeab64f5690b285bf395d16 892.37kB / 892.37kB  1.9s
 => => sha256:f05ab768584560f8a900c1864422922f4929ab2563c83cd9200b2e18fff389 318.89MB / 318.89MB  12.9s
 => => unpacking docker.io/library/maven:3.6.0-jdk-11@sha256:6a0430ded2cfaba7e16080f4cc097c9c65  148.3s
 => [stage-2 1/4] FROM docker.io/telefonica/spark-py:2.4.4-hadoop2.9.1-S3A-WASB-3@sha256:f53857  243.9s
 => => resolve docker.io/telefonica/spark-py:2.4.4-hadoop2.9.1-S3A-WASB-3@sha256:f538575065f2dcb9  0.0s
 => => sha256:f538575065f2dcb9117bb800e3d27443cdc7303c3bc7a267c6fec6757898793e 6.20kB / 6.20kB     0.0s
 => => sha256:b13819d6db98a8f7b65ce4e94caa00d8bf1744ea67c51df7f39e1cf4a06798b8 11.96kB / 11.96kB   0.0s
 => => sha256:10fdc906c07dc25f1f27a01302d10fa07f119191105204fd16f32e22e07f62 159.28MB / 159.28MB  10.0s
 => => sha256:695a7f1ecd283a4bc9be33abe529aacf4d5bc411b43dfb23d73548792505420 628.01kB / 628.01kB  3.5s
 => => sha256:d4126858affd8eabc5eadf6159824a06a177688c79cd94f3d1b0ff97fa81acad 25.13MB / 25.13MB   4.0s
 => => sha256:73fdf47615a112037a42f679f927976fc32df5d0fb2d56ef153f43b28acc99fd 8.84kB / 8.84kB     3.6s
 => => sha256:f92d8eadf3b05f770e8752c83199922e314baaf847dfa26dde9918e92566992 317.14kB / 317.14kB  3.6s
 => => sha256:ec0d7aac5a1b195dbe835ab1bc426c9d58cb2eb0d5f0d4c9cce31e5e6e7c31e1 6.11kB / 6.11kB     3.7s
 => => sha256:531ece523dff17d99733f33e8b4757b2215b9da0c5fda0558b8a4fbffcaae19a 9.50MB / 9.50MB     4.3s
 => => sha256:f910a506b6cb1dbec766725d70356f695ae2bf2bea6224dbe8c7c6ad4f3664a2 238B / 238B         3.8s
 => => sha256:66711a62b261de9e05c5f44ba4f9bc15ed7e958af374aabd2b2bf2a9fb89034 558.66kB / 558.66kB  4.2s
 => => sha256:1110c8fe40351359da178f83d17b277a17602d835186e16bddca826f0c15df45 1.69kB / 1.69kB     4.4s
 => => sha256:3618c6d6ba335577044eef50533e011f56f7a0cdc103aa9ab1ffdc79aab9cca 801.47kB / 801.47kB  4.5s
 => => sha256:55ca0f37c1aa8f9fa90416e887287bc82e24d92d2b3431cffe087e13f9c7507b 144B / 144B         4.6s
 => => sha256:84e67a9b7798e68443d21f6d578a0306169d233ed024f69f91a00987938696f4 1.65MB / 1.65MB     4.7s
 => => sha256:f80a845ee741f5dbad60e12cfd205100acf9a24c5a063095653f9eeb6220b7d7 1.39kB / 1.39kB     4.8s
 => => sha256:c2274a1a0e2786ee9101b08f76111f9ab8019e368dce1e325d3c284a0ca33397 70.73MB / 70.73MB  12.7s
 => => sha256:a8643f14d652aceacc2d4da79a395de7409c09636f7507b9b8a89189c600d0e 354.40kB / 354.40kB  4.9s
 => => sha256:5f307ac011d29b4e84cb6b09a4cd5671986620bcb95761d6e74ccedfc3d4aa72 52.62MB / 52.62MB  13.5s
 => => sha256:e7c96db7181be991f19a9fb6975cdbbd73c65f4a2681348e63a141a2192a5f10 2.76MB / 2.76MB    10.2s
 => => sha256:a9f3cd3a9512b28022bd142915390b4f7fc53ebd2af101dbe380e5815a7f942f 291B / 291B        10.1s
 => => sha256:81bffe94a638d5d46384c40e6cd23f6a322137ba0f8f7b070f1ac9c36eef04 266.16kB / 266.16kB  10.3s
 => => sha256:c000da1d096e28317d88e328439dd40267357763bdc83077def1a3937c63c6 152.37MB / 152.37MB  17.3s
 => => sha256:297b14375685e29875e85f5d0a199b01dd356e02d63983334bc2727463abbc50 145B / 145B        12.8s
 => => sha256:2a58c2926f02f2f667d8f96bccaf988c33c534b6c8778616b3398013ef5ef9 808.09kB / 808.09kB  13.6s
 => => sha256:327ab1d3222c7328cc34eaae8f440f5927ddfa7db12ef04a85b8cf46453f3e 212.24kB / 212.24kB  13.7s
 => => sha256:5a63a215004637ebba2e2054161ed6c5046487b330c39b091bef3ddd8c68f2bd 95.79kB / 95.79kB  13.8s
 => => sha256:cb39bff6f1ad1fce7b3ff6d0fd04f3629d1f2787c335637b85893c97c57e5e 171.63MB / 171.63MB  20.3s
 => => sha256:4b9558f79e2a2b27b39ab5c5ae33117962523f03cb54b97c9fb50423e6c608 728.72kB / 728.72kB  14.0s
 => => sha256:898c5f337769b382ac6ea094ba3c822d3ee62d8b51b8ae1d36c5d506a9be31 673.63kB / 673.63kB  14.1s
 => => sha256:695a7f1ecd283a4bc9be33abe529aacf4d5bc411b43dfb23d73548792505420 628.01kB / 628.01kB  3.5s
 => => sha256:f92d8eadf3b05f770e8752c83199922e314baaf847dfa26dde9918e92566992 317.14kB / 317.14kB  3.6s
 => => sha256:1110c8fe40351359da178f83d17b277a17602d835186e16bddca826f0c15df45 1.69kB / 1.69kB     4.4s
 => => sha256:e7c96db7181be991f19a9fb6975cdbbd73c65f4a2681348e63a141a2192a5f10 2.76MB / 2.76MB    10.2s
 => => sha256:cb39bff6f1ad1fce7b3ff6d0fd04f3629d1f2787c335637b85893c97c57e5e 171.63MB / 171.63MB  20.3s
 => => unpacking docker.io/telefonica/spark-py:2.4.4-hadoop2.9.1-S3A-WASB-3@sha256:f538575065f2  221.8s
 => [internal] load build context                                                                  0.5s
 => => transferring context: 299B                                                                  0.0s
 => [algorithm-files 2/4] COPY rclone.conf /root/.config/rclone/rclone.conf                        3.5s
 => [algorithm-files 3/4] RUN mkdir -p /algorithm/                                                 2.6s
 => [algorithm-files 4/4] RUN echo  && rclone copy algorithm-storage: /algorithm/                 54.5s
 => [build  2/11] RUN mkdir -p /dependecies/jars                                                   1.7s
 => [build  3/11] RUN touch /dependecies/jars/.success                                             2.4s
 => [build  4/11] RUN if [ -n "" ]; then mkdir .mvn ; fi                                           2.2s
 => [build  5/11] RUN if [ -n "" ]; then echo <extensions xmlns="http://maven.apache.org/EXTENSIO  2.4s
 => [build  6/11] RUN for artifact in $(echo  | tr "," "\n"); do   mvn -e dependency:get   -Dremo  2.4s
 => [build  7/11] RUN mkdir -p /dependecies/poms                                                   7.1s
 => [build  8/11] RUN if [ -n "" ]; then mkdir -p /dependecies/poms/.mvn ; fi                      6.2s
 => [build  9/11] RUN if [ -n "" ]; then echo <extensions xmlns="http://maven.apache.org/EXTENSIO  2.3s
 => [build 10/11] RUN for artifact in $(echo  | tr "," "\n"); do   mvn -e dependency:get   -Dremo  2.5s
 => [build 11/11] RUN if [ ! -z "$(ls -A /dependecies/poms)" ]; then for pomfile in /dependecies/  5.1s
 => [stage-2 2/4] RUN for dependency in $(echo  | tr "," "\n"); do   python3 -m pip install $depe  1.2s
 => [stage-2 3/4] COPY --from=build --chown=spark:spark /dependecies/jars/* /opt/spark/jars/       1.1s
 => [stage-2 4/4] COPY --from=algorithm-files --chown=spark:spark /algorithm /opt/spark/algorith  56.4s

Output on second run:

[+] Building 215.7s (25/25) FINISHED
 => [internal] load .dockerignore                                                                  0.2s
 => => transferring context: 2B                                                                    0.0s
 => [internal] load build definition from /opt/conf/docker/Dockerfile.algorithm                    0.5s
 => => transferring dockerfile: 3.30kB                                                             0.0s
 => [internal] load metadata for docker.io/telefonica/spark-py:2.4.4-hadoop2.9.1-S3A-WASB-3        1.6s
 => [internal] load metadata for docker.io/library/maven:3.6.0-jdk-11                              1.3s
 => [internal] load metadata for docker.io/telefonica/rclone:1.48.0                                0.5s
 => [algorithm-files 1/4] FROM docker.io/telefonica/rclone:1.48.0@sha256:3b85f5ddf11202b3a20ce1d0  0.0s
 => => resolve docker.io/telefonica/rclone:1.48.0@sha256:3b85f5ddf11202b3a20ce1d0374be0e67b6535b8  0.0s
 => [build  1/11] FROM docker.io/library/maven:3.6.0-jdk-11@sha256:6a0430ded2cfaba7e16080f4cc097  40.7s
 => => resolve docker.io/library/maven:3.6.0-jdk-11@sha256:6a0430ded2cfaba7e16080f4cc097c9c65d140  0.0s
 => => sha256:6a0430ded2cfaba7e16080f4cc097c9c65d1406b3b235d0fcfcfd84c354c4177 2.37kB / 2.37kB     0.0s
 => => sha256:4fe5c6d53794c2135ae0ba9d564f1ab50451df64c9b29e45432be81e49f98bc4 3.04kB / 3.04kB     0.0s
 => => sha256:d4b7902036fe0cefdfe9ccf0404fe13322ecbd552f132be73d3e840f95538838 10.78MB / 10.78MB   0.0s
 => => sha256:818845f741c457054b344e3108bd8e119c3f1f4d8f48b515a610f797b9a20cb9 131B / 131B         0.0s
 => => sha256:1a97c78dad716eca1d273d3f7f5661d3fa2dcbbefeab64f5690b285bf395d16 892.37kB / 892.37kB  0.0s
 => => sha256:d54db43011fd116b8cb6d9e49e268cee1fa6212f152b30cbfa7f3c4c684427c3 50.07MB / 50.07MB   0.0s
 => => sha256:1b2a72d4e03052566e99130108071fc4eca4942c62923e3e5cf19666a23088ef 4.34MB / 4.34MB     0.0s
 => => sha256:99a9d43f5b1c3593b7eb98ccf10e74bc0390cb227befbef3bf13f27d2a79f7f9 222B / 222B         0.0s
 => => sha256:2b5aab290196cd06ffa3c0f2c9de69924abc7789c063daf2d4662bafa0b4481f 358B / 358B         0.9s
 => => sha256:592298276402a8feafb127b57464c4e723f1c8bced39186db4af3e6b90458a50 8.72kB / 8.72kB     0.0s
 => => sha256:e79bb959ec00faf01da52437df4fad4537ec669f60455a38ad583ec2b8f00498 45.34MB / 45.34MB   0.0s
 => => sha256:a3d516f512e0d5c064be962d0f4ee49d9b8675b5a5acd3f70527ea40d60d766f 247B / 247B         0.0s
 => => sha256:30b0085be4e7fa5ad8798eecfe41be21d601526fbc1c3130670f7a82debdc7c4 222B / 222B         1.1s
 => => sha256:10426a27e05380ee3eb231f53b51bab0c670470ba865a73fb34e0f5c614b27b6 9.09MB / 9.09MB     1.2s
 => => sha256:914cd0a490d88cacd8482b39440469945ba4d703366f46fbea77809a5ad943f6 751B / 751B         1.3s
 => => sha256:f05ab768584560f8a900c1864422922f4929ab2563c83cd9200b2e18fff389 318.89MB / 318.89MB  10.4s
 => => sha256:30b0085be4e7fa5ad8798eecfe41be21d601526fbc1c3130670f7a82debdc7c4 222B / 222B         1.1s
 => => sha256:f05ab768584560f8a900c1864422922f4929ab2563c83cd9200b2e18fff389 318.89MB / 318.89MB  10.4s
 => => unpacking docker.io/library/maven:3.6.0-jdk-11@sha256:6a0430ded2cfaba7e16080f4cc097c9c65d  29.7s
 => [internal] load build context                                                                  0.4s
 => => transferring context: 299B                                                                  0.0s
 => [stage-2 1/4] FROM docker.io/telefonica/spark-py:2.4.4-hadoop2.9.1-S3A-WASB-3@sha256:f53857  112.1s
 => => resolve docker.io/telefonica/spark-py:2.4.4-hadoop2.9.1-S3A-WASB-3@sha256:f538575065f2dcb9  0.0s
 => => sha256:f538575065f2dcb9117bb800e3d27443cdc7303c3bc7a267c6fec6757898793e 6.20kB / 6.20kB     0.0s
 => => sha256:b13819d6db98a8f7b65ce4e94caa00d8bf1744ea67c51df7f39e1cf4a06798b8 11.96kB / 11.96kB   0.0s
 => => sha256:10fdc906c07dc25f1f27a01302d10fa07f119191105204fd16f32e22e07f62 159.28MB / 159.28MB  10.2s
 => => sha256:3618c6d6ba335577044eef50533e011f56f7a0cdc103aa9ab1ffdc79aab9cca 801.47kB / 801.47kB  2.8s
 => => sha256:e7c96db7181be991f19a9fb6975cdbbd73c65f4a2681348e63a141a2192a5f10 2.76MB / 2.76MB     3.0s
 => => sha256:55ca0f37c1aa8f9fa90416e887287bc82e24d92d2b3431cffe087e13f9c7507b 144B / 144B         3.1s
 => => sha256:d4126858affd8eabc5eadf6159824a06a177688c79cd94f3d1b0ff97fa81acad 25.13MB / 25.13MB   3.8s
 => => sha256:f910a506b6cb1dbec766725d70356f695ae2bf2bea6224dbe8c7c6ad4f3664a2 238B / 238B         3.2s
 => => sha256:f80a845ee741f5dbad60e12cfd205100acf9a24c5a063095653f9eeb6220b7d7 1.39kB / 1.39kB     3.3s
 => => sha256:695a7f1ecd283a4bc9be33abe529aacf4d5bc411b43dfb23d73548792505420 628.01kB / 628.01kB  3.4s
 => => sha256:66711a62b261de9e05c5f44ba4f9bc15ed7e958af374aabd2b2bf2a9fb89034 558.66kB / 558.66kB  4.0s
 => => sha256:f92d8eadf3b05f770e8752c83199922e314baaf847dfa26dde9918e92566992 317.14kB / 317.14kB  4.0s
 => => sha256:2a58c2926f02f2f667d8f96bccaf988c33c534b6c8778616b3398013ef5ef9a 808.09kB / 808.09kB  4.2s
 => => sha256:1110c8fe40351359da178f83d17b277a17602d835186e16bddca826f0c15df45 1.69kB / 1.69kB     4.1s
 => => sha256:898c5f337769b382ac6ea094ba3c822d3ee62d8b51b8ae1d36c5d506a9be31f 673.63kB / 673.63kB  4.3s
 => => sha256:c000da1d096e28317d88e328439dd40267357763bdc83077def1a3937c63c6 152.37MB / 152.37MB  16.9s
 => => sha256:531ece523dff17d99733f33e8b4757b2215b9da0c5fda0558b8a4fbffcaae19a 9.50MB / 9.50MB    10.5s
 => => sha256:84e67a9b7798e68443d21f6d578a0306169d233ed024f69f91a00987938696f4 1.65MB / 1.65MB    10.6s
 => => sha256:c2274a1a0e2786ee9101b08f76111f9ab8019e368dce1e325d3c284a0ca33397 70.73MB / 70.73MB  13.9s
 => => sha256:4b9558f79e2a2b27b39ab5c5ae33117962523f03cb54b97c9fb50423e6c608 728.72kB / 728.72kB  10.7s
 => => sha256:a8643f14d652aceacc2d4da79a395de7409c09636f7507b9b8a89189c600d0 354.40kB / 354.40kB  10.8s
 => => sha256:5a63a215004637ebba2e2054161ed6c5046487b330c39b091bef3ddd8c68f2bd 95.79kB / 95.79kB  10.9s
 => => sha256:81bffe94a638d5d46384c40e6cd23f6a322137ba0f8f7b070f1ac9c36eef04 266.16kB / 266.16kB  11.0s
 => => sha256:73fdf47615a112037a42f679f927976fc32df5d0fb2d56ef153f43b28acc99fd 8.84kB / 8.84kB    10.9s
 => => sha256:327ab1d3222c7328cc34eaae8f440f5927ddfa7db12ef04a85b8cf46453f3e 212.24kB / 212.24kB  11.1s
 => => sha256:a9f3cd3a9512b28022bd142915390b4f7fc53ebd2af101dbe380e5815a7f942f 291B / 291B        11.1s
 => => sha256:5f307ac011d29b4e84cb6b09a4cd5671986620bcb95761d6e74ccedfc3d4aa72 52.62MB / 52.62MB  17.6s
 => => sha256:ec0d7aac5a1b195dbe835ab1bc426c9d58cb2eb0d5f0d4c9cce31e5e6e7c31e1 6.11kB / 6.11kB    14.0s
 => => sha256:297b14375685e29875e85f5d0a199b01dd356e02d63983334bc2727463abbc50 145B / 145B        17.0s
 => => sha256:cb39bff6f1ad1fce7b3ff6d0fd04f3629d1f2787c335637b85893c97c57e5e 171.63MB / 171.63MB  21.4s
 => => sha256:2a58c2926f02f2f667d8f96bccaf988c33c534b6c8778616b3398013ef5ef9a 808.09kB / 808.09kB  4.2s
 => => sha256:10fdc906c07dc25f1f27a01302d10fa07f119191105204fd16f32e22e07f62 159.28MB / 159.28MB  10.2s
 => => unpacking docker.io/telefonica/spark-py:2.4.4-hadoop2.9.1-S3A-WASB-3@sha256:f538575065f2d  89.3s
 => CACHED [algorithm-files 2/4] COPY rclone.conf /root/.config/rclone/rclone.conf                 0.0s
 => CACHED [algorithm-files 3/4] RUN mkdir -p /algorithm/                                          0.0s
 => CACHED [algorithm-files 4/4] RUN echo  && rclone copy algorithm-storage: /algorithm/           0.0s
 => [build  2/11] RUN mkdir -p /dependecies/jars                                                   1.4s
 => [build  3/11] RUN touch /dependecies/jars/.success                                             0.9s
 => [build  4/11] RUN if [ -n "" ]; then mkdir .mvn ; fi                                           0.9s
 => [build  5/11] RUN if [ -n "" ]; then echo <extensions xmlns="http://maven.apache.org/EXTENSIO  0.9s
 => [build  6/11] RUN for artifact in $(echo  | tr "," "\n"); do   mvn -e dependency:get   -Dremo  0.9s
 => [build  7/11] RUN mkdir -p /dependecies/poms                                                   1.0s
 => [build  8/11] RUN if [ -n "" ]; then mkdir -p /dependecies/poms/.mvn ; fi                      0.9s
 => [build  9/11] RUN if [ -n "" ]; then echo <extensions xmlns="http://maven.apache.org/EXTENSIO  1.0s
 => [build 10/11] RUN for artifact in $(echo  | tr "," "\n"); do   mvn -e dependency:get   -Dremo  1.0s
 => [build 11/11] RUN if [ ! -z "$(ls -A /dependecies/poms)" ]; then for pomfile in /dependecies/  0.9s
 => [stage-2 2/4] RUN for dependency in $(echo  | tr "," "\n"); do   python3 -m pip install $depe  0.9s
 => [stage-2 3/4] COPY --from=build --chown=spark:spark /dependecies/jars/* /opt/spark/jars/       0.9s
 => [stage-2 4/4] COPY --from=algorithm-files --chown=spark:spark /algorithm /opt/spark/algorith  46.8s

created time in 12 hours

issue commentmoby/moby

`docker build` hangs on FROM line

No, can't tell from the information that's here.

  • Are you seeing the same both with DOCKER_BUILDKIT=1 and with DOCKER_BUILDKIT=0 ?
  • Is there anything in the daemon or system logs?
  • Is build only failing with that specific image, or also with other images?
  • You mention "These are both on-prem virtual machines."; are you connecting to all of those VMs from the same CLI, or using a CLI inside the VM?
faucherb94

comment created time in 12 hours

issue commentmoby/moby

Request to create new release(s) for Go modules compatibility

Specify the tag of the release you want to use in your go.mod, and go modules will pick the correct commit. It will show the confusing "pseudo version", but the commit will match.

Make sure to specify the version though, otherwise go modules will pick a very old version (v1.13.1), which it thinks is the current "stable" release. Here's a quick example;

mkdir foobar && cd foobar

Add a main.go

package main

import (
        "fmt"

        "github.com/docker/docker/api"
)

func main() {
       fmt.Println("the API version is", api.DefaultVersion)
}

Initialize your module;

go mod init foobar
# go: creating new go.mod: module foobar

Specify the version of docker/docker you want to use in your go.mod;

echo "require github.com/docker/docker v19.03.9" >> go.mod

Build the binary and verify that the correct API version is used. In this case, go downloads the module; the pseudo version it generates and downloads uses v17.12.0-ce-rc1, but the commit (811a247d06e8) matches v19.03.9: https://github.com/moby/moby/releases/tag/v19.03.9

go build
# go: downloading github.com/docker/docker v17.12.0-ce-rc1.0.20200514230353-811a247d06e8+incompatible

./foobar
# the API version is 1.40
mcurtiss

comment created time in 12 hours

issue commentmoby/moby

Request to create new release(s) for Go modules compatibility

yes, but source code differs a lot from current master branch (19.x)

mcurtiss

comment created time in 13 hours

startedmoby/moby

started time in 13 hours

issue commentmoby/moby

Request to create new release(s) for Go modules compatibility

module xxx

go 1.14

require (
	github.com/docker/docker v1.4.2-0.20200213202729-31a86c4ab209
)

This version works for me. You might even be able to tweak it to a specific commit yourself.

mcurtiss

comment created time in 13 hours

issue commentmoby/moby

Request to create new release(s) for Go modules compatibility

FYI, the problem with go mod fetching v17.12.xxx instead of v19.03.xxx is related to the minor version's leading zero not being semver-compliant; that's why go fetches the nearest valid version (all v19 and v18 versions have a leading zero in the minor version number).

Is there any workaround for go mod? I'm not able to find a solution for this using go modules.

mcurtiss

comment created time in 13 hours

pull request commentmoby/moby

seccomp: remove the unused query_module(2)

Today is a bank holiday in the UK, so we should probably check with Justin tomorrow.

KentaTada

comment created time in 14 hours

pull request commentmoby/moby

allocateNetwork: fix network sandbox not cleaned up on failure

@arkodg @xinfengliu @AkihiroSuda PTAL

thaJeztah

comment created time in 14 hours

PR opened moby/moby

allocateNetwork: fix network sandbox not cleaned up on failure area/networking kind/bugfix status/2-code-review

The deferred function was checking the local err variable, not the error that was returned by the function. As a result, the sandbox would never be cleaned up for containers that used "none" networking when a failure occurred during setup.

The second commit is a small refactor; allocateNetwork() can return early, in which case these variables were unused, so they are now assigned later in the function.
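
For illustration, here is a minimal, self-contained sketch of the named-return-value pattern this fix relies on. All names in it (allocateNetwork, createSandbox, setupEndpoints, sandbox) are hypothetical stand-ins, not the actual moby code:

package main

import (
	"errors"
	"fmt"
)

type sandbox struct{ id string }

func (s *sandbox) Delete() { fmt.Println("sandbox deleted:", s.id) }

func createSandbox() (*sandbox, error) { return &sandbox{id: "sb1"}, nil }

func setupEndpoints(s *sandbox) error { return errors.New("endpoint setup failed") }

// With a named return value (retErr), the deferred cleanup sees the error
// the function actually returns; a defer that checks a shadowed local err
// would miss failures that happen later in the function, leaking the sandbox.
func allocateNetwork() (retErr error) {
	sb, err := createSandbox()
	if err != nil {
		return err
	}
	defer func() {
		if retErr != nil {
			sb.Delete()
		}
	}()
	return setupEndpoints(sb)
}

func main() {
	fmt.Println(allocateNetwork())
}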

- Description for the changelog <!-- Write a short (one line) summary that describes the changes in this pull request for inclusion in the changelog: -->

+7 -5

0 comment

1 changed file

pr created time in 14 hours

issue commentmoby/moby

Running chmod on file results in 'text file busy' when running straight after.

I am seeing this (in RUN chmod a+x /usr/local/bin/script.sh && /usr/local/bin/script.sh) on one machine reliably, while on four other machines exactly the same Dockerfile runs through just fine. The thing is that I've upgraded the docker installation from the ancient 17.05 to 19.03.8, removed all old images, and rebooted the machine, and it's still showing that error.

The machines all have Linux 4.4.0, though slightly different builds. There is one significant difference though:

  • The non-working machine uses aufs.
  • The working machines all use overlay2.

So confirming this is an aufs issue.

GrahamDumpleton

comment created time in 14 hours

pull request commentmoby/moby

seccomp: remove the unused query_module(2)

@justincormack PTAL?

KentaTada

comment created time in 15 hours

pull request commentmoby/moby

Enable userns by default

Maybe this PR can also be decomposed into a set of small PRs before making it the default. Especially "Address with userns=host container fs has ownership set to the remapped root instead of real root".

@cpuguy83 WDYT?

cpuguy83

comment created time in 15 hours

pull request commentmoby/moby

Fix potential IP overlapping with node LB

@mightydok

Why do we need to remove the whole network instead of just not giving the IP address used for the LB to new containers?

We remove the network only when there are no containers running on this network on this node. The logic is in delete() in vendor/github.com/docker/libnetwork/network.go. If there is no container running on the network on the worker node, then this node should not be a peer node for this overlay network, so we need to remove the network on this node.

xinfengliu

comment created time in 16 hours

fork DukeAnn/moby

Moby Project - a collaborative project for the container ecosystem to assemble container-based systems

https://mobyproject.org/

fork in 16 hours

Pull request review commentmoby/moby

Fix potential IP overlapping with node LB

 func isUnknownContainer(err error) bool { func isStoppedContainer(err error) bool { 	return strings.Contains(err.Error(), "is already stopped") }++func isErrContainerUnhealthy(err error) bool {+	if err != nil {+		if e, ok := err.(*exitError); ok {+			if c := e.Cause(); c != nil {+				return strings.Contains(c.Error(), "unhealthy container")+			}

Sorry, I don't quite understand what you mean. Here, I just want to filter out exitError{code: ctnr.State.ExitCode, cause: healthErr}. If I don't extract the cause from the exitError, I don't have a way to catch it. Yes, it's better to check the error type instead of doing string-matching for healthErr. I will try doing it.

xinfengliu

comment created time in 16 hours

issue commentmoby/moby

docker service logs RPC error after network failure in swarm mode from 2/12 nodes

Encountered the same issue on 19.03.8. Resolved via docker swarm ca --rotate.

Same as @ProteanCode, is there anything that should discourage me from just executing docker swarm ca --rotate once a day to prevent this from happening?

SvenAbels

comment created time in 16 hours

Pull request review commentmoby/moby

Fix potential IP overlapping with node LB

 func isUnknownContainer(err error) bool { func isStoppedContainer(err error) bool { 	return strings.Contains(err.Error(), "is already stopped") }++func isErrContainerUnhealthy(err error) bool {+	if err != nil {+		if e, ok := err.(*exitError); ok {

Yes, will do.

xinfengliu

comment created time in 16 hours

Pull request review commentmoby/moby

Fix potential IP overlapping with node LB

 func (r *controller) Start(ctx context.Context) error {  				continue 			}-+			// We should only remove networks after container networks are initialized.+			// However, there are too many possible errors during container starting.+			// Since removeNeworks() is an idempotent operation, it should be OK to removeNetworks+			// here although it's not ideal.+			log.G(ctx).Debugf("Removing networks after container %s failed to start.", r.adapter.container.name())+			r.adapter.removeNetworks(ctx)

Yes, we need to log the error from removeNetworks(ctx), but return the error indicating the reason the container failed to start.

xinfengliu

comment created time in 16 hours

Pull request review commentmoby/moby

Fix potential IP overlapping with node LB

 func (r *controller) Shutdown(ctx context.Context) error { 	}  	if err := r.adapter.shutdown(ctx); err != nil {+		// The container is already stopped or not even started.+		// don't need to removeNetworks.+		if isStoppedContainer(err) {+			log.G(ctx).Debugf("Shutdown(): container %s already stopped.", r.adapter.container.name())+			return nil+		}

Yes, I think we need debug logs, otherwise we don't know what happened on a customer's production system.

xinfengliu

comment created time in 16 hours

Pull request review commentmoby/moby

Fix potential IP overlapping with node LB

 func (r *controller) Start(ctx context.Context) error {  				continue 			}-+			// We should only remove networks after container networks are initialized.+			// However, there are too many possible errors during container starting.+			// Since removeNeworks() is an idempotent operation, it should be OK to removeNetworks+			// here although it's not ideal.+			log.G(ctx).Debugf("Removing networks after container %s failed to start.", r.adapter.container.name())+			r.adapter.removeNetworks(ctx) 			return errors.Wrap(err, "starting container failed") 		}  		break 	} +	// At this time, the container is started via containerd, and container.State.Running is true+	// However, there is still possible errors from now on in this Start() function, and swarmkit will not+	// call Shutdown() after task state is FAIL. We must clean up here.

This is how current swarmkit works: https://github.com/docker/swarmkit/blob/master/agent/exec/controller.go#L295

	if task.DesiredState >= api.TaskStateShutdown {
		if status.State >= api.TaskStateCompleted {
			return noop()
		}
...
xinfengliu

comment created time in 16 hours

Pull request review commentmoby/moby

Fix potential IP overlapping with node LB

 func (r *controller) Prepare(ctx context.Context) error { }  // Start the container. An error will be returned if the container is already started.-func (r *controller) Start(ctx context.Context) error {+func (r *controller) Start(ctx context.Context) (err error) {

yes, I will rename it to make it clearer.

xinfengliu

comment created time in 16 hours

startedmoby/moby

started time in 17 hours

startedmoby/moby

started time in 17 hours

Pull request review commentmoby/moby

Fix potential IP overlapping with node LB

 func isUnknownContainer(err error) bool { func isStoppedContainer(err error) bool { 	return strings.Contains(err.Error(), "is already stopped") }++func isErrContainerUnhealthy(err error) bool {+	if err != nil {+		if e, ok := err.(*exitError); ok {

Should this use errors.As()? Something like;

	var eErr exitError
	if errors.As(err, & eErr) {

Perhaps use an early return for the err == nil case 🤔

xinfengliu

comment created time in 17 hours

Pull request review commentmoby/moby

Fix potential IP overlapping with node LB

 func isUnknownContainer(err error) bool { func isStoppedContainer(err error) bool { 	return strings.Contains(err.Error(), "is already stopped") }++func isErrContainerUnhealthy(err error) bool {+	if err != nil {+		if e, ok := err.(*exitError); ok {+			if c := e.Cause(); c != nil {+				return strings.Contains(c.Error(), "unhealthy container")+			}

We should probably avoid using errors.Cause() to prevent issues if something lower in the stack starts using native "go" wrapping of errors.

In this case, I'm wondering if just checking strings.Contains() on the exitError would suffice.

That said, if I see correctly, these errors are generated in https://github.com/moby/moby/blob/07d60bc2571ba3d680f21adc84d87803ab4959c6/daemon/cluster/executor/container/controller.go#L248-L273

Which is in this codebase, so could we instead wrap a specific error that we can check for here? (So modify the code I linked above and make it wrap a typed error, so that we can check for that error type instead of doing string-matching.)
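
For illustration, a minimal runnable sketch of that suggestion: wrap a typed error at the point linked above and match it with errors.As, rather than string-matching. The names below (errContainerUnhealthy and the canned error in main) are hypothetical, not the actual moby code:

package main

import (
	"errors"
	"fmt"
)

// errContainerUnhealthy is a hypothetical typed error that the controller
// could wrap instead of relying on the "unhealthy container" string.
type errContainerUnhealthy struct{ containerID string }

func (e *errContainerUnhealthy) Error() string {
	return fmt.Sprintf("unhealthy container %s", e.containerID)
}

func isErrContainerUnhealthy(err error) bool {
	var target *errContainerUnhealthy
	return errors.As(err, &target)
}

func main() {
	// %w wraps the typed error, so errors.As can still find it after wrapping.
	err := fmt.Errorf("starting container failed: %w", &errContainerUnhealthy{containerID: "abc123"})
	fmt.Println(isErrContainerUnhealthy(err)) // true
}

One advantage over errors.Cause() is that errors.As follows both fmt.Errorf("%w", ...) wrapping and any wrapper type that implements Unwrap().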

xinfengliu

comment created time in 17 hours

Pull request review commentmoby/moby

Fix potential IP overlapping with node LB

 func (r *controller) Start(ctx context.Context) error {  				continue 			}-+			// We should only remove networks after container networks are initialized.+			// However, there are too many possible errors during container starting.+			// Since removeNeworks() is an idempotent operation, it should be OK to removeNetworks+			// here although it's not ideal.+			log.G(ctx).Debugf("Removing networks after container %s failed to start.", r.adapter.container.name())+			r.adapter.removeNetworks(ctx)

Should errors that occur during removal of the network be handled? (perhaps only logged if we can't / don't have to handle them otherwise)

xinfengliu

comment created time in 18 hours

Pull request review commentmoby/moby

Fix potential IP overlapping with node LB

 func (r *controller) Start(ctx context.Context) error { }  // Wait on the container to exit.-func (r *controller) Wait(pctx context.Context) error {+func (r *controller) Wait(pctx context.Context) (err error) {

Same comment here (s/err/retErr/)

xinfengliu

comment created time in 18 hours

Pull request review commentmoby/moby

Fix potential IP overlapping with node LB

 func (r *controller) Shutdown(ctx context.Context) error { 	}  	if err := r.adapter.shutdown(ctx); err != nil {+		// The container is already stopped or not even started.+		// don't need to removeNetworks.+		if isStoppedContainer(err) {+			log.G(ctx).Debugf("Shutdown(): container %s already stopped.", r.adapter.container.name())+			return nil+		}

Looks like this was already handled by the line below;

if !(isUnknownContainer(err) || isStoppedContainer(err)) {

So this is only adding a debug-log? Do we still need that, or did you only need it while working on your patch?

Also wondering; this looks to be the only place where adapter.shutdown() is called; should this logic be in that function?

xinfengliu

comment created time in 18 hours

Pull request review commentmoby/moby

Fix potential IP overlapping with node LB

 func (r *controller) Prepare(ctx context.Context) error { }  // Start the container. An error will be returned if the container is already started.-func (r *controller) Start(ctx context.Context) error {+func (r *controller) Start(ctx context.Context) (err error) {

Could you rename err to retErr to make it stand out as the output variable? err is used as a name for local variables in this function, which makes it easy to confuse them (and to accidentally mask the output variable).

xinfengliu

comment created time in 18 hours

Pull request review commentmoby/moby

Fix potential IP overlapping with node LB

 func (r *controller) Start(ctx context.Context) error {  				continue 			}-+			// We should only remove networks after container networks are initialized.+			// However, there are too many possible errors during container starting.+			// Since removeNeworks() is an idempotent operation, it should be OK to removeNetworks+			// here although it's not ideal.+			log.G(ctx).Debugf("Removing networks after container %s failed to start.", r.adapter.container.name())+			r.adapter.removeNetworks(ctx) 			return errors.Wrap(err, "starting container failed") 		}  		break 	} +	// At this time, the container is started via containerd, and container.State.Running is true+	// However, there is still possible errors from now on in this Start() function, and swarmkit will not+	// call Shutdown() after task state is FAIL. We must clean up here.

and swarmkit will not call Shutdown() after task state is FAIL

Was this by design? Should this be changed? @dperny

xinfengliu

comment created time in 18 hours

issue commentmoby/moby

[feature request] nftables support

I don't know if changing from nftables to iptables will break something else in some other app/service or the os itself.

senden9

comment created time in 17 hours

pull request commentmoby/moby

[19.03] Fix dns fallback regression

Same comment here as on https://github.com/moby/moby/pull/41008#issuecomment-633439844

@tiborvass I see you reverted the bump (probably to verify the integration-test); could you

  • remove the revert
  • swap the "bump" and "test-case", so that git bisect doesn't break?
  • move this PR out of draft
tiborvass

comment created time in 18 hours

pull request commentmoby/moby

Fix dns fallback regression

@tiborvass I see you reverted the bump (probably to verify the integration-test); could you

  • remove the revert
  • swap the "bump" and "test-case", so that git bisect doesn't break?
  • move this PR out of draft
tiborvass

comment created time in 18 hours

startedmoby/moby

started time in 18 hours

fork k1rk/moby

Moby Project - a collaborative project for the container ecosystem to assemble container-based systems

https://mobyproject.org/

fork in 19 hours

fork metarsit/moby

Moby Project - a collaborative project for the container ecosystem to assemble container-based systems

https://mobyproject.org/

fork in 19 hours

startedmoby/buildkit

started time in 19 hours

startedmoby/moby

started time in 19 hours

startedmoby/buildkit

started time in 19 hours

startedmoby/moby

started time in 20 hours

issue closedmoby/buildkit

Does `docker save` saves inline image cache?

I’m trying to figure out whether docker save and docker load preserve --export-cache type=inline if used with Docker-integrated BuildKit.

closed time in a day

andreiborisov

issue commentmoby/buildkit

Does `docker save` saves inline image cache?

Thank you!

andreiborisov

comment created time in a day

startedmoby/libnetwork

started time in a day

PR opened moby/mobywebsite

docs: fixed link
+1 -1

0 comment

1 changed file

pr created time in a day

fork krazyeom/mobywebsite

website for the moby project

fork in a day

Pull request review commentmoby/moby

remove group name from identity mapping

 func setupRemappedRoot(config *config.Config) (*idtools.IdentityMapping, error) 		// update remapped root setting now that we have resolved them to actual names 		config.RemappedRoot = fmt.Sprintf("%s:%s", username, groupname) -		// try with username:groupname, uid:groupname, username:gid, uid:gid,+		// try with username and uid, 		// but keep the original error message (err)-		mappings, err := idtools.NewIdentityMapping(username, groupname)+		mappings, err := idtools.NewIdentityMapping(username) 		if err == nil { 			return mappings, nil

Always look up by both name and by number, and concatenate them.
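
For illustration, a minimal runnable sketch of the "look up by both name and by number, then concatenate" idea. IDRange and lookupSubIDRanges are illustrative stand-ins, not the real idtools API, and the canned map merely simulates /etc/subuid entries:

package main

import (
	"fmt"
	"strconv"
)

// IDRange is a stand-in for a subordinate ID range parsed from /etc/subuid,
// where entries may be keyed by user name or by numeric uid.
type IDRange struct{ Start, Length int }

func lookupSubIDRanges(key string) []IDRange {
	// The real implementation would parse /etc/subuid; canned data keeps
	// this example runnable.
	fake := map[string][]IDRange{
		"someuser": {{Start: 100000, Length: 65536}},
		"1000":     {{Start: 300000, Length: 65536}},
	}
	return fake[key]
}

// combinedSubIDRanges looks the user up both by name and by numeric uid and
// concatenates the results, so a mapping is found whichever key the entry uses.
func combinedSubIDRanges(username string, uid int) []IDRange {
	return append(lookupSubIDRanges(username), lookupSubIDRanges(strconv.Itoa(uid))...)
}

func main() {
	fmt.Println(combinedSubIDRanges("someuser", 1000))
}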

akhilerm

comment created time in a day

startedmoby/moby

started time in a day

issue commentmoby/moby

`docker build` hangs on FROM line

@thaJeztah Any idea what might be going on?

faucherb94

comment created time in a day

issue commentmoby/moby

what does exitCode=3221226505 means when container die?

Yeah, VMTools 11.0.6 fixes the issue.

On May 25, 2020, at 4:28 AM, fluffyDough notifications@github.com wrote:

Confirmed. After upgrading from 10.x to 11.0.1 or 11.0.5, random containers shut down after a few minutes with the 3221226505 exit code. After I upgraded VMTools to 11.0.6 all works properly - Docker containers do not shut down a few minutes after the VM is restarted. I use:

virtual machine hosted by VMWare
VM OS: Windows Server 2016 Standard
VM Docker version: 19.03.5
VMTools version on the VM: 11.0.6.19689
When I restarted the VM just after I had upgraded VMTools, some of my containers had "restarting" status, so I had to run docker stop <id> and docker start <id> for each of them. After a few minutes all has been up and I do not get the 3221226505 exit code anymore.

Thanks @Roemer https://github.com/Roemer and @hulltr1 https://github.com/hulltr1.


mianxiang

comment created time in a day

startedmoby/buildkit

started time in a day
