Luiz Carvalho (lcarva), Red Hat, Inc., Eastern US

- lcarva/ansible-patterns (2): Ansible usage patterns that make you smile
- fedora-modularity/message-tagging-service (1): Tag koji builds with the correct tags, triggered by the message bus
- lcarva/ansible-playbook (0): An Ansible playbook for automated deployment of full-stack Plone servers.
- lcarva/atomic-reactor (0): Simple python library for building docker images.
- lcarva/BuildSourceImage (0): Tool to build a source image based on an existing OCI image
- lcarva/cachito (0): (Experimental) Caching service for source code


Pull request review comment on release-engineering/iib

Set com.redhat.iib.pinned label to regenerated bundles

```diff
 def _adjust_operator_bundle(manifests_path, metadata_path, organization=None):
     replacement_pullspecs = {}
     for pullspec in found_pullspecs:
         replacement_needed = False
-        if ':' not in ImageName.parse(pullspec).tag:
-            replacement_needed = True
+        new_pullspec = ImageName.parse(pullspec.to_str())
 
-        # Always resolve the image to make sure it's valid
-        resolved_image = ImageName.parse(_get_resolved_image(pullspec.to_str()))
+        if not pinned_by_iib:
+            # Resolve the image only if it has not already been process by IIB. This
```
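The intent of the change can be sketched standalone (all names here, `adjust_pullspec`, `pinned_by_iib`, `resolve_image`, and the sample pullspec, are hypothetical illustrations, not IIB's actual API):

```python
def adjust_pullspec(pullspec, pinned_by_iib, resolve_image):
    """Resolve a pullspec to a digest unless IIB already pinned it."""
    if pinned_by_iib:
        # IIB already replaced tags with digests on a previous pass;
        # resolving again would be redundant work.
        return pullspec
    return resolve_image(pullspec)

# Example with a stubbed resolver:
resolved = adjust_pullspec(
    'registry.example.com/foo:v1',
    pinned_by_iib=False,
    resolve_image=lambda spec: spec.split(':')[0] + '@sha256:<digest>',
)
# resolved == 'registry.example.com/foo@sha256:<digest>'
```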

How silly of me! Fixed it.

lcarva

comment created time in 3 days


push event to lcarva/iib

Luiz Carvalho

commit sha 41d62bec4e49acb308104f158b56721addf5c285

Set com.redhat.iib.pinned label to regenerated bundles * CLOUDBLD-2532 Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

push time in 3 days

PR opened release-engineering/iib

Set com.redhat.iib.pinned label to regenerated bundles
  • CLOUDBLD-2532

Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

+137 -14

0 comments

3 changed files

pr created time in 3 days

push event to lcarva/iib

Luiz Carvalho

commit sha 141201faa8c773a065fcdf8008e6ba8c2d454a8f

Set com.redhat.iib.pinned label to regenerated bundles * CLOUDBLD-2532 Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

push time in 3 days

push event to lcarva/iib

Jan Lipovsky

commit sha 19f221c7dd55bda9d497b9442a13976fe51125f1

Return the OMPS-pushed version when IIB backports a bundle

- log response from OMPS API for every successful request
- collecting operator version for all packages
- updated patch_request function to process request with omps_operator_version
- adding propagating of OMPS response to _get_base_dir_and_pkg_name
- adding omps_operator_version to RequestAdd database table model
- added set_omps_operator_version; set the set_omps_operator_version of the request using the IIB API.
- added call of set_omps_operator_version after collecting all versions

For more information refer to [CLOUDDST-2201]

view details

Shawn

commit sha 1896e044b1961f82f342527dc5ca48feb3ce5b8f

Fixing adding empty list of bundles

view details

Luiz Carvalho

commit sha e6c3e0346498f78e245495236a972f3ad66700fe

wip Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

push time in 3 days


create branch in lcarva/iib

branch : pinned_by_iib

created branch time in 4 days

Pull request review comment on release-engineering/cachito

Document Cachito support for Pip

```diff
 Nexus instead. The modified files will be accessible at the
 again in a future request, it will use it directly from Nexus rather than
 downloading it and uploading it again. This guarantees that any dependency
 used for a Cachito request can be used again in a future Cachito request.
+
+### pip
+
+The pip package manager works by parsing the `requirements.txt` and
+`requirements-build.txt` files present in the source repository to determine
+what dependencies are required to build the application. It is possible to
+specify different file path(s) for the requirements files as long as the
+files use the expected format.
+
+Cachito then creates two repositories in an instance of Nexus it manages that
+contain just the dependencies discovered in the requirements files. PyPI
+dependencies are uploaded to a PyPI hosted repository, external dependencies
+are uploaded to a raw repository. Connection information for the hosted
+repository is provided as the `PIP_INDEX_URL` environment variable accessible
+at the `/api/v1/requests/<id>/environment-variables` endpoint. To make
+external dependencies available, Cachito modifies the requirements files for
+the request by replacing relevant entries with their corresponding URLs from
+the raw repository. The modified requirements files are accessible at the
+`/api/v1/requests/<id>/configuration-files` endpoint.
+
+Cachito will produce a bundle that is downloadable at
+`/api/v1/requests/<id>/download`. This bundle will contain the application
+source code in the `app` directory and individual source archives of all the
+dependencies in the `deps/pip` directory. These archives are not meant to be
+used to build the application. They are there for convenience so that the
+dependency sources can be published alongside your application sources. In
+addition, they can be used to install packages directly from the filesystem
+with `pip install --no-index --no-deps <path/to/archive>` in the event that
+the application needs to be built without Cachito and the Nexus instance it
+manages.
```

Offline installation means that you can take the tarball created by Cachito and use those dependencies without any network connection at all. This is true for gomod. It's not for NPM because network access to the Nexus repo is required.

In either case, the Cachito tarball always includes all sources. This is to fulfill source compliance requirements. It could also be used to perform offline operations, but that would require changes to the pkg manager commands to use them directly.
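For illustration, a requirements entry before and after Cachito's rewrite might look like the following (package name and Nexus URL are hypothetical):

```
# before: external dependency referenced directly from a VCS
some-pkg @ git+https://github.com/org/some-pkg@abc123

# after: Cachito points the entry at its managed raw repository
some-pkg @ https://nexus.example.com/repository/cachito-pip-raw-1/some-pkg.tar.gz
```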

chmeliik

comment created time in 4 days


Pull request review comment on release-engineering/cachito

Support podman based development environment

```diff
+CACHITO_COMPOSE_ENGINE ?= docker-compose
+ifeq ($(CACHITO_COMPOSE_ENGINE), docker-compose)
+	DOWN_OPTS=-v
```

If we're not using named volumes, you probably don't need `-v`, right? But it doesn't hurt.

athos-ribeiro

comment created time in 5 days


push event to lcarva/iib

Yashvardhan Nanavati

commit sha 1ecfc8838fc3fa3356e3a4b3c26abe6dfac527b6

Fix backport bug where IIB did not parse YAML and backports even when the label is set to false Refers to CLOUDDST-2330

view details

Yashvardhan Nanavati

commit sha 4a60e0e5cc86bc28851e97a942031817ef0609ee

Release v3.5.0

view details

Shawn

commit sha d2f44775b899a31329d7a50da0ca562c9e0c1cca

Adding ability to update binary image of index image from the Add bundle command.

view details

Shawn

commit sha 989923b16d0e1f2f159d312b70700bf78cdce2f1

fixing docker-compose for worker to login to registry

view details

Jan Lipovsky

commit sha 3075833921f1b001bd9cf7048ec2b68b99e35722

Adding helper functions for creating args and safe_args For more information refer to [CLOUDDST-1705]

view details

dependabot[bot]

commit sha 11fb426d0ca65ae5c3098f3408f695852850e569

Bump coverage from 5.0.3 to 5.2.1

Bumps [coverage](https://github.com/nedbat/coveragepy) from 5.0.3 to 5.2.1.
- [Release notes](https://github.com/nedbat/coveragepy/releases)
- [Changelog](https://github.com/nedbat/coveragepy/blob/master/CHANGES.rst)
- [Commits](https://github.com/nedbat/coveragepy/compare/coverage-5.0.3...coverage-5.2.1)

Signed-off-by: dependabot[bot] <support@github.com>

view details

Luiz Carvalho

commit sha 7c88e7610f80bedfa65c1375f94a1928b4a1346e

Fix digest resolution for schema 1 images * CLOUDBLD-2538 Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

Luiz Carvalho

commit sha 8f86fd4a24200f1c3802a6d0f70c82f35b1893cc

Release v3.6.0 Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

Luiz Carvalho

commit sha 726cc5bc1dca799712656aa0b3e566ffd2daad1b

Use a copy of the host ca-bundle podman rootless cannot access the file directly due to selinux Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

Luiz Carvalho

commit sha 6c5f4801e1d2e5e330fe54d63d916372457392b3

Use podman in podman in docker-compose Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

Luiz Carvalho

commit sha 07427fb9563fe121d1762f918db9bf4e69d5adca

Avoid port conflict with podman-compose ActiveMQ and RabbitMQ both use the port 5762. Even though the port is not exposed on both containers, podman-compose's use of a pod requires the ports to be unique across all containers within the pod. Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

Luiz Carvalho

commit sha 0b3450f6e0e333ee8e26330f8e5e3ea7211436e5

Add a Makefile to faciliate podman-compose usage Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

push time in 8 days

push event to containerbuildsystem/operator-manifest

Luiz Carvalho

commit sha 7c9198cc3b0a85b16582beb074da57462e505bfa

Only one CVS file is allowed in manifests * CLOUDBLD-2230 Co-authored-by: Martin Bašti <mbasti@redhat.com> Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

Luiz Carvalho

commit sha 3d4cae6b78417f72d50729195937d100d4ca25ae

Don't set relatedImages if it already exists * CLOUDBLD-2230 Co-authored-by: Martin Bašti <mbasti@redhat.com> Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

Luiz Carvalho

commit sha b96e53b26efeb11fb05209b5a0442823c33b78b7

Enforce a single CSV file Because operators must have exactly one CSV file, this should be enforced by the library. * CLOUDBLD-2230 Co-authered-by: Martin Bašti <mbasti@redhat.com> Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

Luiz Carvalho

commit sha 1d2b0d5693d25d958b7096ac25f0fe4dba82e2ce

Expose relatedImages * CLOUDBLD-2244 Co-authored-by: Chenxiong Qi <cqi@redhat.com> Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

push time in 9 days

PR opened release-engineering/iib

Revert "Adding ability to update binary image of index image from the Add bundle command"

This reverts commit d2f44775b899a31329d7a50da0ca562c9e0c1cca.

OPM does not like the parameter `--bundles ''`. It fails with:

```
time="2020-09-11T12:57:51-04:00" level=error msg="permissive mode disabled" bundles="['']" error="error resolving name : object required"
Error: error resolving name : object required
```
+27 -49

0 comments

4 changed files

pr created time in 9 days

create branch in lcarva/iib

branch : revert/empty-bundles

created branch time in 9 days

created tag on release-engineering/iib

tag: v3.6.0

A REST API to manage operator index container images (and some bundle images)

created time in 9 days

push event to release-engineering/iib

Luiz Carvalho

commit sha 8f86fd4a24200f1c3802a6d0f70c82f35b1893cc

Release v3.6.0 Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

push time in 9 days

delete branch lcarva/iib

delete branch : fix-digest-computation

delete time in 9 days

issue comment on containers/skopeo

skopeo inspect: a way how to avoid fetching all tags from repository

This would be very useful if we want to use skopeo to simply determine the digest of the manifest. Currently, I don't think there's a way to retrieve this information without also retrieving all the tags in the repository (which is very inefficient for repositories with many tags).
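For background on why this should be cheap: a registry's manifest digest is just the sha256 hash of the raw manifest bytes, so fetching the manifest alone is enough to compute it. A minimal sketch (the manifest body below is a stand-in, not a real image's manifest):

```python
import hashlib
import json

# Stand-in manifest body; a real one comes from GET /v2/<name>/manifests/<ref>
# and must be hashed over the exact bytes the registry served.
manifest_bytes = json.dumps(
    {"schemaVersion": 2,
     "mediaType": "application/vnd.docker.distribution.manifest.v2+json"},
    separators=(",", ":"),
).encode()

digest = "sha256:" + hashlib.sha256(manifest_bytes).hexdigest()
print(digest)
```

No tag listing is involved at any point, which is the efficiency argument being made in the comment.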

MartinBasti

comment created time in 10 days

PR opened release-engineering/iib

Fix digest resolution for schema 1 images
  • CLOUDBLD-2538

Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

+68 -2

0 comments

2 changed files

pr created time in 10 days

create branch in lcarva/iib

branch : fix-digest-computation

created branch time in 10 days

Pull request review comment on containerbuildsystem/operator-manifest

Rebase with atomic reactor

```diff
     def set_related_images(self):
         """
         named_pullspecs = self._named_pullspecs()
+
+        if not named_pullspecs:
+            log.info("No pullspecs, skipping updates of relatedImages section")
```

Right. I'm not changing any of the code on purpose. It's already painful as is to port these things back :)

lcarva

comment created time in 11 days


pull request comment on containerbuildsystem/operator-manifest

Rebase with atomic reactor

> Instead of doing such rebase, how about abstract the code and make a shared library for the operator manifests?

Yup, that's the goal. This is just the solution until https://projects.engineering.redhat.com/browse/CLOUDBLD-619 is done.

lcarva

comment created time in 11 days

Pull request review comment on operator-framework/enhancements

describe a process for resolving image references in manifests

The PR adds the following document:

````markdown
---
title: image-references-in-operator-bundles
authors:
  - "@stevekuznetsov"
reviewers:
  - "@jwforres"
  - "@shawn-hurley"
  - "@gallettilance"
  - "@lcarva"
approvers:
  - "@jwforres"
  - "@shawn-hurley"
creation-date: 2020-05-17
last-updated: 2020-05-17
status: implementable
see-also:
  - "/enhancements/olm/operator-bundle.md"
---

# Image References in Operator Bundle Manifests

## Release Signoff Checklist

- [ ] Enhancement is `implementable`
- [ ] Design details are appropriately documented from clear requirements
- [ ] Test plan is defined
- [ ] Graduation criteria for dev preview, tech preview, GA
- [ ] User-facing documentation is created in [openshift-docs](https://github.com/openshift/openshift-docs/)

## Summary

Manifests that make up an Operator Bundle for installation via the Operator
Lifecycle Manager refer to one or more container images by their pull
specifications: these container images define the operator and operands that
the manifests deploy and manage. As with OpenShift release payloads, operator
bundles must refer to images by digest in order to produce reproducible
installations. A shared process to build operator bundles that replaces image
references with fully-resolved pull specifications that pin images by digest
must be built; this process must allow for a number of separate build systems
to direct how these replacements occur in order to support a full-featured
build and test strategy.

## Motivation

### Goals

- there is one, canonical, method for building an operator bundle image
  from a directory of manifests
- it is possible to perform image resolution and pinning separate from bundle
  creation, but not the opposite
- building an operator bundle image does not require the use of a container
  runtime, elevated privileges or any capacities that are not present for
  containerized workloads on OpenShift
- it is as simple for a developer to build a bundle referring to test versions
  of operand images as it is for a CI or productized build pipeline to create
  bundle images for publication
- operator manifest authors dictate the set of image references in manifests
  that must be resolved and pinned
- operator bundle images may be inspected to determine the pull specifications
  that were used in the creation of the bundle
- operator manifest authors must not be required to define the registry from
  which any individual build system will resolve image references
- upstream operator manifests must not be required to know how common names or
  references change when built in a downstream pipeline

### Non-Goals

- no prescriptive statement is made about the specific format or contents of
  the bundle image layers; any will be transparently supported as an output

## Proposal

### User Stories

#### Story 1

As an author of a manifest, I would like to check in manifests to my upstream
repository that are self-consistent, valid and make no assumptions about the
build system that will eventually create a bundle image with them.

#### Story 2

As an author of an operand, I would like to create a bundle locally in order
to test my operator end-to-end on a cluster of my choosing without having to
edit the core configuration for the operator.

#### Story 3

As an author of a build system, I would like to operate with tooling that
allows me to clearly define the source of truth for image digests in order to
keep the build-system-specific configuration to an absolute minimum.

#### Story 4

As an engineer involved in publishing an optional operator, I would like to
configure semantically equivalent image pull specifications once, in order to
not need to configure each build system independently.

### Implementation Details/Notes/Constraints

The core problems that must be solved in the implementation of this proposal
have already been handled in the workflow used in `oc adm release new`. When
implementing improvements to the `operator-sdk generate bundle` process we
will simply need to create a shared library for the two tools to use. While
the shape of the output is slightly different and some of the semantics about
how the output should be formatted are dissimilar, the core image reference
rewriting is identical and the process of building a `FROM scratch` image
layer is also identical.

Today, some prior art exists in the OSBS, IIB and CVP workflows for building
operator bundles. As we improve the Operator SDK tooling to create a
straightforward process for creating bundle images, we must make sure a
seamless migration is possible.

### Risks and Mitigations

It will be critical that the design be vetted by all of the concerned parties,
from operator manifest authors to CI system authors and productized pipeline
authors to ensure that the UX is appropriate in all cases. Furthermore, the
largest risk in the implementation here is not prioritizing a clean migration
pathway for all current users who create bundle images, which would lead to
further fragmentation of the ecosystem, which is directly opposed to the goal
of this enhancement.

## Design Details

The definition of a minimally-viable operator bundle image will be changed to
ensure that all image references in the contained manifests have been resolved
to a digest and had the pull specifications rewritten to refer to those
digests.

The only acceptable process for creating an operator bundle will be to run the
`operator-sdk generate bundle` CLI, providing the manifests, metadata and
image sources as input to the creation process.

### Proposed UX

A set of new files in the `metadata/` directory of a bundle will be authored
by the manifest authors and build systems to record image replacement intent
and execution.

Operator manifest authors write a manifest that refers to images using some
opaque string, and provide an `image-references.yaml` file alongside their
manifests that declares which strings inside of their manifest are referring
to pull specifications of images and names each occurrence.

The `metadata/image-references.yaml` file holds data in the following format,
mapping common names of container images to their string placeholders in the
manifests:

```yaml
imageReferences:
- name: common-name
  substitute: registry.svc.ci.openshift.org/openshift:image
```

This mapping, therefore, defines what needs to be replaced in manifests when
they are bundled and identifies each replacement with a name. When
`operator-sdk generate bundle` runs, it will require as input a second mapping
from those names to literal image pull specifications and will run the
replacement. In this manner, the configuration provided by the manifest author
remains static regardless of the eventual replacement that a build system will
execute.

The second mapping required at build-time, known as `image-replacements.yaml`,
will take a similar form, mapping the common names of images to their
explicitly resolved pull specifications. This file will have the following
format:

```yaml
imagePullSpecs:
- name: common-name
  pullSpec: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:298d0b496f67a0ec97a37c6e930d76db9ae69ee6838570b0cd0c688f07f53780
```
````

ImageStream is not listed anywhere in this doc. Did you mean ImageReference, ImagePullSpec, or something else?

I think we're on the same page regarding how image-replacements.yaml will be used by a build system. The maintainer creates it to point to some agreed upon location, e.g. a floating tag in the production registry, then the build system is responsible for altering the pull specs in image-replacements.yaml, "on the fly" as part of the build process, to use a digest for example.
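The two-mapping flow being discussed can be sketched in a few lines of Python, using the file contents from the excerpt above (the substitution logic itself is an assumption, since the proposal does not prescribe an implementation):

```python
# image-references.yaml: placeholder string -> common name (authored upstream)
image_references = [
    {"name": "common-name",
     "substitute": "registry.svc.ci.openshift.org/openshift:image"},
]

# image-replacements.yaml: common name -> pinned pull spec (provided at build time)
image_replacements = [
    {"name": "common-name",
     "pullSpec": "quay.io/openshift-release-dev/ocp-v4.0-art-dev"
                 "@sha256:298d0b496f67a0ec97a37c6e930d76db9ae69ee6838570b0cd0c688f07f53780"},
]

def pin_manifest(manifest_text, references, replacements):
    """Rewrite placeholder pull specs to digest-pinned ones via the common name."""
    pinned = {r["name"]: r["pullSpec"] for r in replacements}
    for ref in references:
        manifest_text = manifest_text.replace(ref["substitute"], pinned[ref["name"]])
    return manifest_text

manifest = "image: registry.svc.ci.openshift.org/openshift:image"
print(pin_manifest(manifest, image_references, image_replacements))
```

The point of the indirection is that only `image_replacements` changes per build system; the upstream manifest and `image_references` stay static.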

stevekuznetsov

comment created time in 11 days


Pull request review comment on operator-framework/enhancements

describe a process for resolving image references in manifests

The review context repeats the document quoted in the previous comment, then continues past the `imagePullSpecs` example:

````markdown
`operator-sdk generate bundle` will use this chain of mapping to perform
replacements in the manifests before creating a bundle image layer. The layer
creation will also be shared logic with `oc adm release new` in order to allow
both processes to build image layers without requiring the use of a container
runtime, other build system, any elevated permissions, privileges, capacities
or SELinux roles. As the output image layer in both cases is `FROM scratch`
and simply contains manifest data, this build process is simple and producing
the layer by creating the underlying tar bundle does not come with risks.

The bundle creation process will commit the `image-replacements.yaml` file
into the output bundle, and will furthermore create as output a
`release-metadata.yaml` metadata file that will expose any further build-time
inputs used to create the bundle, so that downstream consumers can access this
data as necessary. The format for the `release-metadata.yaml` file is a loose
key-value store:

```yaml
metadata:
  some-key: some-value
```

The `image-replacements.yaml` content will also be injected into the CSV using
the extant `relatedImages` stanza for backwards compatibility, but this data
is not expected to be
````
1. Raising an error if relatedImages is already defined in the CSV in source control sounds like a reasonable option here. Could you please update the doc to explicitly state this?
2. Based on your answer to 1, question 2 is not applicable. Let's disregard it.
3. The use case I had in mind is if disconnected deployments are not yet supported for a given operator. I'm not entirely sure what it means to actually support disconnected deployments. Is it just populating relatedImages? Or is there additional work, e.g. testing, that needs to be done?
stevekuznetsov

comment created time in 11 days


Pull request review comment on release-engineering/iib

Inspect index image to check if bundles are present

```diff
 def _get_resolved_image(pull_spec):
     return pull_spec_resolved
 
+
+def _get_index_database(from_index, base_dir):
+    """
+    Get database file from the specified index image and save it locally.
+
+    :param str from_index: index image to get database file from.
+    :param str base_dir: base directory to which the database file should be saved.
+    :return: path to the copied database file.
+    :rtype: str
+    :raises IIBError: if any podman command fails.
+    """
+    data = skopeo_inspect(f'docker://{from_index}')
+    try:
+        db_path = data['Labels']['operators.operatorframework.io.index.database.v1']
+    except KeyError:
+        raise IIBError('Index image doesn\'t have the label specifying its database location.')
+    _copy_files_from_image(from_index, db_path, base_dir)
+    local_path = os.path.join(base_dir, os.path.basename(db_path))
+    return local_path
+
+
+def _serve_index_registry(db_path):
+    """
+    Locally start OPM registry service, which can be communicated with using gRPC queries.
+
+    Due to IIB's paralellism, the service can run multiple times, which could lead to port
+    binding conflicts. Resolution of port conflicts is handled in this function as well.
+
+    :param str db_path: path to index database containing the registry data.
+    :return: tuple containing port number of the running service and the running Popen object.
+    :rtype: (int, Popen)
+    :raises IIBError: if all tried ports are in use, or the command failed for another reason.
+    """
+    conf = get_worker_config()
+    port = conf['iib_grpc_start_port']
+    for i in range(conf['iib_grpc_max_port_tries']):
```

Or better yet:

```python
port_start = conf['iib_grpc_start_port']
port_end = port_start + conf['iib_grpc_max_port_tries']

for port in range(port_start, port_end):
    ...
```
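The scan-over-a-port-range pattern being suggested can be demonstrated end to end with plain sockets instead of OPM (purely illustrative; `find_free_port` is not IIB code):

```python
import socket

def find_free_port(port_start, max_tries):
    """Return the first port in [port_start, port_start + max_tries) we can bind."""
    for port in range(port_start, port_start + max_tries):
        try:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
                sock.bind(('127.0.0.1', port))
            return port
        except OSError:
            continue  # port already in use, try the next one
    raise RuntimeError(
        f'no free port in range {port_start}-{port_start + max_tries - 1}'
    )

print(find_free_port(50051, 10))
```

Iterating over the port values directly, rather than over a loop counter `i`, removes the unused variable and makes the retry bound explicit.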
querti

comment created time in 11 days

Pull request review comment on release-engineering/iib

Inspect index image to check if bundles are present

(Same diff context as in the previous review comment, up to the `_serve_index_registry` definition.)

The [cyclomatic complexity](https://en.wikipedia.org/wiki/Cyclomatic_complexity) of this function is high, roughly 11. Can we make use of custom exceptions and helper methods so it's easier to read?

def _serve_index_registry(db_path):
  for port in range(...):
    try:
      return (port, _serve_index_registry_at_port(db_path, port))
    except AddressAlreadyInUse:
      pass

  # If it got here, no ports are available
  raise IIBError(...)

def _serve_index_registry_at_port(db_path, port):
  for _ in range(...):
    rpc_proc = subprocess.Popen(...)
    ret = rpc_proc.poll()

    if ret is not None:
      # Process terminated unexpectedly
      if 'address already in use' in stderr:
        raise AddressAlreadyInUse(...)
      raise IIBError(...)

    if 'serving registry' in stdout:
      return rpc_proc

    # We always want to kill the process before trying again, do it unconditionally
    rpc_proc.kill()

  # If it got here, index registry has not been initialized after all retries
  raise IIBError(...)
querti

comment created time in 11 days

Pull request review comment release-engineering/iib

Inspect index image to check if bundles are present

+                if 'serving registry' in stdout:
+                    registry_initialized = True
+                    log.debug('Started the command "%s"', ' '.join(cmd))
+                    break
+                elif j == conf['iib_grpc_max_tries'] - 1:

Unnecessary elif because the previous if block ends with break. Let's change this to if instead?
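A minimal, self-contained sketch of the equivalence (the names here are simplified stand-ins for the loop in the diff):

```python
# Because the first branch exits the loop with `break` (modeled here as
# `return`), the `elif` can be a plain `if` without changing behavior.
def check_outputs(outputs, max_tries):
    for j, stdout in enumerate(outputs[:max_tries]):
        if 'serving registry' in stdout:
            return j  # stands in for the `break` in the original loop

        if j == max_tries - 1:  # plain `if` suffices; the branch above already left
            raise RuntimeError('registry not initialized after all tries')
    return None
```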

querti

comment created time in 11 days

Pull request review comment release-engineering/iib

Inspect index image to check if bundles are present

+                    log.info(
+                        'Restarting the image registry service, '
+                        'as it has not been properly initialized.'
+                    )
+                    rpc_proc.kill()
+                continue

Unnecessary continue?

querti

comment created time in 11 days

Pull request review comment release-engineering/iib

Inspect index image to check if bundles are present

+            ret = rpc_proc.poll()
+            # process hasn't terminated
+            if ret is None:
+                stdout = get_running_subprocess_output(rpc_proc.stdout)
+                if 'serving registry' in stdout:

There's no need for this to be an if/elif/else statement because the first block calls break, and the second one raises an exception. Let's simplify the code:

if 'serving registry' in stdout:
  ...
  break

if j == conf['iib_grpc_max_tries'] - 1:
  ...
  raise IIBError(...)

log.info(...)
continue
querti

comment created time in 11 days

Pull request review comment release-engineering/iib

Inspect index image to check if bundles are present

+            # process has terminated, something went wrong
+            else:
+                break
+        if registry_initialized:
+            log.info('Index registry service has been initialized.')
+            break

Can we just immediately return here? We could do away with the else below.
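A sketch of the early-return shape, assuming the enclosing function is meant to return the (port, process) pair as its docstring says (the helper names are illustrative):

```python
# Returning as soon as a port works removes the `registry_initialized`
# flag and the outer break/else dance.
def probe_ports(ports, try_serve):
    for port in ports:
        proc = try_serve(port)  # returns a process handle, or None on failure
        if proc is not None:
            return port, proc  # immediate return; no flag needed
    raise RuntimeError('all ports exhausted')
```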

querti

comment created time in 11 days

Pull request review comment release-engineering/iib

Inspect index image to check if bundles are present

+            time.sleep(conf['iib_grpc_init_wait_time'])
+            ret = rpc_proc.poll()
+            # process hasn't terminated
+            if ret is None:

Consider switching the condition here to reduce the amount of nested code. For example:

if ret is not None:
  break

# No need for an else-block
stdout = get_running_subprocess_output(rpc_proc.stdout)
...
querti

comment created time in 11 days

Pull request review comment release-engineering/iib

Inspect index image to check if bundles are present

 The custom configuration options for the Celery workers are listed below:

   file though. This defaults to `~/.docker/config.json.template`.
 * `iib_greenwave_url` - the URL to the Greenwave REST API if gating is desired
   (e.g. `https://greenwave.domain.local/api/v1.0/`). This defaults to `None`.
+* `iib_grpc_init_wait_time` - time to wait for the index image service to be initialized. This
+  defaults to `1` second.
+* `iib_grpc_max_port_tries` - maximum ports to try when initializing the index image service.
+  This defaults to `100` tries.
+* `iib_grpc_start_port` - first port to try when starting the service (subsequent are increments).
+  This defaults to `50051`.
+* `iib_grpc_max_tries` - maximum number of times to try to start the index image service
+  before giving up. This defaults to `5` attempts.
+* `iib_grpc_wait_time` - time to wait between checking if index image service has initialized.

Should there be an iib_grpc_max_wait_time in case the index image service is never initialized?
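If such a cap were added (the iib_grpc_max_wait_time name is only a suggestion, not an existing option), the polling could be bounded by a deadline; a rough sketch:

```python
import time

def wait_for_init(is_ready, max_wait_time, wait_time):
    """Poll is_ready() until it returns True or max_wait_time seconds elapse."""
    deadline = time.monotonic() + max_wait_time
    while time.monotonic() < deadline:
        if is_ready():
            return True
        time.sleep(wait_time)
    return False  # service never initialized within the allotted time
```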

querti

comment created time in 11 days

PullRequestReviewEvent
PullRequestReviewEvent

pull request comment containerbuildsystem/operator-manifest

Rebase with atomic reactor

@tkdchen, @MartinBasti, @chmeliik could you PTAL?

This will be used by IIB.

lcarva

comment created time in 12 days

create branch lcarva/operator-manifest

branch : rebase-with-atomic-reactor

created branch time in 12 days

Pull request review comment operator-framework/enhancements

describe a process for resolving image references in manifests

+---
+title: image-references-in-operator-bundles
+authors:
+  - "@stevekuznetsov"
+reviewers:
+  - "@jwforres"
+  - "@shawn-hurley"
+  - "@gallettilance"
+  - "@lcarva"
+approvers:
+  - "@jwforres"
+  - "@shawn-hurley"
+creation-date: 2020-05-17
+last-updated: 2020-05-17
+status: implementable
+see-also:
+  - "/enhancements/olm/operator-bundle.md"
+---
+
+# Image References in Operator Bundle Manifests
+
+## Release Signoff Checklist
+
+- [ ] Enhancement is `implementable`
+- [ ] Design details are appropriately documented from clear requirements
+- [ ] Test plan is defined
+- [ ] Graduation criteria for dev preview, tech preview, GA
+- [ ] User-facing documentation is created in [openshift-docs](https://github.com/openshift/openshift-docs/)
+
+## Summary
+
+Manifests that make up an Operator Bundle for installation via the Operator
+Lifecycle Manager refer to one or more container images by their pull
+specifications: these container images define the operator and operands that
+the manifests deploy and manage. As with OpenShift release payloads, operator
+bundles must refer to images by digest in order to produce reproducible
+installations. A shared process to build operator bundles that replaces image
+references with fully-resolved pull specifications that pin images by digest
+must be built; this process must allow for a number of separate build systems
+to direct how these replacements occur in order to support a full-featured
+build and test strategy.
+
+## Motivation
+
+### Goals
+
+- there is one, canonical, method for building operator bundle images
+  from a directory of manifests
+- it is possible to perform image resolution and pinning separately from bundle
+  creation, but not the opposite
+- building an operator bundle image does not require the use of a container
+  runtime, elevated privileges or any capacities that are not present for
+  containerized workloads on OpenShift
+- it is as simple for a developer to build a bundle referring to test versions
+  of operand images as it is for a CI or productized build pipeline to create
+  bundle images for publication
+- operator manifest authors dictate the set of image references in manifests
+  that must be resolved and pinned
+- operator bundle images may be inspected to determine the pull specifications
+  that were used in the creation of the bundle
+- operator manifest authors must not be required to define the registry from
+  which any individual build system will resolve image references
+- upstream operator manifests must not be required to know how common names or
+  references change when built in a downstream pipeline
+
+### Non-Goals
+
+- no prescriptive statement is made about the specific format or contents of the
+  bundle image layers; any will be transparently supported as an output
+
+## Proposal
+
+### User Stories
+
+#### Story 1
+
+As an author of a manifest, I would like to check in manifests to my upstream
+repository that are self-consistent, valid and make no assumptions about the
+build system that will eventually create a bundle image with them.
+
+#### Story 2
+
+As an author of an operand, I would like to create a bundle locally in order
+to test my operator end-to-end on a cluster of my choosing without having to
+edit the core configuration for the operator.
+
+#### Story 3
+
+As an author of a build system, I would like to operate with tooling that allows
+me to clearly define the source of truth for image digests in order to keep the
+build-system-specific configuration to an absolute minimum.
+
+#### Story 4
+
+As an engineer involved in publishing an optional operator, I would like to
+configure semantically equivalent image pull specifications once, in order to not
+need to configure each build system independently.
+
+### Implementation Details/Notes/Constraints
+
+The core problems that must be solved in the implementation of this proposal have
+already been handled in the workflow used in `oc adm release new`. When implementing
+improvements to the `operator-sdk generate bundle` process we will simply need to
+create a shared library for the two tools to use. While the shape of the output
+is slightly different and some of the semantics about how the output should be
+formatted are dissimilar, the core image reference rewriting is identical and the
+process of building a `FROM scratch` image layer is also identical.
+
+Today, some prior art exists in the OSBS, IIB and CVP workflows for building
+operator bundles. As we improve the Operator SDK tooling to create a straightforward
+process for creating bundle images, we must make sure a seamless migration is possible.
+
+### Risks and Mitigations
+
+It will be critical that the design be vetted by all of the concerned parties, from
+operator manifest authors to CI system authors and productized pipeline authors to
+ensure that the UX is appropriate in all cases. Furthermore, the largest risk in the
+implementation here is not prioritizing a clean migration pathway for all current
+users who create bundle images, which would lead to further fragmentation of the
+ecosystem, which is directly opposed to the goal of this enhancement.
+
+## Design Details
+
+The definition of a minimally-viable operator bundle image will be changed to
+ensure that all image references in the contained manifests have been resolved
+to a digest and had the pull specifications rewritten to refer to those digests.
+
+The only acceptable process for creating an operator bundle will be to run the
+`operator-sdk generate bundle` CLI, providing the manifests, metadata and image
+sources as input to the creation process.
+
+### Proposed UX
+
+A set of new files in the `metadata/` directory of a bundle will be authored by
+the manifest authors and build systems to record image replacement intent and
+execution.
+
+Operator manifest authors write a manifest that refers to images using some
+opaque string, and provide an `image-references.yaml` file alongside their
+manifests that declares which strings inside of their manifest are referring
+to pull specifications of images and names each occurrence.
+
+The `metadata/image-references.yaml` file holds data in the following format, mapping
+common names of container images to their string placeholders in the manifests:
+
+```yaml
+imageReferences:
+- name: common-name
+  substitute: registry.svc.ci.openshift.org/openshift:image
+```
+
+This mapping, therefore, defines what needs to be replaced in manifests when
+they are bundled and identifies each replacement with a name. When
+`operator-sdk generate bundle` runs, it will require as input a second mapping
+from those names to literal image pull specifications and will run the replacement.
+In this manner, the configuration provided by the manifest author remains static
+regardless of the eventual replacement that a build system will execute.
+
+The second mapping required at build-time, known as `image-replacements.yaml`, will take
+a similar form, mapping the common names of images to their explicitly resolved pull
+specifications. This file will have the following format:
+
+```yaml
+imagePullSpecs:
+- name: common-name
+  pullSpec: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:298d0b496f67a0ec97a37c6e930d76db9ae69ee6838570b0cd0c688f07f53780
+```
+
+`operator-sdk generate bundle` will use this chain of mapping to perform replacements
+in the manifests before creating a bundle image layer. The layer creation will also
+be shared logic with `oc adm release new` in order to allow both processes to build
+image layers without requiring the use of a container runtime, other build system, any
+elevated permissions, privileges, capacities or SELinux roles. As the output image
+layer in both cases is `FROM scratch` and simply contains manifest data, this build
+process is simple and producing the layer by creating the underlying tar bundle does
+not come with risks.
+
+The bundle creation process will commit the `image-replacements.yaml` file into the output
+bundle, and will furthermore create as output a `release-metadata.yaml` metadata file
+that will expose any further build-time inputs used to create the bundle, so that downstream
+consumers can access this data as necessary. The format for the `release-metadata.yaml` file
+is a loose key-value store:
+
+```yaml
+metadata:
+  some-key: some-value
+```
+
+The `image-replacements.yaml` content will also be injected into the CSV using the extant
+`relatedImages` stanza for backwards compatibility, but this data is not expected to be

Yes, populating relatedImages automatically is a great direction.

Could you clarify:

  1. What if relatedImages is already defined in the CSV in source control? Is it overwritten, merged, or is an error raised?
  2. If it is merged or kept rather than overwritten, will image replacements in a pre-existing relatedImages block be supported?
  3. There could be cases where populating relatedImages is not desirable, but image replacements are. Would there be an option to turn off populating the relatedImages block?

Perhaps this is too low level for this design.
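For context, the extant relatedImages stanza referred to above sits in the CSV spec and maps a name to an image; the values below are illustrative only:

```yaml
# ClusterServiceVersion excerpt (illustrative values)
spec:
  relatedImages:
  - name: common-name
    image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:298d0b496f67a0ec97a37c6e930d76db9ae69ee6838570b0cd0c688f07f53780
```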

stevekuznetsov

comment created time in 16 days

Pull request review comment operator-framework/enhancements

describe a process for resolving image references in manifests

+---+title: image-references-in-operator-bundles+authors:+  - "@stevekuznetsov"+reviewers:+  - "@jwforres"+  - "@shawn-hurley"+  - "@gallettilance"+  - "@lcarva"+approvers:+  - "@jwforres"+  - "@shawn-hurley"+creation-date: 2020-05-17+last-updated: 2020-05-17+status: implementable+see-also:+  - "/enhancements/olm/operator-bundle.md"+---++# Image References in Operator Bundle Manifests++## Release Signoff Checklist++- [ ] Enhancement is `implementable`+- [ ] Design details are appropriately documented from clear requirements+- [ ] Test plan is defined+- [ ] Graduation criteria for dev preview, tech preview, GA+- [ ] User-facing documentation is created in [openshift-docs](https://github.com/openshift/openshift-docs/)++## Summary++Manifests that make up an Operator Bundle for installation via the Operator+Lifecycle Manager refer to one or more container images by their pull+specifications: these container images define the operator and operands that+the manifests deploy and manage. As with OpenShift release payloads, operator+bundles must refer to images by digest in order to produce reproducible+installations. 
A shared process to build operator bundles that replaces image+references with fully-resolved pull specifications that pin images by digest+must be built; this process must allow for a number of separate build systems+to direct how these replacements occur in order to support a full-featured+build and test strategy.++## Motivation++### Goals++- there is one, canonical, method for building an operator bundle images+  from a directory of manifests+- it is possible to perform image resolution and pinning separate from bundle+  creation, but not the opposite+- building an operator bundle image does not require the use of a container+  runtime, elevated privileges or any capacities that are not present for +  containerized workloads on OpenShift+- it is as simple for a developer to build a bundle referring to test versions+  of operand images as it is for a CI or productized build pipleine to create+  bundle images for publication+- operator manifest authors dictate the set of image references in manifests+  that must be resolved and pinned+- operator bundle images may be inspected to determine the pull specifications+  that were used in the creation of the bundle+- operator manifest authors must not be required to define the registry from+  which any individual build system will resolve image references+- upstream operator manifests must not be required to know how common names or+  references change when built in a downstream pipeline++### Non-Goals++- no prescriptive statement is made about the specific format or contents of the+  bundle image layers; any will be transparently supported as an output++## Proposal++### User Stories++#### Story 1++As an author of a manifest, I would like to check in manifests to my upstream+repository that are self-consistent, valid and make no assumptions about the+build system that will eventually create a bundle image with them.++#### Story 2++As an author of an operand, I would like to create a bundle locally in order+to test my 
operator end-to-end on a cluster of my choosing without having to+edit the core configuration for the operator.++#### Story 3++As an author of a build system, I would like to operate with tooling that allows+me to clearly define the source of truth for image digests in order to keep the+build-system-specific configuration to an absolute minimum.++#### Story 4++As an engineer involved in publishing an optional operator, I would like to+configure semantically equivalent image pull specifications once, in order to not+need to configure each build system independently.++### Implementation Details/Notes/Constraints++The core problems that must be solved in the implementation of this proposal have+already been handled in the workflow used in `oc adm release new`. When implementing+improvements to the `operator-sdk generate bundle` process we will simply need to+create a shared library for the two tools to use. While the shape of the output+is slightly different and some of the semantics about how the output should be +formatted are dissmilar, the core image reference rewriting is identical and the+process of building a `FROM scratch` image layer is also identical.++Today, some prior art exists in the OSBS, IIB and CVP workflows for building+operator bundles. As we improve the Operator SDK tooling to create a straghtforward+process for creating bundle images, we must make sure a seamless migration is possible.++### Risks and Mitigations++It will be critical that the design be vetted by all of the concerned parties, from+operator manifest authors to CI system authors and productized pipeline authors to+ensure that the UX is appropriate in all cases. 
Furthermore, the largest risk in the+implementation here is not prioritizing a clean migration pathway for all current+users who create bundle images, which would lead to further fragmentation of the+ecosystem, which is directly opposed to the goal of this enhancement.++## Design Details++The definition of a minimally-viable operator bundle image will be changed to+ensure that all image references in the contained manifests have been resolved +to a digest and had the pull specifications rewritten to refer to those digests.++The only acceptable process for creating an operator bundle will be to run the+`operator-sdk generate bundle` CLI, providing the manifests, metadata and image+sources as input to the creation process.++### Proposed UX++A set of new files in the `metadata/` directory of a bundle will be authored by+the manifest authors and build systems to record image replacement intent and+execution.++Operator manifest authors write a manifest that refers to images using some+opaque string, and provide an `image-references.yaml` file alongside their+manifests that declares which strings inside of their manifest are referring+to pull specifications of images and names each occurence.++The `metadata/image-references.yaml` file holds data in the following format, mapping+common names of container images to their string placeholders in the manifests:++```yaml+imageReferences:+- name: common-name+  substitute: registry.svc.ci.openshift.org/openshift:image+```++This mapping, therefore, defines what needs to be replaced in manifests when+they are bundled and identifies each replacement with a name. 
When+`operator-sdk generate bundle` runs, it will require as input a second mapping+from those names to literal image pull specifications and will run the replacement.+In this manner, the configuration provided by the manifest author remains static+regardless of the eventual replacement that a build system will execute.++The second mapping required at build-time, known as `image-replacements.yaml`, will take+a similar form, mapping the common names of images to their explicitly resolved pull+specifications. This file will have the following format:++```yaml+imagePullSpecs:+- name: common-name+  pullSpec: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:298d0b496f67a0ec97a37c6e930d76db9ae69ee6838570b0cd0c688f07f53780+```
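The chained lookup described above can be sketched in Python (a minimal illustration, not the actual `operator-sdk` implementation; `replace_pullspecs` is a hypothetical helper, and the dicts mirror the two YAML files):

```python
# Hypothetical sketch: common name -> placeholder -> resolved pull spec.
image_references = {
    "imageReferences": [
        {
            "name": "common-name",
            "substitute": "registry.svc.ci.openshift.org/openshift:image",
        }
    ]
}
image_replacements = {
    "imagePullSpecs": [
        {
            "name": "common-name",
            "pullSpec": (
                "quay.io/openshift-release-dev/ocp-v4.0-art-dev"
                "@sha256:298d0b496f67a0ec97a37c6e930d76db9ae69ee6838570b0cd0c688f07f53780"
            ),
        }
    ]
}

def replace_pullspecs(manifest_text, references, replacements):
    """Rewrite every named placeholder in the manifest with its resolved pull spec."""
    resolved = {e["name"]: e["pullSpec"] for e in replacements["imagePullSpecs"]}
    for ref in references["imageReferences"]:
        manifest_text = manifest_text.replace(ref["substitute"], resolved[ref["name"]])
    return manifest_text

manifest = "image: registry.svc.ci.openshift.org/openshift:image"
result = replace_pullspecs(manifest, image_references, image_replacements)
print(result)
```

Note that the manifest author's file never changes; only the build system's `imagePullSpecs` input varies between pipelines.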

In most current cases, the maintainer provides a pull spec to a tag which the build system resolves to a digest. Having the build system do the digest pinning greatly improves the user experience.

I'm wondering how the build system could make the jump from registry.svc.ci.openshift.org/openshift:image to quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:298d0b496f67a0ec97a37c6e930d76db9ae69ee6838570b0cd0c688f07f53780. It seems like we need an intermediate pullspec, e.g. quay.io/openshift-release-dev/ocp-v4.0-art-dev:latest.

One possibility is for maintainers, the ones using the build system, to provide a non-digest pullspec in image-replacements.yaml. Then, the build system could simply resolve the unresolved pullspecs in it at build time.

Does that make sense?
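That build-time resolution step could look roughly like this (a sketch assuming the `skopeo` CLI is available; `resolve_to_digest` is a hypothetical helper, not existing IIB or Cachito code):

```python
import json
import subprocess

def resolve_to_digest(pull_spec):
    """Resolve a tag-based pull spec to a digest-pinned one.

    Already-pinned pull specs pass through untouched; anything else is
    looked up in the registry via `skopeo inspect`.
    """
    if "@sha256:" in pull_spec:
        return pull_spec  # already pinned by digest
    inspect = json.loads(
        subprocess.check_output(["skopeo", "inspect", f"docker://{pull_spec}"])
    )
    repo = pull_spec.rsplit(":", 1)[0]  # strip the tag
    return f"{repo}@{inspect['Digest']}"
```

With this, maintainers could keep a tag such as `quay.io/openshift-release-dev/ocp-v4.0-art-dev:latest` in `image-replacements.yaml` and let the build system pin it at build time.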

stevekuznetsov

comment created time in 16 days

Pull request review comment operator-framework/enhancements

describe a process for resolving image references in manifests

+---+title: image-references-in-operator-bundles+authors:+  - "@stevekuznetsov"+reviewers:+  - "@jwforres"+  - "@shawn-hurley"+  - "@gallettilance"+  - "@lcarva"+approvers:+  - "@jwforres"+  - "@shawn-hurley"+creation-date: 2020-05-17+last-updated: 2020-05-17+status: implementable+see-also:+  - "/enhancements/olm/operator-bundle.md"+---++# Image References in Operator Bundle Manifests++## Release Signoff Checklist++- [ ] Enhancement is `implementable`+- [ ] Design details are appropriately documented from clear requirements+- [ ] Test plan is defined+- [ ] Graduation criteria for dev preview, tech preview, GA+- [ ] User-facing documentation is created in [openshift-docs](https://github.com/openshift/openshift-docs/)++## Summary++Manifests that make up an Operator Bundle for installation via the Operator+Lifecycle Manager refer to one or more container images by their pull+specifications: these container images define the operator and operands that+the manifests deploy and manage. As with OpenShift release payloads, operator+bundles must refer to images by digest in order to produce reproducible+installations. 
A shared process to build operator bundles that replaces image+references with fully-resolved pull specifications that pin images by digest+must be built; this process must allow for a number of separate build systems+to direct how these replacements occur in order to support a full-featured+build and test strategy.++## Motivation++### Goals++- there is one, canonical, method for building an operator bundle image+  from a directory of manifests+- it is possible to perform image resolution and pinning separate from bundle+  creation, but not the opposite+- building an operator bundle image does not require the use of a container+  runtime, elevated privileges or any capacities that are not present for +  containerized workloads on OpenShift+- it is as simple for a developer to build a bundle referring to test versions+  of operand images as it is for a CI or productized build pipeline to create+  bundle images for publication+- operator manifest authors dictate the set of image references in manifests+  that must be resolved and pinned+- operator bundle images may be inspected to determine the pull specifications+  that were used in the creation of the bundle+- operator manifest authors must not be required to define the registry from+  which any individual build system will resolve image references+- upstream operator manifests must not be required to know how common names or+  references change when built in a downstream pipeline++### Non-Goals++- no prescriptive statement is made about the specific format or contents of the+  bundle image layers; any will be transparently supported as an output++## Proposal++### User Stories++#### Story 1++As an author of a manifest, I would like to check in manifests to my upstream+repository that are self-consistent, valid and make no assumptions about the+build system that will eventually create a bundle image with them.++#### Story 2++As an author of an operand, I would like to create a bundle locally in order+to test my
operator end-to-end on a cluster of my choosing without having to+edit the core configuration for the operator.++#### Story 3++As an author of a build system, I would like to operate with tooling that allows+me to clearly define the source of truth for image digests in order to keep the+build-system-specific configuration to an absolute minimum.++#### Story 4++As an engineer involved in publishing an optional operator, I would like to+configure semantically equivalent image pull specifications once, in order to not+need to configure each build system independently.++### Implementation Details/Notes/Constraints++The core problems that must be solved in the implementation of this proposal have+already been handled in the workflow used in `oc adm release new`. When implementing+improvements to the `operator-sdk generate bundle` process we will simply need to+create a shared library for the two tools to use. While the shape of the output+is slightly different and some of the semantics about how the output should be +formatted are dissimilar, the core image reference rewriting is identical and the+process of building a `FROM scratch` image layer is also identical.++Today, some prior art exists in the OSBS, IIB and CVP workflows for building+operator bundles. As we improve the Operator SDK tooling to create a straightforward+process for creating bundle images, we must make sure a seamless migration is possible.++### Risks and Mitigations++It will be critical that the design be vetted by all of the concerned parties, from+operator manifest authors to CI system authors and productized pipeline authors to+ensure that the UX is appropriate in all cases.
Furthermore, the largest risk in the+implementation here is not prioritizing a clean migration pathway for all current+users who create bundle images, which would lead to further fragmentation of the+ecosystem, which is directly opposed to the goal of this enhancement.++## Design Details++The definition of a minimally-viable operator bundle image will be changed to+ensure that all image references in the contained manifests have been resolved +to a digest and had the pull specifications rewritten to refer to those digests.++The only acceptable process for creating an operator bundle will be to run the+`operator-sdk generate bundle` CLI, providing the manifests, metadata and image+sources as input to the creation process.++### Proposed UX++A set of new files in the `metadata/` directory of a bundle will be authored by+the manifest authors and build systems to record image replacement intent and+execution.++Operator manifest authors write a manifest that refers to images using some+opaque string, and provide an `image-references.yaml` file alongside their+manifests that declares which strings inside of their manifest are referring+to pull specifications of images and names each occurrence.++The `metadata/image-references.yaml` file holds data in the following format, mapping+common names of container images to their string placeholders in the manifests:++```yaml+imageReferences:+- name: common-name+  substitute: registry.svc.ci.openshift.org/openshift:image+```++This mapping, therefore, defines what needs to be replaced in manifests when+they are bundled and identifies each replacement with a name.
When+`operator-sdk generate bundle` runs, it will require as input a second mapping+from those names to literal image pull specifications and will run the replacement.+In this manner, the configuration provided by the manifest author remains static+regardless of the eventual replacement that a build system will execute.++The second mapping required at build-time, known as `image-replacements.yaml`, will take+a similar form, mapping the common names of images to their explicitly resolved pull+specifications. This file will have the following format:++```yaml+imagePullSpecs:+- name: common-name+  pullSpec: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:298d0b496f67a0ec97a37c6e930d76db9ae69ee6838570b0cd0c688f07f53780+```++`operator-sdk generate bundle` will use this chain of mapping to perform replacements+in the manifests before creating a bundle image layer. The layer creation will also+be shared logic with `oc adm release new` in order to allow both processes to build+image layers without requiring the use of a container runtime, other build system, any+elevated permissions, privileges, capacities or SELinux roles. As the output image+layer in both cases is `FROM scratch` and simply contains manifest data, this build+process is simple and producing the layer by creating the underlying tar bundle does+not come with risks.++The bundle creation process will commit the `image-replacements.yaml` file into the output+bundle, and will furthermore create as output a `release-metadata.yaml` metadata file+that will expose any further build-time inputs used to create the bundle, so that downstream+consumers can access this data as necessary. The format for the `release-metadata.yaml` file+is a loose key-value store:++```yaml+metadata:+  some-key: some-value+```++The `image-replacments.yaml` content will also be injected into the CSV using the extant

Missing "e" in image-replacments.yaml

stevekuznetsov

comment created time in 16 days

PullRequestReviewEvent

Pull request review comment operator-framework/enhancements

describe a process for resolving image references in manifests

+---+title: image-references-in-operator-bundles+authors:+  - "@stevekuznetsov"+reviewers:+  - "@jwforres"+  - "@shawn-hurley"+  - "@gallettilance"+  - "@lcarva"+approvers:+  - "@jwforres"+  - "@shawn-hurley"+creation-date: 2020-05-17+last-updated: 2020-05-17+status: implementable+see-also:+  - "/enhancements/olm/operator-bundle.md"+---++# Image References in Operator Bundle Manifests++## Release Signoff Checklist++- [ ] Enhancement is `implementable`+- [ ] Design details are appropriately documented from clear requirements+- [ ] Test plan is defined+- [ ] Graduation criteria for dev preview, tech preview, GA+- [ ] User-facing documentation is created in [openshift-docs](https://github.com/openshift/openshift-docs/)++## Summary++Manifests that make up an Operator Bundle for installation via the Operator+Lifecycle Manager refer to one or more container images by their pull+specifications: these container images define the operator and operands that+the manifests deploy and manage. As with OpenShift release payloads, operator+bundles must refer to images by digest in order to produce reproducible+installations. 
A shared process to build operator bundles that replaces image+references with fully-resolved pull specifications that pin images by digest+must be built; this process must allow for a number of separate build systems+to direct how these replacements occur in order to support a full-featured+build and test strategy.++## Motivation++### Goals++- there is one, canonical, method for building an operator bundle image+  from a directory of manifests+- it is possible to perform image resolution and pinning separate from bundle+  creation, but not the opposite+- building an operator bundle image does not require the use of a container+  runtime, elevated privileges or any capacities that are not present for +  containerized workloads on OpenShift+- it is as simple for a developer to build a bundle referring to test versions+  of operand images as it is for a CI or productized build pipeline to create+  bundle images for publication+- operator manifest authors dictate the set of image references in manifests+  that must be resolved and pinned+- operator bundle images may be inspected to determine the pull specifications+  that were used in the creation of the bundle+- operator manifest authors must not be required to define the registry from+  which any individual build system will resolve image references+- upstream operator manifests must not be required to know how common names or+  references change when built in a downstream pipeline++### Non-Goals++- no prescriptive statement is made about the specific format or contents of the+  bundle image layers; any will be transparently supported as an output++## Proposal++### User Stories++#### Story 1++As an author of a manifest, I would like to check in manifests to my upstream+repository that are self-consistent, valid and make no assumptions about the+build system that will eventually create a bundle image with them.++#### Story 2++As an author of an operand, I would like to create a bundle locally in order+to test my
operator end-to-end on a cluster of my choosing without having to+edit the core configuration for the operator.++#### Story 3++As an author of a build system, I would like to operate with tooling that allows+me to clearly define the source of truth for image digests in order to keep the+build-system-specific configuration to an absolute minimum.++#### Story 4++As an engineer involved in publishing an optional operator, I would like to+configure semantically equivalent image pull specifications once, in order to not+need to configure each build system independently.++### Implementation Details/Notes/Constraints++The core problems that must be solved in the implementation of this proposal have+already been handled in the workflow used in `oc adm release new`. When implementing

I'm not familiar with the oc adm release new command. Are there docs that explain this process further than the output of the --help parameter?

stevekuznetsov

comment created time in 16 days

PullRequestReviewEvent
PullRequestReviewEvent

Pull request review comment release-engineering/cachito

Implement support for downloading URL pip deps

 def download_dependencies(request_id, requirements_file):             download_info = _download_vcs_package(                 req, bundle_dir.pip_deps_dir, pip_raw_repo_name, nexus_auth             )+        elif req.kind == "url":+            download_info = _download_url_package(+                req, bundle_dir.pip_deps_dir, pip_raw_repo_name, nexus_auth+            )         else:-            log.warning("Dependency type not yet supported: %s", req.download_line)-            continue+            # Should not happen+            raise RuntimeError(f"Unexpected requirement kind: {req.kind!r}")          log.info(             "Successfully downloaded %s to %s",             req.download_line,             download_info["path"].relative_to(bundle_dir),         ) +        # TODO: Always verify URL requirements?

We should verify the hash before uploading to nexus. A hash mismatch could indicate a wide range of issues, from user misconfiguration to a malicious attack.
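A minimal sketch of that pre-upload check (hypothetical helper, not Cachito's actual code):

```python
import hashlib

def verify_sha256(path, expected_digest):
    """Check a downloaded artifact against its declared sha256 before
    uploading it to Nexus; a mismatch may mean corruption, user
    misconfiguration, or tampering."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large artifacts don't need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    if h.hexdigest() != expected_digest:
        raise ValueError(
            f"sha256 mismatch for {path}: "
            f"expected {expected_digest}, got {h.hexdigest()}"
        )
```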

chmeliik

comment created time in 17 days

pull request comment release-engineering/iib

Inspect index image to check if bundles are present

I would still suggest using the hash comparison approach. It's most likely to have fewer false negatives.

@lcarva Can I please ask if you have encountered any such instances before?

Sorry, instances of what? :smiley:

querti

comment created time in 17 days

pull request comment release-engineering/iib

Use the version of OPM based on the ocp version label of the index

Should we have a mechanism for setting up a default binary? Maybe that's not necessary.

Let's have the corresponding changes in the [dev environment](https://github.com/release-engineering/iib/blob/master/docker/Dockerfile-workers) though. Otherwise, it'll break.

shawn-hurley

comment created time in 17 days

Pull request review comment release-engineering/cachito

Add gomod environment variable: GOSUMDB

 def fetch_gomod_source(request_id, dep_replacements=None):     env_vars = {         "GOCACHE": {"value": "deps/gomod", "kind": "path"},         "GOPATH": {"value": "deps/gomod", "kind": "path"},+        "GOSUMDB": {"value": "off", "kind": "literal"},     }     env_vars.update(config.cachito_default_environment_variables.get("gomod", {}))

For the record, I'll write my understanding of the two mechanisms.

env_vars, as defined directly in the Cachito code base, is meant to capture any environment variables that must be set and are not configurable. For example, GOPATH=deps/gomod cannot hold a different value because it maps to where within the tarball the Go module dependencies are found. Changing this value means changing how the source tarball is constructed.

cachito_default_environment_variables has a similar purpose but it's meant for items that could be configured. See the default NPM values for example. Adding GOSUMDB=off to this makes sense because it depends on the pipeline environment.

Hopefully this makes it clearer!
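A condensed sketch of how the two mechanisms layer, mirroring the diff above (the config value shown is illustrative):

```python
# Deployment-configurable defaults, e.g. from Cachito's config file.
cachito_default_environment_variables = {
    "gomod": {"GOSUMDB": {"value": "off", "kind": "literal"}},
}

env_vars = {
    # Hard-coded: these encode where dependencies live inside the source
    # tarball, so changing them would change the tarball layout itself.
    "GOCACHE": {"value": "deps/gomod", "kind": "path"},
    "GOPATH": {"value": "deps/gomod", "kind": "path"},
}
# Configurable defaults layer on top and may add entries such as GOSUMDB
# that depend on the pipeline environment.
env_vars.update(cachito_default_environment_variables.get("gomod", {}))
print(sorted(env_vars))
```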

akhmanova

comment created time in 20 days

PullRequestReviewEvent

Pull request review comment release-engineering/operators-manifests-push-service

Added missing requirements for operator-courier

 RUN if [ "$cacert_url" != "undefined" ]; then \ # This will allow a non-root user to install a custom root CA at run-time RUN chmod 777 /etc/pki/tls/certs/ca-bundle.crt COPY . .-RUN pip3 install --require-hashes --no-deps -r requirements-operator-courier.txt+RUN pip3 install --require-hashes -r requirements-operator-courier.txt

Why remove --no-deps? Omitting it may cause unexpected packages to be installed.

midnightercz

comment created time in 24 days

PullRequestReviewEvent

pull request comment release-engineering/iib

Add support for podman compose

@lcarva Is this still a draft? Or is it ready for review?

I think the one thing I'd like to do is create a little Makefile so it's a little bit easier to run, and document how to use podman-compose vs docker-compose. I'll do that in the next couple of weeks.

lcarva

comment created time in 25 days

pull request comment containerbuildsystem/atomic-reactor

Export operator manifest metadata

Odd python2 error in unit tests

MartinBasti

comment created time in a month

PullRequestReviewEvent
PullRequestReviewEvent

Pull request review comment containerbuildsystem/atomic-reactor

Export operator manifest metadata

 def run_in_orchestrator(self):         compute their replacements and set build arg for worker.          Exclude CSVs which already have a relatedImages section.++        Returns operator metadata in format+        related_images:+          pullspecs:  # list of all related_images_pullspecs+            - original: <original-pullspec1>  # original pullspec in CSV file+              new: <new pullspec>   # new pullspec computed by this plugin+              pinned: <bool>  # plugin pinned tag to digest+              replaced: <bool>  # plugin modified pullspec (repo/registry/tag changed)+            - original: ........+          created_by_osbs: <bool>         """+        related_images_metadata = {+            'pullspecs': [],+            'created_by_osbs': True,+        }+        operator_manifests_metadata = {+            'related_images': related_images_metadata+        }+         operator_manifest = self._get_operator_manifest()-        pullspecs = self._get_pullspecs(operator_manifest)+        if operator_manifest.csv:

I wanna say yes, but could we quickly verify that this won't break any existing builds?

MartinBasti

comment created time in a month

PullRequestReviewEvent
PullRequestReviewEvent
PullRequestReviewEvent

Pull request review comment release-engineering/iib

Adding ability to update binary image of index image from Add

 services:         cp /host-ca-bundle.crt /etc/pki/tls/certs/ca-bundle.crt &&         cat /registry-certs/root_ca.crt >> /etc/pki/tls/certs/ca-bundle.crt &&         podman login --authfile ~/.docker/config.json.template -u iib \-        -p iibpassword registry:8443 &&+          -p iibpassword registry:8443 &&

Nit: it would be nice to use two spaces to stay consistent with --celery two lines below.

shawn-hurley

comment created time in a month

Pull request review comment release-engineering/iib

Adding ability to update binary image of index image from Add

 def from_json(cls, kwargs, batch=None):         request_kwargs = deepcopy(kwargs)          bundles = request_kwargs.get('bundles', [])-        if (-            not isinstance(bundles, list)-            or len(bundles) == 0-            or any(not item or not isinstance(item, str) for item in bundles)+        if not isinstance(bundles, list) or any(+            not item or not isinstance(item, str) for item in bundles         ):-            raise ValidationError(f'"bundles" should be a non-empty array of strings')+            raise ValidationError(+                '"bundles" should be either an empty array or an array ofnon-empty strings'

s/ofnon/of non

shawn-hurley

comment created time in a month

push event lcarva/cachito

Luiz Carvalho

commit sha ccd5011c0b2accb4b5bb6d71e15b8d808a7c7106

Add cert auth to integration tests Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

push time in a month

Pull request review comment release-engineering/cachito

Add cert auth to integration tests

 def create_new_request(self, payload):         :rtype: Response         :raises requests.exceptions.HTTPError: if the request to the Cachito API fails         """-        authentication_mapping = {"kerberos": HTTPKerberosAuth()}         resp = requests.post(             f"{self._cachito_api_url}/requests",-            auth=authentication_mapping.get(self._cachito_api_auth_type),             headers={"Content-Type": "application/json"},             json=payload,+            **self._get_authentication_params()

Doh! Rookie mistake...

lcarva

comment created time in a month

PullRequestReviewEvent

delete branch nirzari/cachito_test_repo

delete branch : test-WxN0nciVay

delete time in a month

create branch nirzari/cachito_test_repo

branch : test-WxN0nciVay

created branch time in a month

create branch nirzari/cachito_test_repo

branch : test-dEtZYT75js

created branch time in a month

delete branch nirzari/cachito_test_repo

delete branch : test-dRYzRkLF1U

delete time in a month

create branch nirzari/cachito_test_repo

branch : test-dRYzRkLF1U

created branch time in a month

PR opened release-engineering/cachito

Reviewers
Add cert auth to integration tests

Signed-off-by: Luiz Carvalho lucarval@redhat.com

+20 -3

0 comment

3 changed files

pr created time in a month

create branch lcarva/cachito

branch : cert-auth-tests

created branch time in a month

PR opened release-engineering/cachito

Re-arrange Feature Support table

We're more likely to have more features than supported package managers. Transpose the table so it better represents this.

Signed-off-by: Luiz Carvalho lucarval@redhat.com

+10 -4

0 comment

1 changed file

pr created time in a month

create branch lcarva/cachito

branch : arrange-pkg-mgrs-table

created branch time in a month

issue comment thelounge/thelounge

Typing messages slow in chrome with a lot of history

Custom CSS fix works for me. Thank you so much!

jhd85

comment created time in a month

issue opened thelounge/thelounge

Slow typing response

  • Node version: v12.18.3
  • Browser version: Version 84.0.4147.125 (Official Build) (64-bit)
  • Device, operating system: Fedora 31 (Linux localhost.localdomain 5.7.15-100.fc31.x86_64 #1 SMP Tue Aug 11 17:18:01 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux)
  • The Lounge version: v4.1.0

When I type a message, there's a really long delay between hitting a key and the character being displayed on the screen. The faster I type, the slower it gets.

I'm using the docker.io/thelounge/thelounge:latest (sha256:f4b6e0831190b35596fb16bb3c683695338ed9a4ad5b4038270ea7f0430cba2a) image to host the server.

The profiler looks like this: (three profiler screenshots attached)

created time in a month

PullRequestReviewEvent

Pull request review comment containerbuildsystem/atomic-reactor

Export operator manifest metadata

 def set_go_metadata(self, extra):             extra['image']['go'] = go      def set_operators_metadata(self, extra, worker_metadatas):+        # upload metadata from bundle (part of image)+        op_bundle_metadata = self.workflow.prebuild_results.get(PLUGIN_PIN_OPERATOR_DIGESTS_KEY)+        if op_bundle_metadata:+            op_related_images = op_bundle_metadata['related_images']+            pullspecs = [+                {+                    'original': p['original'].to_str(),

Should we use str instead of to_str here? It should have the same result, but I think it's a bit more succinct.
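A stand-in illustration of why the two are interchangeable (assumption: the real `ImageName` in `osbs.utils` implements `__str__` by delegating to `to_str()`):

```python
# Minimal stand-in for the real ImageName class, used only to show that
# str(p) and p.to_str() yield the same pull spec string.
class ImageName:
    def __init__(self, registry, repo, tag):
        self.registry = registry
        self.repo = repo
        self.tag = tag

    def to_str(self):
        return f"{self.registry}/{self.repo}:{self.tag}"

    def __str__(self):
        # Delegates to to_str(), so str() is the more succinct spelling.
        return self.to_str()

p = ImageName("old-registry", "ns/spam", "1")
assert str(p) == p.to_str()
```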

MartinBasti

comment created time in a month

Pull request review comment containerbuildsystem/atomic-reactor

Export operator manifest metadata

 def test_orchestrator(self, docker_tasker, tmpdir, caplog):         }         assert self._get_worker_arg(runner.workflow) == replacement_pullspecs +        expected_result = {+            'related_images': {+                'pullspecs': [+                    {+                        'original': ImageName.parse('old-registry/ns/spam:1'),+                        'new': ImageName.parse('new-registry/new-ns/new-spam@sha256:4'),+                        'pinned': True,+                        'replaced': True+                    }, {+                        'original': ImageName.parse('old-registry/ns/spam@sha256:4'),+                        'new': ImageName.parse('new-registry/new-ns/new-spam@sha256:4'),+                        'pinned': False,+                        'replaced': True+                    }, {+                        'original': ImageName.parse('private-registry/ns/baz:1'),+                        'new': ImageName.parse('public-registry/ns/baz@sha256:3'),+                        'pinned': True,+                        'replaced': True+                    }, {+                        'original': ImageName.parse('private-registry/ns/baz@sha256:3'),+                        'new': ImageName.parse('public-registry/ns/baz@sha256:3'),+                        'pinned': False,+                        'replaced': True+                    }, {+                        'original': ImageName.parse('registry.private.example.com/ns/foo:1'),+                        'new': ImageName.parse('registry.private.example.com/ns/foo@sha256:1'),+                        'pinned': True,+                        'replaced': True+                    }, {+                        'original': ImageName.parse('registry.private.example.com/ns/foo@sha256:1'),+                        'new': ImageName.parse('registry.private.example.com/ns/foo@sha256:1'),+                        'pinned': False,+                        'replaced': False+                    }, {+                        
'original': ImageName.parse('weird-registry/ns/bar:1'),+                        'new': ImageName(+                            registry='weird-registry', repo='new-bar', tag='sha256:2'),+                        'pinned': True,+                        'replaced': True+                    }, {+                        'original': ImageName.parse('weird-registry/ns/bar@sha256:2'),+                        'new': ImageName(+                            registry='weird-registry', repo='new-bar', tag='sha256:2'),+                        'pinned': False,+                        'replaced': True+                    },+                ],+                'created_by_osbs': True,+            }+        }++        assert result['pin_operator_digest'] == expected_result

Can we enhance test_orchestrator_no_pullspecs to verify the result is None? Right now it only verifies that arguments were not forwarded to the worker.

MartinBasti

comment created time in a month

Pull request review comment containerbuildsystem/atomic-reactor

Export operator manifest metadata

 def run_in_orchestrator(self):         compute their replacements and set build arg for worker.          Exclude CSVs which already have a relatedImages section.++        Returns operator metadata in format+        related_images:+          pullspecs:  # list of all related_images_pullspecs+            - original: <original-pullspec1>  # original pullspec in CSV file+              new: <new pullspec>   # new pullspec computed by this plugin+              pinned: <bool>  # plugin pinned digest

Can you clarify the difference between pinned and modified?

MartinBasti

comment created time in a month

Pull request review comment containerbuildsystem/atomic-reactor

Export operator manifest metadata

 def _get_replacement_pullspecs(self, pullspecs):              self.log.info("Final pullspec: %s", replaced) -            if replaced != original:-                replacements[original] = replaced+            replacements.append({+                'original': original,+                'new': replaced,+                'pinned': pinned,+                'replaced': replaced != original+            })          replacement_lines = "\n".join(-            "{} -> {}".format(p, replacements[p]) if p in replacements-            else "{} - no change".format(p)-            for p in pullspecs+            "{} -> {}".format(r['original'], r['new']) if r['replaced']

Consider "{original} -> {new}".format(**r) and "{original} - no change".format(**r)
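The suggested form, runnable in isolation (sample data is made up):

```python
# Unpack each replacement dict directly into str.format instead of
# indexing fields by hand.
replacements = [
    {"original": "old-registry/ns/spam:1",
     "new": "new-registry/new-ns/new-spam@sha256:4",
     "replaced": True},
    {"original": "registry.example.com/ns/foo@sha256:1",
     "new": "registry.example.com/ns/foo@sha256:1",
     "replaced": False},
]

replacement_lines = "\n".join(
    "{original} -> {new}".format(**r) if r["replaced"]
    else "{original} - no change".format(**r)
    for r in replacements
)
print(replacement_lines)
```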

MartinBasti

comment created time in a month

PullRequestReviewEvent
PullRequestReviewEvent
PullRequestReviewEvent

Pull request review comment release-engineering/iib

[WIP]: Adding ability to update binary image of index image from Add

 def test_add_bundle_from_index_and_add_arches_missing(mock_smfsc, db, auth_env,     mock_smfsc.assert_not_called()  -@pytest.mark.parametrize('overwrite_from_index', (False, True))+@pytest.mark.parametrize(

That's true. Ignore this.

shawn-hurley

comment created time in a month

Pull request review comment release-engineering/iib

[WIP]: Adding ability to update binary image of index image from Add

 def test_rm_operators_overwrite_not_allowed(mock_smfsc, client, db):     'data, error_msg',     (         (-            {'from_index': 'pull:spec', 'binary_image': 'binary:image', 'add_arches': ['s390x']},-            '"bundles" should be a non-empty array of strings',+            {'bundles': ['some:thing'], 'from_index': 'pull:spec', 'add_arches': ['s390x']},+            'Missing required parameter(s): binary_image',         ),         (-            {'bundles': ['some:thing'], 'from_index': 'pull:spec', 'add_arches': ['s390x']},+            {'from_index': 'pull:spec', 'add_arches': ['s390x']},             'Missing required parameter(s): binary_image',         ),+        (+            {'add_arches': ['s390x'], 'binary_image': 'binary:image'},+            '"from_index" and "binary_image" must be specified if no bundles are specified',

(Those two links you sent refer to the same line, which is the one right above my comment)

This use case you added covers when bundles is not provided, but only binary_image is provided. Good.

We also need:

  1. bundles not provided and only from_index is provided. But the kicker is that, even before these changes, if binary_image is not provided, then add_arches is required. So this test case should be: from_index and add_arches are provided and nothing else. This should get past the error "Missing required parameter(s): binary_image" and instead raise '"from_index" and "binary_image" must be specified if no bundles are specified'.
  2. None of bundles, from_index, binary_image are provided. For the same reason as the previous item, this would require the add_arches parameter to ensure the right error is being triggered.
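The rules from this thread can be sketched as a standalone function (hypothetical, not IIB's actual `from_json` code):

```python
class ValidationError(Exception):
    pass

def validate_add_params(kwargs):
    """Sketch: bundles may be an empty list, but then both from_index and
    binary_image become required."""
    bundles = kwargs.get("bundles", [])
    if not isinstance(bundles, list) or any(
        not item or not isinstance(item, str) for item in bundles
    ):
        raise ValidationError(
            '"bundles" should be either an empty array or an array of non-empty strings'
        )
    if not bundles and not (kwargs.get("from_index") and kwargs.get("binary_image")):
        # Covers both missing-from_index and missing-binary_image cases.
        raise ValidationError(
            '"from_index" and "binary_image" must be specified if no bundles are specified'
        )

# Case 1 from the list above: only from_index and add_arches provided.
try:
    validate_add_params({"from_index": "pull:spec", "add_arches": ["s390x"]})
except ValidationError as exc:
    print(exc)
```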
shawn-hurley

comment created time in a month

PullRequestReviewEvent
PullRequestReviewEvent

Pull request review comment release-engineering/iib

[WIP]: Adding ability to update binary image of index image from Add

 def test_add_bundles_overwrite_not_allowed(mock_smfsc, client, db): @pytest.mark.parametrize(     'data, error_msg',     (-        (-            {'from_index': 'pull:spec', 'binary_image': 'binary:image', 'add_arches': ['s390x']},-            '"operators" should be a non-empty array of strings',

This is for the test_rm_operators_invalid_params_format test and the error message is about the operators parameter. How is that related to these changes?

shawn-hurley

comment created time in a month

PullRequestReviewEvent
PullRequestReviewEvent
PullRequestReviewEvent