Luiz Carvalho lcarva Red Hat, Inc. Eastern US

containerbuildsystem/dockerfile-parse 62

Python library for parsing Dockerfile files.

containerbuildsystem/koji-containerbuild 22

Container build support for Koji buildsystem

containerbuildsystem/osbs-box 8

A local OSBS development environment

containerbuildsystem/osbs-docs 3

Documentation for OSBS Project

lcarva/ansible-patterns 2

Ansible usage patterns that make you smile

fedora-modularity/message-tagging-service 1

Tag koji builds with the correct tags, triggered by the message bus

lcarva/ansible-playbook 0

An Ansible playbook for automated deployment of full-stack Plone servers.

lcarva/atomic-reactor 0

Simple python library for building docker images.

pull request comment release-engineering/resultsdb-updater

Limit size of error_reason

We should update the message spec to indicate the error message gets truncated.
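For illustration only — the field name comes from the PR title, while the length limit and helper name below are hypothetical — the truncation being discussed could look roughly like this:

# Hypothetical sketch: cap error_reason before submitting the result to ResultsDB.
MAX_ERROR_REASON_LENGTH = 1024  # assumed limit, not taken from the PR


def truncate_error_reason(error_reason):
    """Return error_reason cut down to the assumed maximum length."""
    if error_reason and len(error_reason) > MAX_ERROR_REASON_LENGTH:
        return error_reason[:MAX_ERROR_REASON_LENGTH]
    return error_reason

Whatever the final limit turns out to be, the message spec should state that consumers may receive a truncated error_reason.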

hluk

comment created time in 11 days

Pull request review comment release-engineering/resultsdb-updater

Add support for brew-build-group type

 def handle_ci_umb(msg):
             'system_provider': msg.system('provider', default=None),
         }

+    elif item_type == 'brew-build-group':
+        item = msg.get('artifact', 'id')

so... for resultsdb-updater, do we want to just take the ID as provided in the message, or do we want to automatically generate it based on the given builds?

IMO, resultsdb-updater should do some sort of verification to ensure the ID corresponds to the builds in the group.
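Purely as a hypothetical sketch of such a verification — the hashing scheme and field names are assumptions, not part of any message spec:

import hashlib


def expected_group_id(build_nvrs):
    # Derive a deterministic group ID from the builds (hypothetical scheme).
    digest = hashlib.sha256('\n'.join(sorted(build_nvrs)).encode('utf-8')).hexdigest()
    return 'sha256:' + digest


# resultsdb-updater could then reject messages where the provided ID
# does not match expected_group_id(builds).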

hluk

comment created time in 13 days

started release-engineering/dist-git

started time in 16 days

pull request comment release-engineering/operators-manifests-push-service

Test altering package names

Nice! Thanks for doing this.

csomh

comment created time in 17 days

pull request comment release-engineering/resultsdb-updater

Add support for redhat-container-image type

@gnaponie, @lcarva I think we can merge this now and fix it later if anything changes in new not-yet-completed message schema (https://pagure.io/fedora-ci/messages/pull-request/89). Otherwise, this will be blocked at least until 15th January.

What do you think?

I'm happy with having this in an early preview state.

hluk

comment created time in 17 days

Pull request review comment sidpremkumar/Sync2Jira

Small fixes

 The config file is made up of multiple parts
     * :code:`'component'`
         * Downstream component to sync with
     * :code:`sync`
-        * This array contains information on what to sync from upstream repos (i.e. 'issue' and/or 'pullreuest')
+        * This array contains information on what to sync from upstream repos (i.e. 'issue' and/or 'pullrequest')

My editor is configured to automatically remove trailing whitespace. Let me know if this makes the change too noisy and I can isolate it.

lcarva

comment created time in 17 days

PR opened sidpremkumar/Sync2Jira

Small fixes
+21 -18

0 comment

2 changed files

pr created time in 17 days

create branch lcarva/Sync2Jira

branch : small-fixes

created branch time in 17 days

fork lcarva/Sync2Jira

Service to sync upstream Tickets/PR's with downstream JIRA tickets!

fork in 17 days

issue closed fedora-modularity/message-tagging-service

This project has misleading name

Hi,

I was searching for the rhmsg library and got to this project, where I was really confused about what it does, because the OSBS team has a project (Gutentag @lkolacek) for tagging images and the name of this project is so similar that I had to ask @csomh what this project is about and whether it is similar to Gutentag. Is it possible to change the name to something more descriptive (@csomh suggested module-tagging-service)?

Thank you

closed time in 17 days

pbortlov

issue comment fedora-modularity/message-tagging-service

This project has misleading name

This is a duplicate of #34.

pbortlov

comment created time in 17 days

PR opened containerbuildsystem/atomic-reactor

Use docker-archive for source container images

The oci-archive is currently not supported by pub.

  • OSBS-8464

Signed-off-by: Luiz Carvalho lucarval@redhat.com

Maintainers will complete the following section:

  • [ ] Commit messages are descriptive enough
  • [ ] "Signed-off-by:" line is present in each commit
  • [ ] Code coverage from testing does not decrease and new code is covered
  • [ ] JSON/YAML configuration changes are updated in the relevant schema
  • [ ] Changes to metadata also update the documentation for the metadata
  • [ ] Pull request includes link to an osbs-docs PR for user documentation updates
  • [ ] New feature can be disabled from a configuration file
+4 -4

0 comment

2 changed files

pr created time in a month

create branch lcarva/atomic-reactor

branch : source-contaner-docker-archive

created branch time in a month

create branch lcarva/operators-manifests-push-service

branch : tox-posargs

created branch time in a month

pull request comment release-engineering/operators-manifests-push-service

Use package_suffix instead of repository_suffix

The unit tests turned out to be longer than what I was hoping for. Let me know if you'd like me to rethink how they're done.

lcarva

comment created time in a month

create branch lcarva/operators-manifests-push-service

branch : package-name-suffix

created branch time in a month

push event lcarva/operators-manifests-push-service

Luiz Carvalho

commit sha 3f8b0a10b95bf1a4315bfd034655c1af02da87ea

Add repository_suffix organization config * OSBS-8412 Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

push time in a month

create branch lcarva/operators-manifests-push-service

branch : repository-suffix

created branch time in a month

fork lcarva/operators-manifests-push-service

Service for pushing operators manifests to quay.io from various sources

fork in a month

pull request comment containerbuildsystem/atomic-reactor

Cachito integration plugin

Added issue ID to each commit. I think I addressed all the comments.

lcarva

comment created time in a month

Pull request review comment containerbuildsystem/atomic-reactor

Cachito integration plugin

 def get_smtp_session(workflow, fallback):
     return smtplib.SMTP(config['host'])


+def get_cachito(workflow):
+    return get_value(workflow, 'cachito', NO_FALLBACK)
+
+
+def get_cachito_session(workflow):
+    config = get_cachito(workflow)
+    from atomic_reactor.cachito_util import CachitoAPI
+
+    api_kwargs = {'insecure': config.get('insecure', False)}
+
+    ssl_certs_dir = config['auth'].get('ssl_certs_dir')
+    if ssl_certs_dir:
+        cert_path = os.path.join(ssl_certs_dir, 'cert')
+        if os.path.exists(cert_path):
+            api_kwargs['cert'] = cert_path
+        else:
+            raise KeyError("Cachito ssl_certs_dir doesn't exist")

You're right though. KeyError doesn't make any sense. I changed it to RuntimeError.

lcarva

comment created time in a month

Pull request review comment containerbuildsystem/atomic-reactor

Cachito integration plugin

 def get_smtp_session(workflow, fallback):
     return smtplib.SMTP(config['host'])


+def get_cachito(workflow):
+    return get_value(workflow, 'cachito', NO_FALLBACK)
+
+
+def get_cachito_session(workflow):
+    config = get_cachito(workflow)
+    from atomic_reactor.cachito_util import CachitoAPI

Done!

lcarva

comment created time in a month

push event lcarva/atomic-reactor

Luiz Carvalho

commit sha 6bf42776181e1e6b803bc589782630a2aa2d970a

Use helper method to get pre plugin results * OSBS-8135 Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

Luiz Carvalho

commit sha 7d8b9117d580ad82df533bcc782eb59e58d5cd1c

Add support for Cachito configuration * OSBS-8135 Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

Luiz Carvalho

commit sha 71746832554a4973d36743c996d38d39726a788c

Add support for remote_source configuration * OSBS-8135 Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

Luiz Carvalho

commit sha 0399c4fb2ced228f4460c3036a86654352d60c5b

Enhance CachitoAPI to return the download URL * OSBS-8135 Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

Luiz Carvalho

commit sha f1914deef24973f97e2ea01061621fb4446a5cc4

Add the new pre plugin resolve_remote_source * OSBS-8135 Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

push time in a month

push event lcarva/atomic-reactor

Adam Cmiel

commit sha fba205022ad4ecdc0b561212043e7b9962e11b01

Add `annotation` decorator to utils * OSBS-7828 Will be used to annotate build plugins. An annotated plugin will store the result of its `run()` method in its workflow. Signed-off-by: Adam Cmiel <acmiel@redhat.com>

view details

Adam Cmiel

commit sha 5870eade9e30d78cc9bad034f7b993c94959ff56

store_metadata: collect plugin annotations Signed-off-by: Adam Cmiel <acmiel@redhat.com>

view details

Adam Cmiel

commit sha 33a5e2fca5ba0a119cbc1ef5b8ad3c607023678c

tests: require responses < 0.10.8 Signed-off-by: Adam Cmiel <acmiel@redhat.com>

view details

Luiz Carvalho

commit sha ff5853bdc12930b658ffe2b7b467d24fd9bbe3e6

Use helper method to get pre plugin results Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

Luiz Carvalho

commit sha b1dff083a35861ee76c40783ea398c67a61bc0bc

Add support for Cachito configuration Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

Luiz Carvalho

commit sha 0c3330fdd9018a94075ea99ca2cc8e011e2d58b9

Add support for remote_source configuration Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

Luiz Carvalho

commit sha 46304381d0070e06d3a2472cb02d438525c80632

Enhance CachitoAPI to return the download URL Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

Luiz Carvalho

commit sha c4df0344769e45d2a53c44cb5b35cfe97943693f

Add the new pre plugin resolve_remote_source Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

push time in a month

Pull request review comment fedora-modularity/message-tagging-service

Check rule for scratch build correctly

 logger = logging.getLogger(__name__)


-def retrieve_modulemd_content(module_build_id):
-    """Retrieve and return modulemd.txt from MBS
+def load_module_md(module_build_id):
+    """Load corresponding module metadata of a specific module build
+
+    For handling the rule match easily, this function also inject some
+    necessary module build properties into the loaded module metadata
+    mapping.

     :param int module_build_id: module build ID.
-    :return: modulemd content.
-    :rtype: str
+    :return: a mapping representing the module metadata.
+    :rtype: dict
     """
     api_url = conf.mbs_api_url.rstrip('/')
     resp = requests.get(f'{api_url}/module-builds/{module_build_id}', params={
         'verbose': True
     })
     resp.raise_for_status()
-    return resp.json()['modulemd']
+    module_build = resp.json()
+    modulemd = yaml.safe_load(module_build['modulemd'])
+    modulemd['data']['scratch'] = module_build['scratch']

I think at some point there were talks about creating builds in Brew for scratch builds as well. I think this was supposed to address that. We never got around to actually doing that, and it's unclear if it'll ever happen.

+1 for removing it from any rule definition, and docs.

We still need to make MTS ignore scratch builds.
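A minimal sketch of that guard, building on the load_module_md() change shown above (the surrounding handler name and logger usage are assumptions):

def handle_module_build(module_build_id):
    # load_module_md() injects the 'scratch' flag into the loaded module metadata
    modulemd = load_module_md(module_build_id)
    if modulemd['data'].get('scratch'):
        logger.info('Ignoring scratch module build %s', module_build_id)
        return
    # ... continue with rule matching and tagging ...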

tkdchen

comment created time in a month

Pull request review comment fedora-modularity/message-tagging-service

Check rule for scratch build correctly

 }


-@contextlib.contextmanager
-def mock_get_rule_file(rule_file):
-    with patch('requests.get') as get:
-        with open(rule_file, 'r') as f:
-            get.return_value.text = f.read()
-        yield
+def read_file(filename):

open(filename).read() also achieves the same thing :)

But given the usage of this function in your tests, it would be helpful if it took a list of paths:

import os

def read_file(*paths):
    # Join the path components and return the file's contents
    full_path = os.path.join(*paths)
    return open(full_path).read()

# Usage example
read_file(test_data_dir, 'mts-test-for-no-match.yaml')
tkdchen

comment created time in a month

Pull request review comment fedora-modularity/message-tagging-service

Check rule for scratch build correctly

 logger = logging.getLogger(__name__)  -def retrieve_modulemd_content(module_build_id):-    """Retrieve and return modulemd.txt from MBS+def load_module_md(module_build_id):+    """Load corresponding module metadata of a specific module build++    For handling the rule match easily, this function also inject some+    necessary module build properties into the loaded module metadata+    mapping.      :param int module_build_id: module build ID.-    :return: modulemd content.-    :rtype: str+    :return: a mapping representing the module metadata.+    :rtype: dict     """     api_url = conf.mbs_api_url.rstrip('/')     resp = requests.get(f'{api_url}/module-builds/{module_build_id}', params={         'verbose': True     })     resp.raise_for_status()-    return resp.json()['modulemd']+    module_build = resp.json()+    modulemd = yaml.safe_load(module_build['modulemd'])+    modulemd['data']['scratch'] = module_build['scratch']

We should probably document that creating a rule with scratch: yes will never work since MBS does not create a build in Brew in that case.

tkdchen

comment created time in a month

Pull request review comment containerbuildsystem/atomic-reactor

Cachito integration plugin

       "description": "build-step plugin to be used",       "enum": ["docker_api", "imagebuilder", "buildah_bud"]     },+    "remote_source": {

During the design phase we decided to only support a single remote source, but allow multiple package managers for it.

Supporting multiple remote sources can be challenging. For instance, if there are two remote sources using gomod, it's unclear what the value of GOCACHE/GOPATH should be set to. Should it point to the cache of the first remote source, or the second?

Given the complexities that can arise from this scenario, we decided to only support a single remote source until a real use case arises that makes us reconsider.
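To illustrate, a single remote source would map to one Cachito request, even if that request lists several package managers. This is only a sketch; the repo and ref values are borrowed from the test constants later in this thread, and the actual plugin wiring is not shown:

api = CachitoAPI('http://cachito.example.com')
request = api.request_sources(
    repo='https://github.com/release-engineering/retrodep.git',
    ref='e1be527f39ec31323f0454f7d1422c6260b00580',
    pkg_managers=['gomod'],  # one remote source, possibly several package managers
)
request = api.wait_for_request(request['id'])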

lcarva

comment created time in a month

Pull request review comment containerbuildsystem/atomic-reactor

Cachito integration plugin

       "description": "build-step plugin to be used",       "enum": ["docker_api", "imagebuilder", "buildah_bud"]     },+    "remote_source": {

During the design phase, we decided to only support a single remote source, but a single remote source could have multiple package managers. This is in line with upstream projects.

Having multiple remote sources is challenging because, if you have, for example, two golang apps, the value for GOCACHE/GOPATH is ambiguous.

Given the complexities in implementing this, we decided that it could be added later on if the need arises.

lcarva

comment created time in a month

Pull request review comment containerbuildsystem/atomic-reactor

Cachito integration plugin

 def get_smtp_session(workflow, fallback):
     return smtplib.SMTP(config['host'])


+def get_cachito(workflow):
+    return get_value(workflow, 'cachito', NO_FALLBACK)
+
+
+def get_cachito_session(workflow):
+    config = get_cachito(workflow)
+    from atomic_reactor.cachito_util import CachitoAPI
+
+    api_kwargs = {'insecure': config.get('insecure', False)}
+
+    ssl_certs_dir = config['auth'].get('ssl_certs_dir')
+    if ssl_certs_dir:
+        cert_path = os.path.join(ssl_certs_dir, 'cert')
+        if os.path.exists(cert_path):
+            api_kwargs['cert'] = cert_path
+        else:
+            raise KeyError("Cachito ssl_certs_dir doesn't exist")

It's KeyError because that's what get_value raises if a config for cachito doesn't exist at all. I suppose get_odcs_session does this so it's easier to use by only catching a single exception.

lcarva

comment created time in a month

Pull request review comment containerbuildsystem/atomic-reactor

Cachito integration plugin

 def get_smtp_session(workflow, fallback):
     return smtplib.SMTP(config['host'])


+def get_cachito(workflow):
+    return get_value(workflow, 'cachito', NO_FALLBACK)
+
+
+def get_cachito_session(workflow):
+    config = get_cachito(workflow)
+    from atomic_reactor.cachito_util import CachitoAPI

I followed the pattern in get_odcs_session. IIRC, this is so we can split up the plugin into subpackages. If that's not needed, I can move it to the top.

lcarva

comment created time in a month

PR opened containerbuildsystem/atomic-reactor

Cachito integration plugin

Maintainers will complete the following section:

  • [ ] Commit messages are descriptive enough
  • [ ] "Signed-off-by:" line is present in each commit
  • [ ] Code coverage from testing does not decrease and new code is covered
  • [ ] JSON/YAML configuration changes are updated in the relevant schema
  • [ ] Changes to metadata also update the documentation for the metadata
  • [ ] Pull request includes link to an osbs-docs PR for user documentation updates
  • [ ] New feature can be disabled from a configuration file
+715 -2

0 comment

12 changed files

pr created time in a month

push event lcarva/atomic-reactor

Luiz Carvalho

commit sha ab2a48a914f936f78ecc2a63fb9ae57b306d146d

Enhance CachitoAPI to return the download URL Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

Luiz Carvalho

commit sha 23def77b16756617a58224d0cd0a08729c38154c

Add the new pre plugin resolve_remote_source Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

push time in a month

push event lcarva/atomic-reactor

Luiz Carvalho

commit sha 30d79e3beb1ad065750ddddd6325f8751c8f639e

Enhance CachitoAPI to return the download URL Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

Luiz Carvalho

commit sha 4efc8c641b93f7c0560dc5fc00919b6205f01cda

Add the new pre plugin resolve_remote_source Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

push time in a month

push event lcarva/atomic-reactor

Luiz Carvalho

commit sha d922a9c3b689403a64c143ecb1d924f02a68cf7a

Add the new pre plugin resolve_remote_source Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

push time in a month

push event lcarva/atomic-reactor

Luiz Carvalho

commit sha 7a6e3a5f7b15a79ca28962fd8c9a4cf54661dc85

Enhance CachitoAPI to return the download URL Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

Luiz Carvalho

commit sha feb17c80523c09774d61a5cd9ac7b854c83d970b

Add the new pre plugin resolve_remote_source Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

push time in a month

push event lcarva/atomic-reactor

Athos Ribeiro

commit sha 16cd322ce7870bc6d03dc76e68109fa6db009f00

Improve logs for SRPM URL checks Logging 404 errors for header checks on possible signing intent related URLs may be misleading unless followed by a success check or a build failure. Instead, we log successful requests. If not SRPM is found for a given package in the signing intent, we then log an error before raising. Signed-off-by: Athos Ribeiro <athos@redhat.com>

view details

Robert Cerven

commit sha 30e3d15b14a1e9a53b11b090879328be018f5865

1.6.47 release Signed-off-by: Robert Cerven <rcerven@redhat.com>

view details

Robert Cerven

commit sha f46b2c4264888d2e4deb1839496c6f67cd775985

spec: update hash for 1.6.47 Signed-off-by: Robert Cerven <rcerven@redhat.com>

view details

Robert Cerven

commit sha 945897e0347a1b9388717f8ebc2cac53b96f1cdb

remove_worker_metadata should not depend on fetch_worker_metadata * OSBS-7246 Signed-off-by: Robert Cerven <rcerven@redhat.com>

view details

Luiz Carvalho

commit sha 7b30be969a2b77c82e48681c2960213863d2ec4b

Allow explicit filename in download_url Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

Luiz Carvalho

commit sha 3ec4e7db0c862ccb30e48d38c73e754cb21a60a1

Add utility class to interact with Cachito Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

Athos Ribeiro

commit sha eea6ccdc93f9ec575a1e08946974432ebddbb7c7

Whitelist koji annotations from reactor_config_map Allow annotations whose names are listed in task_annotations_whitelist koji's configuration in the configmap to be included in a new koji_task_annotations_whitelist annotation. This may be used with koji integration to whitelist annotations to be processed in a koji task. * OSBS-8138 Signed-off-by: Athos Ribeiro <athos@redhat.com>

view details

Luiz Carvalho

commit sha 64367112d215445df2c4425e9da4ecf4a1b301d3

Use helper method to get pre plugin results Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

Luiz Carvalho

commit sha e952818daf9a76cdd3afff65b18cf7e18f11037e

Add support for Cachito configuration Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

Luiz Carvalho

commit sha 1a7535831af0b3a6727f10e94d75bb3dd156e50b

Add support for remote_source configuration Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

Luiz Carvalho

commit sha a65622f05115f923107e56618b96dce5a6956d37

Add the new pre plugin resolve_remote_source Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

push time in a month

push event lcarva/atomic-reactor

Luiz Carvalho

commit sha bac077aecfe538042bce87ec076971512737176d

Add the new pre plugin resolve_remote_source Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

push time in a month

push event lcarva/atomic-reactor

Luiz Carvalho

commit sha 7f94e2d2579b1991bf736295c70382216302582e

Add support for remote_source configuration Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

Luiz Carvalho

commit sha 5d3859267039c3f1c23b2fcc6ca5dfffb18b35c1

Add the new pre plugin resolve_remote_source Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

push time in a month

push event lcarva/atomic-reactor

Luiz Carvalho

commit sha 67ef91ad022ad8e2afac6cdd75666baaff03acbf

Add support for Cachito configuration Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

Luiz Carvalho

commit sha 7607a7131c9f3f45064481ce3ac08ab110396932

Add support for remote_source configuration Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

Luiz Carvalho

commit sha 261c3147510346469d908a620b018afda4d9bc0c

Add the new pre plugin resolve_remote_source Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

push time in a month

Pull request review comment release-engineering/cachito

Add support for client SSL certificate authentication

 def user_loader(username):
     return User.query.filter_by(username=username).first()


+def _get_kerberos_principal(request):
+    """
+    Get the Kerberos principal from the current request.
+
+    This relies on the "REMOTE_USER" environment variable being set. This is usually set by the
+    mod_auth_gssapi Apache authentication module.
+
+    :param flask.Request request: the Flask request
+    :return: the user's Kerberos principal or None
+    :rtype: str
+    """
+    return request.environ.get('REMOTE_USER')
+
+
+def _get_cert_dn(request):
+    """
+    Get the client certificate's subject's distinguished name.
+
+    This relies on the "SSL_CLIENT_S_DN" environment variable being set. This is set by the mod_ssl
+    Apache module. If Apache is unable to verify the client certificate, no user will be returned.
+
+    :param flask.Request request: the Flask request
+    :return: the client certificate's subject's distinguished name or None
+    :rtype: str
+    """
+    ssl_client_verify = request.environ.get('SSL_CLIENT_VERIFY')

This is answered by the fact that you're specifying the CA in the playbooks.

mprahl

comment created time in a month

Pull request review comment release-engineering/cachito

Add support for client SSL certificate authentication

 def user_loader(username):
     return User.query.filter_by(username=username).first()


+def _get_kerberos_principal(request):
+    """
+    Get the Kerberos principal from the current request.
+
+    This relies on the "REMOTE_USER" environment variable being set. This is usually set by the
+    mod_auth_gssapi Apache authentication module.
+
+    :param flask.Request request: the Flask request
+    :return: the user's Kerberos principal or None
+    :rtype: str
+    """
+    return request.environ.get('REMOTE_USER')
+
+
+def _get_cert_dn(request):
+    """
+    Get the client certificate's subject's distinguished name.
+
+    This relies on the "SSL_CLIENT_S_DN" environment variable being set. This is set by the mod_ssl
+    Apache module. If Apache is unable to verify the client certificate, no user will be returned.
+
+    :param flask.Request request: the Flask request
+    :return: the client certificate's subject's distinguished name or None
+    :rtype: str
+    """
+    ssl_client_verify = request.environ.get('SSL_CLIENT_VERIFY')

Does this ensure the client cert has been issued by a trusted CA? Can we specify which CA should be used?

mprahl

comment created time in a month

create branch lcarva/atomic-reactor

branch : cachito-integration-plugin

created branch time in a month

pull request comment containerbuildsystem/atomic-reactor

Add utility class to interact with Cachito

Travis error:

Error: Error downloading packages: Curl error (28): Timeout was reached for https://mirrors.fedoraproject.org/metalink?repo=updates-released-f29&arch=x86_64 [Connection timed out after 30000 milliseconds]

Someone probably needs to restart that build.

lcarva

comment created time in a month

pull request comment release-engineering/cachito

Handle user set when auth is disabled

@lcarva could you please add a unit test for this? Otherwise, it looks good.

I can't figure out how to add a unit test. To trigger this behavior, we need an instance of the app with login disabled. However, simply setting LOGIN_DISABLED to True is not sufficient. Flask must be configuring the login handlers at creation time so changing this value does nothing.

The behavior described here seems to match what I observed.

I went as far as creating a new Config with LOGIN_DISABLED = True and creating new app and client fixtures based on it. However, this caused random "popped invalid context" errors in unrelated tests.

I'm open to ideas.

@mprahl, thanks for helping out. I removed the db fixture and changed the scope of the app* and client* fixtures to be function based. It seems to be working now.
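A rough sketch of the fixture shape described here, assuming a create_app() factory that accepts config overrides (the factory name and config keys other than LOGIN_DISABLED are assumptions):

import pytest


@pytest.fixture(scope='function')  # function scope avoids the "popped invalid context" errors
def app_no_auth():
    # LOGIN_DISABLED must be in effect when the app is created,
    # not toggled on an already-created app.
    app = create_app({'LOGIN_DISABLED': True, 'TESTING': True})
    with app.app_context():
        yield app


@pytest.fixture(scope='function')
def client_no_auth(app_no_auth):
    return app_no_auth.test_client()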

lcarva

comment created time in a month

push event lcarva/cachito

Luiz Carvalho

commit sha e1eaebf18b61e88ce2122fd805ece5a5f1813609

Allow API tests with auth disabled Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

Luiz Carvalho

commit sha 4c9789c77aec6af12f6e5970749dbf56c611a432

Handle user set when auth is disabled With this change, clients are given a meaningful error message when a request is made with a specified user. Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

push time in a month

push event lcarva/cachito

Luiz Carvalho

commit sha 893ac03a3b595613b0d26833c90cf46c4d105aa4

Allow API tests with auth disabled Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

Luiz Carvalho

commit sha 7c55f9c3e686712b322131b69336ad6dd098df6b

Handle user set when auth is disabled With this change, clients are given a meaningful error message when a request is made with a specified user. Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

push time in a month

push event lcarva/atomic-reactor

Luiz Carvalho

commit sha eaaf595c8cd982762466f223e85ab45975af0cd4

Add utility class to interact with Cachito Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

push time in a month

pull request comment release-engineering/cachito

Handle user set when auth is disabled

@lcarva could you please add a unit test for this? Otherwise, it looks good.

I can't figure out how to add a unit test. To trigger this behavior, we need an instance of the app with login disabled. However, simply setting LOGIN_DISABLED to True is not sufficient. Flask must be configuring the login handlers at creation time so changing this value does nothing.

The behavior described here seems to match what I observed.

I went as far as creating a new Config with LOGIN_DISABLED = True and creating new app and client fixtures based on it. However, this caused random "popped invalid context" errors in unrelated tests.

I'm open to ideas.

lcarva

comment created time in a month

Pull request review comment containerbuildsystem/atomic-reactor

Add utility class to interact with Cachito

+"""+Copyright (c) 2019 Red Hat, Inc+All rights reserved.++This software may be modified and distributed under the terms+of the BSD license. See the LICENSE file for details.+"""++from __future__ import absolute_import++from textwrap import dedent+import json+import logging+import time++from atomic_reactor.download import download_url+from atomic_reactor.util import get_retrying_requests_session+++logger = logging.getLogger(__name__)+++class CachitoAPIError(Exception):+    """Top level exception for errors in interacting with Cachito's API"""+++class CachitoAPIInvalidRequest(CachitoAPIError):+    """Invalid request made to Cachito's API"""+++class CachitoAPIUnsuccessfulRequest(CachitoAPIError):+    """Cachito's API request not completed successfully"""+++class CachitpAPIRequestTimeout(CachitoAPIError):+    """A request to Cachito's API took too long to complete"""+++class CachitoAPI(object):++    def __init__(self, api_url, insecure=False, token=None, cert=None):+        self.api_url = api_url+        self.session = self._make_session(insecure=insecure, cert=cert)++    def _make_session(self, insecure, cert):+        # method_whitelist=False allows retrying non-idempotent methods like POST+        session = get_retrying_requests_session(method_whitelist=False)+        session.verify = not insecure+        if cert:+            session.cert = cert+        return session++    def request_sources(self, repo, ref, flags=None, pkg_managers=None, user=None):+        """Start a new Cachito request++        :param repo: str, the URL to the SCM repository+        :param ref: str, the SCM reference to fetch+        :param pkg_managers: list<str>, list of package managers to be used for resolving+                             dependencies+        :param flags: list<str>, list of flag names++        :return: dict, representation of the created Cachito request+        :raise CachitoAPIInvalidRequest: if Cachito determines the request is invalid+        """+        payload = {+            'repo': repo,+            'ref': ref,+            'flags': flags,+            'pkg_managers': pkg_managers,+            'user': user,+        }+        # Remove None values+        payload = {k: v for k, v in payload.items() if v}++        url = '{}/api/v1/requests'.format(self.api_url)+        logger.debug('Making request %s with payload:\n%s', url, json.dumps(payload, indent=4))+        response = self.session.post(url, json=payload)+        if response.status_code == 400:+            raise CachitoAPIInvalidRequest(response.json()['error'])+        response.raise_for_status()+        return response.json()++    def wait_for_request(+            self, request_id, burst_retry=1, burst_length=30, slow_retry=10, timeout=3600):+        """Wait for a Cachito request to complete++        :param request_id: int, the Cachito request ID+        :param burst_retry: int, seconds to wait between retries prior to exceeding+                            the burst length+        :param burst_length: int, seconds to switch to slower retry period+        :param slow_retry: int, seconds to wait between retries after exceeding+                           the burst length+        :param timeout: int, when to give up waiting for compose request++        :return: dict, latest representation of the Cachito request+        :raise CachitoAPIUnsuccessfulRequest: if the request completes unsuccessfully+        :raise CachitpAPIRequestTimeout: if the request does not complete timely

Thanks for catching this! :man_facepalming:

lcarva

comment created time in a month

Pull request review comment containerbuildsystem/atomic-reactor

Add utility class to interact with Cachito

+"""+Copyright (c) 2019 Red Hat, Inc+All rights reserved.++This software may be modified and distributed under the terms+of the BSD license. See the LICENSE file for details.+"""++from __future__ import absolute_import++from textwrap import dedent+import json+import logging+import time++from atomic_reactor.download import download_url+from atomic_reactor.util import get_retrying_requests_session+++logger = logging.getLogger(__name__)+++class CachitoAPIError(Exception):+    """Top level exception for errors in interacting with Cachito's API"""+++class CachitoAPIInvalidRequest(CachitoAPIError):+    """Invalid request made to Cachito's API"""+++class CachitoAPIUnsuccessfulRequest(CachitoAPIError):+    """Cachito's API request not completed successfully"""+++class CachitpAPIRequestTimeout(CachitoAPIError):+    """A request to Cachito's API took too long to complete"""+++class CachitoAPI(object):++    def __init__(self, api_url, insecure=False, token=None, cert=None):+        self.api_url = api_url+        self.session = self._make_session(insecure=insecure, cert=cert)++    def _make_session(self, insecure, cert):+        # method_whitelist=False allows retrying non-idempotent methods like POST+        session = get_retrying_requests_session(method_whitelist=False)+        session.verify = not insecure+        if cert:+            session.cert = cert+        return session++    def request_sources(self, repo, ref, flags=None, pkg_managers=None, user=None):+        """Start a new Cachito request++        :param repo: str, the URL to the SCM repository+        :param ref: str, the SCM reference to fetch+        :param pkg_managers: list<str>, list of package managers to be used for resolving+                             dependencies+        :param flags: list<str>, list of flag names++        :return: dict, representation of the created Cachito request+        :raise CachitoAPIInvalidRequest: if Cachito determines the request is invalid+        """+        payload = {+            'repo': repo,+            'ref': ref,+            'flags': flags,+            'pkg_managers': pkg_managers,+            'user': user,+        }+        # Remove None values+        payload = {k: v for k, v in payload.items() if v}++        url = '{}/api/v1/requests'.format(self.api_url)+        logger.debug('Making request %s with payload:\n%s', url, json.dumps(payload, indent=4))+        response = self.session.post(url, json=payload)+        if response.status_code == 400:+            raise CachitoAPIInvalidRequest(response.json()['error'])+        response.raise_for_status()+        return response.json()++    def wait_for_request(+            self, request_id, burst_retry=1, burst_length=30, slow_retry=10, timeout=3600):+        """Wait for a Cachito request to complete++        :param request_id: int, the Cachito request ID

Chose to accept either. Code updated.

lcarva

comment created time in a month

Pull request review comment containerbuildsystem/atomic-reactor

Add utility class to interact with Cachito

+"""+Copyright (c) 2019 Red Hat, Inc+All rights reserved.++This software may be modified and distributed under the terms+of the BSD license. See the LICENSE file for details.+"""++from __future__ import absolute_import++from textwrap import dedent+import json+import logging+import time++from atomic_reactor.download import download_url+from atomic_reactor.util import get_retrying_requests_session+++logger = logging.getLogger(__name__)+++class CachitoAPIError(Exception):+    """Top level exception for errors in interacting with Cachito's API"""+++class CachitoAPIInvalidRequest(CachitoAPIError):+    """Invalid request made to Cachito's API"""+++class CachitoAPIUnsuccessfulRequest(CachitoAPIError):+    """Cachito's API request not completed successfully"""+++class CachitpAPIRequestTimeout(CachitoAPIError):+    """A request to Cachito's API took too long to complete"""+++class CachitoAPI(object):++    def __init__(self, api_url, insecure=False, token=None, cert=None):+        self.api_url = api_url+        self.session = self._make_session(insecure=insecure, cert=cert)++    def _make_session(self, insecure, cert):+        # method_whitelist=False allows retrying non-idempotent methods like POST+        session = get_retrying_requests_session(method_whitelist=False)+        session.verify = not insecure+        if cert:+            session.cert = cert+        return session++    def request_sources(self, repo, ref, flags=None, pkg_managers=None, user=None):+        """Start a new Cachito request++        :param repo: str, the URL to the SCM repository+        :param ref: str, the SCM reference to fetch+        :param pkg_managers: list<str>, list of package managers to be used for resolving+                             dependencies+        :param flags: list<str>, list of flag names++        :return: dict, representation of the created Cachito request+        :raise CachitoAPIInvalidRequest: if Cachito determines the request is invalid+        """+        payload = {+            'repo': repo,+            'ref': ref,+            'flags': flags,+            'pkg_managers': pkg_managers,+            'user': user,+        }+        # Remove None values+        payload = {k: v for k, v in payload.items() if v}++        url = '{}/api/v1/requests'.format(self.api_url)+        logger.debug('Making request %s with payload:\n%s', url, json.dumps(payload, indent=4))+        response = self.session.post(url, json=payload)+        if response.status_code == 400:

Good idea! Fixed it.

Btw, I'm not familiar with github's suggestions. Should I just accept these instead of making them myself?

lcarva

comment created time in a month

push event lcarva/atomic-reactor

Luiz Carvalho

commit sha 99c7abee4e995f92cefd746b729eb9a04e56f5fc

Add utility class to interact with Cachito Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

push time in a month

Pull request review comment containerbuildsystem/atomic-reactor

Add utility class to interact with Cachito

+"""+Copyright (c) 2019 Red Hat, Inc+All rights reserved.++This software may be modified and distributed under the terms+of the BSD license. See the LICENSE file for details.+"""++from __future__ import absolute_import++from atomic_reactor.cachito_util import (+    CachitoAPI, CachitoAPIInvalidRequest, CachitpAPIRequestTimeout, CachitoAPIUnsuccessfulRequest)++from requests.exceptions import HTTPError+import pytest+import responses+import json+import os.path+++CACHITO_URL = 'http://cachito.example.com'+CACHITO_REQUEST_ID = 123+CACHITO_REQUEST_REF = 'e1be527f39ec31323f0454f7d1422c6260b00580'+CACHITO_REQUEST_REPO = 'https://github.com/release-engineering/retrodep.git'+++@responses.activate+@pytest.mark.parametrize('additional_params', (+    {},+    {'flags': ['spam', 'bacon']},+    {'pkg_managers': ['gomod']},+    {'user': 'ham'},+))+def test_request_sources(additional_params):++    def handle_request_sources(http_request):+        body_json = json.loads(http_request.body)++        assert body_json['repo'] == CACHITO_REQUEST_REPO+        assert body_json['ref'] == CACHITO_REQUEST_REF+        for key, value in additional_params.items():+            assert body_json[key] == value++        return (201, {}, json.dumps({'id': CACHITO_REQUEST_ID}))++    responses.add_callback(+        responses.POST,+        '{}/api/v1/requests'.format(CACHITO_URL),+        content_type='application/json',+        callback=handle_request_sources)++    api = CachitoAPI(CACHITO_URL)+    response = api.request_sources(CACHITO_REQUEST_REPO, CACHITO_REQUEST_REF, **additional_params)+    assert response['id'] == CACHITO_REQUEST_ID+++@responses.activate+@pytest.mark.parametrize(('status_code', 'error', 'error_body'), (+    (400, CachitoAPIInvalidRequest, json.dumps({'error': 'read the docs, please'})),+    (500, HTTPError, 'Internal Server Error'),+))+def test_request_sources_error(status_code, error, error_body):+    responses.add(+        responses.POST,+        '{}/api/v1/requests'.format(CACHITO_URL),+        content_type='application/json',+        body=error_body,+        status=status_code,+    )++    with pytest.raises(error):+        CachitoAPI(CACHITO_URL).request_sources(CACHITO_REQUEST_REPO, CACHITO_REQUEST_REF)+++@responses.activate+@pytest.mark.parametrize('burst_params', (+    {'burst_retry': 0.01, 'burst_length': 0.5, 'slow_retry': 0.2},+    # Set the burst_retry to lower than burst_length to trigger the slow_retry :)

That's because I swapped burst_retry with burst_length. We want the length to be shorter to trigger the slow_retry. This should be fixed now.
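For context, this is the timing logic in wait_for_request() that those parameters exercise, excerpted (and reindented) from the diff earlier in this pull request:

elapsed = time.time() - start_time
if elapsed > timeout:
    raise CachitpAPIRequestTimeout(
        'Request %s not completed after %s seconds' % (url, timeout))
else:
    if elapsed > burst_length:
        time.sleep(slow_retry)
    else:
        time.sleep(burst_retry)

With burst_length smaller than the time the request stays incomplete, elapsed exceeds burst_length on an early iteration and the loop switches to sleeping for slow_retry.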

lcarva

comment created time in a month

push event lcarva/atomic-reactor

Luiz Carvalho

commit sha 2de36ad503edb7dca27125352e76132a39c7c1ec

Add utility class to interact with Cachito Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

push time in a month

push event lcarva/atomic-reactor

Luiz Carvalho

commit sha 54b44b4b37d82d986540c7ba341750650e3e1d5d

Add utility class to interact with Cachito Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

push time in a month

Pull request review comment release-engineering/resultsdb-updater

Add support for brew-build-group type

 def handle_ci_umb(msg):
             'system_provider': msg.system('provider', default=None),
         }

+    elif item_type == 'brew-build-group':
+        item = msg.get('artifact', 'id')

I think that's fine. We can revisit this if it becomes problematic.

hluk

comment created time in a month

Pull request review comment containerbuildsystem/atomic-reactor

Add utility class to interact with Cachito

+"""+Copyright (c) 2019 Red Hat, Inc+All rights reserved.++This software may be modified and distributed under the terms+of the BSD license. See the LICENSE file for details.+"""++from __future__ import absolute_import++from textwrap import dedent+import json+import logging+import time++from atomic_reactor.util import get_retrying_requests_session+from atomic_reactor.constants import DEFAULT_DOWNLOAD_BLOCK_SIZE+++logger = logging.getLogger(__name__)+++class CachitoAPIError(Exception):+    """Top level exception for errors in interacting with Cachito's API"""+++class CachitoAPIInvalidRequest(CachitoAPIError):+    """Invalid request made to Cachito's API"""+++class CachitoAPIUnsuccessfulRequest(CachitoAPIError):+    """Cachito's API request not completed successfully"""+++class CachitpAPIRequestTimeout(CachitoAPIError):+    """A request to Cachito's API took too long to complete"""+++class CachitoAPI(object):++    def __init__(self, api_url, insecure=False, token=None, cert=None):+        self.api_url = api_url+        self.session = self._make_session(insecure=insecure, cert=cert)++    def _make_session(self, insecure, cert):+        # method_whitelist=False allows retrying non-idempotent methods like POST+        session = get_retrying_requests_session(method_whitelist=False)+        session.verify = not insecure+        if cert:+            session.cert = cert+        return session++    def request_sources(self, repo, ref, flags=None, pkg_managers=None, user=None):+        """Start a new Cachito request++        :param repo: str, the URL to the SCM repository+        :param ref: str, the SCM reference to fetch+        :param pkg_managers: list<str>, list of package managers to be used for resolving+                             dependencies+        :param flags: list<str>, list of flag names++        :return: dict, representation of the created Cachito request+        :raise CachitoAPIInvalidRequest: if Cachito determines the request is invalid+        """+        payload = {+            'repo': repo,+            'ref': ref,+            'flags': flags,+            'pkg_managers': pkg_managers,+            'user': user,+        }+        # Remove None values+        payload = {k: v for k, v in payload.items() if v}++        url = '{}/api/v1/requests'.format(self.api_url)+        logger.debug('Making request %s with payload:\n%s', url, json.dumps(payload, indent=4))+        response = self.session.post(url, json=payload)+        if response.status_code == 400:+            raise CachitoAPIInvalidRequest(response.json()['error'])+        response.raise_for_status()+        return response.json()++    def wait_for_request(+            self, request_id, burst_retry=1, burst_length=30, slow_retry=10, timeout=3600):+        """Wait for a Cachito request to complete++        :param request_id: int, the Cachito request ID+        :param burst_retry: int, seconds to wait between retries prior to exceeding+                            the burst length+        :param burst_length: int, seconds to switch to slower retry period+        :param slow_retry: int, seconds to wait between retries after exceeding+                           the burst length+        :param timeout: int, when to give up waiting for compose request++        :return: dict, latest representation of the Cachito request+        :raise CachitoAPIUnsuccessfulRequest: if the request completes unsuccessfully+        :raise CachitpAPIRequestTimeout: if the request does not complete timely+        """+        url = 
'{}/api/v1/requests/{}'.format(self.api_url, request_id)+        logger.info('Waiting for request %s to complete...', request_id)++        start_time = time.time()+        while True:+            response = self.session.get(url)+            response.raise_for_status()+            response_json = response.json()++            state = response_json['state']+            if state in ('stale', 'failed'):+                state_reason = response_json.get('state_reason') or 'Unknown'+                logger.error(dedent("""\+                   Request %s is in "%s" state: %s+                   Details: %s+                   """), request_id, state, state_reason, json.dumps(response_json, indent=4))+                raise CachitoAPIUnsuccessfulRequest(+                   'Request {} is in "{}" state: {}'.format(request_id, state, state_reason))++            if state == 'complete':+                logger.debug('Request %s is complete', request_id)+                return response_json++            # All other states are expected to be transient and are not checked.++            elapsed = time.time() - start_time+            if elapsed > timeout:+                raise CachitpAPIRequestTimeout(+                    'Request %s not completed after %s seconds' % (url, timeout))+            else:+                if elapsed > burst_length:+                    time.sleep(slow_retry)+                else:+                    time.sleep(burst_retry)++    def download_sources(self, request_id, dest_path):+        """Download the sources from a Cachito request++        :param request_id: int, the Cachito request ID+        :param dest_path: str, the path to save the sources+        """+        logger.debug('Downloading sources bundle from request %d to %s', request_id, dest_path)+        url = '{}/api/v1/requests/{}/download'.format(self.api_url, request_id)+        request = self.session.get(url, stream=True)+        request.raise_for_status()++        with open(dest_path, 'wb') as f:

Ah, nice! Changed it to use that.

lcarva

comment created time in a month

push event lcarva/atomic-reactor

Luiz Carvalho

commit sha 90b85777c5da13895b791c831fc4b7d7ba758584

Allow explicit filename in download_url Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

Luiz Carvalho

commit sha 2e3ab4e15f7f7f0285d146a9a4d8f42bc5604d90

Add utility class to interact with Cachito Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

push time in a month

Pull request review comment containerbuildsystem/atomic-reactor

Add utility class to interact with Cachito

+"""+Copyright (c) 2019 Red Hat, Inc+All rights reserved.++This software may be modified and distributed under the terms+of the BSD license. See the LICENSE file for details.+"""++from __future__ import absolute_import++from textwrap import dedent+import json+import logging+import time++from atomic_reactor.util import get_retrying_requests_session+from atomic_reactor.constants import DEFAULT_DOWNLOAD_BLOCK_SIZE+++logger = logging.getLogger(__name__)+++class CachitoAPIError(Exception):+    """Top level exception for errors in interacting with Cachito's API"""+++class CachitoAPIInvalidRequest(CachitoAPIError):+    """Invalid request made to Cachito's API"""+++class CachitoAPIUnsuccessfulRequest(CachitoAPIError):+    """Cachito's API request not completed successfully"""+++class CachitpAPIRequestTimeout(CachitoAPIError):+    """A request to Cachito's API took too long to complete"""+++class CachitoAPI(object):++    def __init__(self, api_url, insecure=False, token=None, cert=None):+        self.api_url = api_url+        self.session = self._make_session(insecure=insecure, cert=cert)++    def _make_session(self, insecure, cert):+        # method_whitelist=False allows retrying non-idempotent methods like POST+        session = get_retrying_requests_session(method_whitelist=False)+        session.verify = not insecure+        if cert:+            session.cert = cert+        return session++    def request_sources(self, repo, ref, flags=None, pkg_managers=None, user=None):+        """Start a new Cachito request++        :param repo: str, the URL to the SCM repository+        :param ref: str, the SCM reference to fetch+        :param pkg_managers: list<str>, list of package managers to be used for resolving+                             dependencies+        :param flags: list<str>, list of flag names++        :return: dict, representation of the created Cachito request+        :raise CachitoAPIInvalidRequest: if Cachito determines the request is invalid+        """+        payload = {+            'repo': repo,+            'ref': ref,+            'flags': flags,+            'pkg_managers': pkg_managers,+            'user': user,+        }+        # Remove None values+        payload = {k: v for k, v in payload.items() if v}++        url = '{}/api/v1/requests'.format(self.api_url)+        logger.debug('Making request %s with payload:\n%s', url, json.dumps(payload, indent=4))+        response = self.session.post(url, json=payload)+        if response.status_code == 400:+            raise CachitoAPIInvalidRequest(response.json()['error'])+        response.raise_for_status()+        return response.json()++    def wait_for_request(+            self, request_id, burst_retry=1, burst_length=30, slow_retry=10, timeout=3600):+        """Wait for a Cachito request to complete++        :param request_id: int, the Cachito request ID+        :param burst_retry: int, seconds to wait between retries prior to exceeding+                            the burst length+        :param burst_length: int, seconds to switch to slower retry period+        :param slow_retry: int, seconds to wait between retries after exceeding+                           the burst length+        :param timeout: int, when to give up waiting for compose request++        :return: dict, latest representation of the Cachito request+        :raise CachitoAPIUnsuccessfulRequest: if the request completes unsuccessfully+        :raise CachitpAPIRequestTimeout: if the request does not complete timely+        """+        url = 
'{}/api/v1/requests/{}'.format(self.api_url, request_id)+        logger.info('Waiting for request %s to complete...', request_id)++        start_time = time.time()+        while True:+            response = self.session.get(url)+            response.raise_for_status()+            response_json = response.json()++            state = response_json['state']+            if state in ('stale', 'failed'):+                state_reason = response_json.get('state_reason') or 'Unknown'+                logger.error(dedent("""\+                   Request %s is in "%s" state: %s+                   Details: %s+                   """), request_id, state, state_reason, json.dumps(response_json, indent=4))+                raise CachitoAPIUnsuccessfulRequest(+                   'Request {} is in "{}" state: {}'.format(request_id, state, state_reason))++            if state == 'complete':+                logger.debug('Request %s is complete', request_id)+                return response_json++            # All other states are expected to be transient and are not checked.++            elapsed = time.time() - start_time+            if elapsed > timeout:+                raise CachitpAPIRequestTimeout(+                    'Request %s not completed after %s seconds' % (url, timeout))+            else:+                if elapsed > burst_length:+                    time.sleep(slow_retry)+                else:+                    time.sleep(burst_retry)++    def download_sources(self, request_id, dest_path):+        """Download the sources from a Cachito request++        :param request_id: int, the Cachito request ID+        :param dest_path: str, the path to save the sources+        """+        logger.debug('Downloading sources bundle from request %d to %s', request_id, dest_path)+        url = '{}/api/v1/requests/{}/download'.format(self.api_url, request_id)+        request = self.session.get(url, stream=True)+        request.raise_for_status()++        with open(dest_path, 'wb') as f:+            for chunk in request.iter_content(chunk_size=DEFAULT_DOWNLOAD_BLOCK_SIZE):+                f.write(chunk)+        logger.debug('Sources bundle for request %d downloaded successfully', request_id)+++if __name__ == '__main__':

I can remove this if deemed to be clutter. I found it useful while developing this module.

lcarva

comment created time in 2 months

pull request comment containerbuildsystem/atomic-reactor

Add utility class to interact with Cachito

This is the first step in getting cachito integration set up. I'm happy to wait for merging until all the pieces are in place, but I do consider this commit complete.

lcarva

comment created time in 2 months

PR opened containerbuildsystem/atomic-reactor

Add utility class to interact with Cachito

Signed-off-by: Luiz Carvalho lucarval@redhat.com

Maintainers will complete the following section:

  • [ ] Commit messages are descriptive enough
  • [ ] "Signed-off-by:" line is present in each commit
  • [ ] Code coverage from testing does not decrease and new code is covered
  • [ ] JSON/YAML configuration changes are updated in the relevant schema
  • [ ] Changes to metadata also update the documentation for the metadata
  • [ ] Pull request includes link to an osbs-docs PR for user documentation updates
  • [ ] New feature can be disabled from a configuration file
+318 -0

0 comment

2 changed files

pr created time in 2 months

push event lcarva/atomic-reactor

Luiz Carvalho

commit sha 2d880db785507a5af684498638ef18a3415ee92d

Add utility class to interact with Cachito Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

push time in 2 months

create branch lcarva/atomic-reactor

branch : cachito-integration

created branch time in 2 months

PR opened release-engineering/cachito

Handle user set when auth is disabled

With this change, clients are given a meaningful error message when a request is made with a specified user.

Signed-off-by: Luiz Carvalho lucarval@redhat.com

+3 -1

0 comment

1 changed file

pr created time in 2 months

create branch lcarva/cachito

branch : handle-no-auth-user

created branch time in 2 months

push event fedora-modularity/message-tagging-service

Hunor Csomortáni

commit sha e54dac5541f64643a6e6f68c8ec9e0d9e9336253

Upgrade base image to Fedora 31 Also explicitly install python3-pip, as it's not part of the Fedora image anymore. Signed-off-by: Hunor Csomortáni <csomh@redhat.com>

view details

push time in 2 months

PR merged fedora-modularity/message-tagging-service

Upgrade base image to Fedora 31

Also explicitly install python3-pip, as it's not part of the Fedora image anymore.

Signed-off-by: Hunor Csomortáni csomh@redhat.com

+2 -1

1 comment

2 changed files

csomh

pr closed time in 2 months

issue closed containerbuildsystem/atomic-reactor

Add pulp-repo-id annotation to OpenShift build

The pulp repo id to be used is determined dynamically at run time. The value can be pulled from the logs, but it would be very helpful if this was an annotation value in the openshift build.

The particular use case that comes to mind is for clearing scratch builds from the system.

closed time in 2 months

lcarva

issue comment containerbuildsystem/atomic-reactor

Add pulp-repo-id annotation to OpenShift build

OSBS no longer supports pulp. Let's close this.

lcarva

comment created time in 2 months

push event release-engineering/cachito

mprahl

commit sha 65865ea73662423fb171f48a9b336998dab679b8

Update Athens to v0.7.0

view details

push time in 2 months

push event lcarva/festoji

Luiz Carvalho

commit sha 15777ddffbc28e6a2573f0c22f26280bb472ac1f

Fix Thanksgiving calculation Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

push time in 2 months

pull request comment release-engineering/resultsdb-updater

Add support for ERROR outcome

@hluk, any objection to merging this?

hluk

comment created time in 2 months

pull request comment release-engineering/estuary

Correctly identify when the HTTP connection to the API failed

Why do we want to look for status 0? I looked at this, which has a workaround for when it returns 0 if the error is, say, 401, but I am not sure if we want the 0 or not. Thanks!

A status of 0 seems to indicate that the HTTP error occurred during the connection itself and is not an actual response from the server. For instance, if the REST API is down, no response is ever received by the HTTP client.

That's true, but the error message says to try again, and that won't help in every case where this error is seen. I'm suggesting we alter the message to indicate other possible resolutions.

mprahl

comment created time in 2 months

Pull request review comment release-engineering/estuary

Correctly identify when the HTTP connection to the API failed

 export class HTTPErrorHandler implements HttpInterceptor {
    */
   displayError(error: HttpErrorResponse) {
     let errorDisplayMsg: string;
-    if (error.error instanceof ErrorEvent) {
+    if (error.error instanceof ErrorEvent || error.status === 0) {

Should we enhance the error message to suggest that this could be due to a missing root CA?

mprahl

comment created time in 2 months

Pull request review comment release-engineering/resultsdb-updater

Add support for brew-build-group type

 def handle_ci_umb(msg):
             'system_provider': msg.system('provider', default=None),
         }
 
+    elif item_type == 'brew-build-group':
+        item = msg.get('artifact', 'id')

Either is fine with me. Is there a way to easily get the event ID?

hluk

comment created time in 2 months

push event release-engineering/resultsdb-updater

Mike Bonnet

commit sha c4714a614f877a66fdab7bb6ff14f409fd27c64a

install python2-semantic_version in the Dockerfile Required for proper operation of the consumer.

view details

push time in 2 months

Pull request review comment release-engineering/resultsdb-updater

Add support for brew-build-group type

 def handle_ci_umb(msg):
             'system_provider': msg.system('provider', default=None),
         }
 
+    elif item_type == 'brew-build-group':
+        item = msg.get('artifact', 'id')

Is there a way we can validate that the ID is in fact derived from the builds?
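One way to make that verifiable would be to define the group ID as a deterministic digest of the member builds, so the consumer can recompute it and compare. This is only an illustrative scheme, not something the message spec currently defines:

import hashlib

def brew_build_group_id(build_nvrs):
    """Derive a stable group ID from the member build NVRs."""
    # Sort so the ID does not depend on the order builds are listed in the message
    digest = hashlib.sha256('\n'.join(sorted(build_nvrs)).encode('utf-8')).hexdigest()
    return 'sha256:{}'.format(digest)

# The consumer could then recompute the ID from the builds listed in the message
# and reject the message if it does not match the provided artifact ID.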

hluk

comment created time in 2 months

Pull request review comment release-engineering/cachito

Make run_cmd more generic

 __all__ = ['resolve_gomod_deps']
 
 log = logging.getLogger(__name__)
+run_gomod_cmd = functools.partial(run_cmd, exc_msg='Processing gomod dependencies failed')

Good call using partial here.
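For readers less familiar with it, functools.partial pre-binds arguments so every call site gets the same exc_msg for free; a small self-contained illustration with a stand-in run_cmd:

import functools

def run_cmd(cmd, exc_msg='Command failed'):
    # Stand-in for the real helper; just echoes what it would run
    return '{} (on failure: {})'.format(' '.join(cmd), exc_msg)

# Every gomod call site now carries the same failure message without repeating it
run_gomod_cmd = functools.partial(run_cmd, exc_msg='Processing gomod dependencies failed')

print(run_gomod_cmd(['go', 'list', '-m', 'all']))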

mprahl

comment created time in 2 months

Pull request review comment release-engineering/cachito

Show the top level packages associated with a request

 def resolve_gomod_deps(app_source_path, request_id, dep_replacements=None):                 f'{", ".join(unused_dep_replacements)}'             ) +        if not module_name:+            # This should never occur, but it's here as a precaution+            raise CachitoError(go_module_name_error)++        module_version = get_golang_version(+            module_name, app_source_path, request['ref'], update_tags=True)+        module = {+            'name': module_name,+            'type': 'gomod',+            'version': module_version,+        }+         # Add the gomod cache to the bundle the user will later download         cache_path = os.path.join('pkg', 'mod', 'cache', 'download')         src_cache_path = os.path.join(temp_dir, cache_path)         dest_cache_path = os.path.join('gomod', cache_path)-        add_deps_to_bundle(src_cache_path, dest_cache_path, request_id)+        add_deps_to_bundle(src_cache_path, dest_cache_path, request['id'])++        return module, deps+++def _get_golang_pseudo_version(commit, tag=None, module_major_version=None):+    """+    Get the Go module's pseudo version when a non-version commit is used.++    For a description of the algorithm, see https://tip.golang.org/cmd/go/#hdr-Pseudo_versions.++    :param git.Commit commit: the commit object of the Go module+    :param git.Tag tag: the highest semantic version tag with a matching major version before the+        input commit. If this isn't specified, it is assumed there was no previous valid tag.+    :param int module_major_version: the Go module's major version as stated in its go.mod file. If+        this and "tag" are not provided, 0 is assumed.+    :return: the Go module's pseudo-version as returned by `go list`+    :rtype: str+    """+    # Use this instead of commit.committed_datetime so that the datetime object is UTC+    committed_dt = datetime.utcfromtimestamp(commit.committed_date)+    commit_timestamp = committed_dt.strftime(r'%Y%m%d%H%M%S')+    commit_hash = commit.hexsha[0:12]++    # vX.0.0-yyyymmddhhmmss-abcdefabcdef is used when there is no earlier versioned commit with an+    # appropriate major version before the target commit+    if tag is None:+        # If the major version isn't in the import path and there is not a versioned commit with the+        # version of 1, the major version defaults to 0.+        return f'v{module_major_version or "0"}.0.0-{commit_timestamp}-{commit_hash}'++    tag_semantic_version = semver.parse_version_info(tag.name[1:])+    # An example of a semantic version with a prerelease is v2.2.0-alpha+    if tag_semantic_version.prerelease:+        # vX.Y.Z-pre.0.yyyymmddhhmmss-abcdefabcdef is used when the most recent versioned commit+        # before the target commit is vX.Y.Z-pre+        version_seperator = '.'+        pseudo_semantic_version = tag_semantic_version+    else:+        # vX.Y.(Z+1)-0.yyyymmddhhmmss-abcdefabcdef is used when the most recent versioned commit+        # before the target commit is vX.Y.Z+        version_seperator = '-'+        pseudo_semantic_version = semver.bump_patch(str(tag_semantic_version))++    return f'v{pseudo_semantic_version}{version_seperator}0.{commit_timestamp}-{commit_hash}'+++def _get_highest_semver_tag(repo, target_commit, major_version, all_reachable=False):+    """+    Get the highest semantic version tag related to the input commit.++    :param Git.Repo repo: the Git repository object to search+    :param int major_version: the major version of the Go module as in the go.mod file to use as a+        filter for major 
version tags+    :param bool all_reachable: if False, the search is constrained to the input commit. If True,+        then the search is constrained to the input commit and preceding commits.+    :return: the highest semantic version tag if one is found+    :rtype: git.Tag+    """+    try:+        g = git.Git(repo.working_dir)+        if all_reachable:+            # Get all the tags on the input commit and all that precede it.+            # This is based on:+            # https://github.com/golang/go/blob/0ac8739ad5394c3fe0420cf53232954fefb2418f/src/cmd/go/internal/modfetch/codehost/git.go#L659-L695+            cmd = [+                'git',+                'for-each-ref',+                '--format',+                '%(refname:lstrip=-1)',+                'refs/tags',+                '--merged',+                target_commit.hexsha,+            ]+        else:+            # Get the tags that point to this commit+            cmd = ['git', 'tag', '--points-at', target_commit.hexsha]++        tag_names = g.execute(cmd).split('\n')

splitlines()
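To spell out the suggestion: splitlines() avoids the spurious trailing empty string (and the single empty element for empty output) that split('\n') produces, which matters when git prints a trailing newline or no tags at all:

output = 'v1.0.0\nv1.1.0\n'
print(output.split('\n'))    # ['v1.0.0', 'v1.1.0', ''] -- note the empty tail element
print(output.splitlines())   # ['v1.0.0', 'v1.1.0']

print(''.split('\n'))        # [''] -- one bogus "tag"
print(''.splitlines())       # []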

mprahl

comment created time in 2 months

issue closed fedora-modularity/pdc-updater

Deprecate this repo

@ralphbean, @pypingou, any reason not to deprecate this repository?

closed time in 2 months

lcarva

pull request comment fedora-modularity/pdc-updater

Use hash instead of variant_uid for koji_tag, otherwise we hit the 50-character limit for koji_tag used by Koji.

Closing this since this repo is now deprecated.

hanzz

comment created time in 2 months

push event fedora-modularity/pdc-updater

Luiz Carvalho

commit sha 7175c644bb0915f50bcd155b0dd74223e82bec94

Deprecate repo Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

push time in 2 months

push event fedora-modularity/pdc-updater

Luiz Carvalho

commit sha 4bfbc5572cea2c03257ee75ed1b90d56445d2a11

Deprecate repo Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

push time in 2 months

push event fedora-modularity/pdc-updater

Luiz Carvalho

commit sha 8443f1a7dcf5679f3568074f10e1be19e207a83e

Deprecate repo Signed-off-by: Luiz Carvalho <lucarval@redhat.com>

view details

push time in 2 months

Pull request review comment release-engineering/cachito

Show the top level packages associated with a request

 def resolve_gomod_deps(app_source_path, request_id, dep_replacements=None):                 f'{", ".join(unused_dep_replacements)}'             ) +        if not module_name:+            # This should never occur, but it's here as a precaution+            raise CachitoError('The Go module name could not be determined')++        module_version = get_golang_version(+            module_name, app_source_path, request['ref'], update_tags=True)+        module = {+            'name': module_name,+            'type': 'gomod',+            'version': module_version,+        }+         # Add the gomod cache to the bundle the user will later download         cache_path = os.path.join('pkg', 'mod', 'cache', 'download')         src_cache_path = os.path.join(temp_dir, cache_path)         dest_cache_path = os.path.join('gomod', cache_path)-        add_deps_to_bundle(src_cache_path, dest_cache_path, request_id)+        add_deps_to_bundle(src_cache_path, dest_cache_path, request['id'])++        return module, deps+++def _get_golang_pseudo_version(commit, semantic_version=None, module_major_version=None):+    """+    Get the Go module's pseudo version when a non-version commit is used.++    :param git.Commit commit: the commit object of the Go module+    :param semver.VersionInfo semantic_version: the semantic version object representing the closest+        versioned commit. If this isn't specified, it is assumed there was no previous valid+        versioned commit.+    :param int module_major_version: the Go module's major version as stated in its go.mod file. If+        this and semantic_version are not provided, 0 is assumed.+    :return: the Go module's pseudo-version as returned by `go list`+    :rtype: str+    """+    # Use this instead of commit.committed_datetime so that the datetime object is UTC+    committed_dt = datetime.utcfromtimestamp(commit.committed_date)+    commit_timestamp = committed_dt.strftime(r'%Y%m%d%H%M%S')+    commit_hash = commit.hexsha[0:12]++    # vX.0.0-yyyymmddhhmmss-abcdefabcdef is used when there is no earlier versioned commit with an+    # appropriate major version before the target commit+    if semantic_version is None:+        # If the major version isn't in the import path and there is not a versioned commit with the+        # version of 1, the major version defaults to 0.+        return f'v{module_major_version or "0"}.0.0-{commit_timestamp}-{commit_hash}'++    # An example of a semantic version with a prerelease is v2.2.0-alpha+    if semantic_version.prerelease:+        # vX.Y.Z-pre.0.yyyymmddhhmmss-abcdefabcdef is used when the most recent versioned commit+        # before the target commit is vX.Y.Z-pre+        version_seperator = '.'+        pseudo_semantic_version = semantic_version+    else:+        # vX.Y.(Z+1)-0.yyyymmddhhmmss-abcdefabcdef is used when the most recent versioned commit+        # before the target commit is vX.Y.Z+        version_seperator = '-'+        pseudo_semantic_version = semver.bump_patch(str(semantic_version))++    return f'v{pseudo_semantic_version}{version_seperator}0.{commit_timestamp}-{commit_hash}'+++def get_golang_version(module_name, git_path, target_ref, update_tags=False):+    """+    Get the version of the Go module in the input Git repository in the same format as `go list`.++    If target_ref doesn't point to a commit with a semantically versioned tag, a pseudo-version+    will be returned.++    :param str module_name: the Go module's name+    :param str git_path: the path to the Git repository+    :param str target_ref: 
the Git reference of the Go module to get the version for+    :param bool update_tags: determines if `git fetch --tags --force` should be run before+        determining the version+    :return: a version as `go list` would provide+    :rtype: str+    """+    # If the module is version v2 or higher, the major version of the module is included as /vN at+    # the end of the module path. If the module is version v0 or v1, the major version is omitted+    # from the module path.+    module_major_version = None+    match = re.match(r'(?:.+/v)(?P<major_version>\d+)$', module_name)+    if match:+        module_major_version = int(match.groupdict()['major_version'])++    repo = git.Repo(git_path)+    if update_tags:+        repo.remote().fetch(force=True, tags=True)++    commit_to_tags_info = {}+    not_semver_tag_msg = '%s is not a semantic version tag'+    for tag in repo.tags:+        if not tag.name.startswith('v'):+            log.debug(not_semver_tag_msg, tag.name)+            continue++        try:+            # Exclude the 'v' prefix since this is required by Go, but it is seen as invalid by+            # the semver Python package+            parsed_version = semver.parse_version_info(tag.name[1:])+        except ValueError:+            log.debug(not_semver_tag_msg, tag.name)+            continue++        commit_to_tags_info.setdefault(tag.commit.hexsha, []).append({+            'parsed_version': parsed_version,+            'tag_name': tag.name,+        })++    target_commit = None+    for commit in repo.iter_commits(target_ref):+        if not target_commit:+            target_commit = commit++        for tag_info in commit_to_tags_info.get(commit.hexsha, []):

It took me a while to realize that as soon as we hit a commit that is tagged with a semantic version, we stop processing additional commits. Can we do something like this to make it more obvious?

tags_info = commit_to_tags_info.get(commit.hexsha)

if not tags_info:
  # Skip commits without a semantic version tag
  continue

for tag_info in tags_info:
  ...
mprahl

comment created time in 2 months

Pull request review comment release-engineering/cachito

Show the top level packages associated with a request

 def resolve_gomod_deps(app_source_path, request_id, dep_replacements=None):                 f'{", ".join(unused_dep_replacements)}'             ) +        if not module_name:+            # This should never occur, but it's here as a precaution+            raise CachitoError('The Go module name could not be determined')++        module_version = get_golang_version(+            module_name, app_source_path, request['ref'], update_tags=True)+        module = {+            'name': module_name,+            'type': 'gomod',+            'version': module_version,+        }+         # Add the gomod cache to the bundle the user will later download         cache_path = os.path.join('pkg', 'mod', 'cache', 'download')         src_cache_path = os.path.join(temp_dir, cache_path)         dest_cache_path = os.path.join('gomod', cache_path)-        add_deps_to_bundle(src_cache_path, dest_cache_path, request_id)+        add_deps_to_bundle(src_cache_path, dest_cache_path, request['id'])++        return module, deps+++def _get_golang_pseudo_version(commit, semantic_version=None, module_major_version=None):+    """+    Get the Go module's pseudo version when a non-version commit is used.++    :param git.Commit commit: the commit object of the Go module+    :param semver.VersionInfo semantic_version: the semantic version object representing the closest+        versioned commit. If this isn't specified, it is assumed there was no previous valid+        versioned commit.+    :param int module_major_version: the Go module's major version as stated in its go.mod file. If+        this and semantic_version are not provided, 0 is assumed.+    :return: the Go module's pseudo-version as returned by `go list`+    :rtype: str+    """+    # Use this instead of commit.committed_datetime so that the datetime object is UTC+    committed_dt = datetime.utcfromtimestamp(commit.committed_date)+    commit_timestamp = committed_dt.strftime(r'%Y%m%d%H%M%S')+    commit_hash = commit.hexsha[0:12]++    # vX.0.0-yyyymmddhhmmss-abcdefabcdef is used when there is no earlier versioned commit with an+    # appropriate major version before the target commit+    if semantic_version is None:+        # If the major version isn't in the import path and there is not a versioned commit with the+        # version of 1, the major version defaults to 0.+        return f'v{module_major_version or "0"}.0.0-{commit_timestamp}-{commit_hash}'++    # An example of a semantic version with a prerelease is v2.2.0-alpha+    if semantic_version.prerelease:+        # vX.Y.Z-pre.0.yyyymmddhhmmss-abcdefabcdef is used when the most recent versioned commit+        # before the target commit is vX.Y.Z-pre+        version_seperator = '.'+        pseudo_semantic_version = semantic_version+    else:+        # vX.Y.(Z+1)-0.yyyymmddhhmmss-abcdefabcdef is used when the most recent versioned commit+        # before the target commit is vX.Y.Z+        version_seperator = '-'+        pseudo_semantic_version = semver.bump_patch(str(semantic_version))++    return f'v{pseudo_semantic_version}{version_seperator}0.{commit_timestamp}-{commit_hash}'+++def get_golang_version(module_name, git_path, target_ref, update_tags=False):+    """+    Get the version of the Go module in the input Git repository in the same format as `go list`.++    If target_ref doesn't point to a commit with a semantically versioned tag, a pseudo-version+    will be returned.++    :param str module_name: the Go module's name+    :param str git_path: the path to the Git repository+    :param str target_ref: 
the Git reference of the Go module to get the version for+    :param bool update_tags: determines if `git fetch --tags --force` should be run before+        determining the version+    :return: a version as `go list` would provide+    :rtype: str+    """+    # If the module is version v2 or higher, the major version of the module is included as /vN at+    # the end of the module path. If the module is version v0 or v1, the major version is omitted+    # from the module path.+    module_major_version = None+    match = re.match(r'(?:.+/v)(?P<major_version>\d+)$', module_name)+    if match:+        module_major_version = int(match.groupdict()['major_version'])++    repo = git.Repo(git_path)+    if update_tags:+        repo.remote().fetch(force=True, tags=True)

Something to think about in the future: we may want to treat failure to update the tags due to network issues as a warning, and allow the rest to proceed. Otherwise, we lose the offline capability for things we have already processed. (I do understand that we should always try to update the tags.)
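A rough sketch of what that could look like, using GitPython's GitCommandError; the logging behaviour is only a suggestion, not what the PR currently does:

import logging

import git

log = logging.getLogger(__name__)

def fetch_tags_best_effort(repo):
    """Try to refresh tags, but fall back to the local ones if the network is unavailable."""
    try:
        repo.remote().fetch(force=True, tags=True)
    except git.exc.GitCommandError as exc:
        # Keep going with whatever tags are already in the local clone
        log.warning('Could not update tags, using the locally cached ones: %s', exc)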

mprahl

comment created time in 2 months

Pull request review comment release-engineering/cachito

Show the top level packages associated with a request

 def resolve_gomod_deps(app_source_path, request_id, dep_replacements=None):                 f'{", ".join(unused_dep_replacements)}'             ) +        if not module_name:+            # This should never occur, but it's here as a precaution+            raise CachitoError('The Go module name could not be determined')++        module_version = get_golang_version(+            module_name, app_source_path, request['ref'], update_tags=True)+        module = {+            'name': module_name,+            'type': 'gomod',+            'version': module_version,+        }+         # Add the gomod cache to the bundle the user will later download         cache_path = os.path.join('pkg', 'mod', 'cache', 'download')         src_cache_path = os.path.join(temp_dir, cache_path)         dest_cache_path = os.path.join('gomod', cache_path)-        add_deps_to_bundle(src_cache_path, dest_cache_path, request_id)+        add_deps_to_bundle(src_cache_path, dest_cache_path, request['id'])++        return module, deps+++def _get_golang_pseudo_version(commit, semantic_version=None, module_major_version=None):+    """+    Get the Go module's pseudo version when a non-version commit is used.++    :param git.Commit commit: the commit object of the Go module+    :param semver.VersionInfo semantic_version: the semantic version object representing the closest+        versioned commit. If this isn't specified, it is assumed there was no previous valid+        versioned commit.+    :param int module_major_version: the Go module's major version as stated in its go.mod file. If+        this and semantic_version are not provided, 0 is assumed.+    :return: the Go module's pseudo-version as returned by `go list`+    :rtype: str+    """+    # Use this instead of commit.committed_datetime so that the datetime object is UTC+    committed_dt = datetime.utcfromtimestamp(commit.committed_date)+    commit_timestamp = committed_dt.strftime(r'%Y%m%d%H%M%S')+    commit_hash = commit.hexsha[0:12]++    # vX.0.0-yyyymmddhhmmss-abcdefabcdef is used when there is no earlier versioned commit with an+    # appropriate major version before the target commit+    if semantic_version is None:+        # If the major version isn't in the import path and there is not a versioned commit with the+        # version of 1, the major version defaults to 0.+        return f'v{module_major_version or "0"}.0.0-{commit_timestamp}-{commit_hash}'++    # An example of a semantic version with a prerelease is v2.2.0-alpha+    if semantic_version.prerelease:+        # vX.Y.Z-pre.0.yyyymmddhhmmss-abcdefabcdef is used when the most recent versioned commit+        # before the target commit is vX.Y.Z-pre+        version_seperator = '.'+        pseudo_semantic_version = semantic_version+    else:+        # vX.Y.(Z+1)-0.yyyymmddhhmmss-abcdefabcdef is used when the most recent versioned commit+        # before the target commit is vX.Y.Z+        version_seperator = '-'+        pseudo_semantic_version = semver.bump_patch(str(semantic_version))++    return f'v{pseudo_semantic_version}{version_seperator}0.{commit_timestamp}-{commit_hash}'+++def get_golang_version(module_name, git_path, target_ref, update_tags=False):+    """+    Get the version of the Go module in the input Git repository in the same format as `go list`.++    If target_ref doesn't point to a commit with a semantically versioned tag, a pseudo-version+    will be returned.++    :param str module_name: the Go module's name+    :param str git_path: the path to the Git repository+    :param str target_ref: 
the Git reference of the Go module to get the version for+    :param bool update_tags: determines if `git fetch --tags --force` should be run before+        determining the version+    :return: a version as `go list` would provide+    :rtype: str+    """+    # If the module is version v2 or higher, the major version of the module is included as /vN at+    # the end of the module path. If the module is version v0 or v1, the major version is omitted+    # from the module path.+    module_major_version = None+    match = re.match(r'(?:.+/v)(?P<major_version>\d+)$', module_name)+    if match:+        module_major_version = int(match.groupdict()['major_version'])++    repo = git.Repo(git_path)+    if update_tags:+        repo.remote().fetch(force=True, tags=True)++    commit_to_tags_info = {}+    not_semver_tag_msg = '%s is not a semantic version tag'+    for tag in repo.tags:+        if not tag.name.startswith('v'):+            log.debug(not_semver_tag_msg, tag.name)+            continue++        try:+            # Exclude the 'v' prefix since this is required by Go, but it is seen as invalid by+            # the semver Python package+            parsed_version = semver.parse_version_info(tag.name[1:])+        except ValueError:+            log.debug(not_semver_tag_msg, tag.name)+            continue++        commit_to_tags_info.setdefault(tag.commit.hexsha, []).append({+            'parsed_version': parsed_version,+            'tag_name': tag.name,+        })++    target_commit = None+    for commit in repo.iter_commits(target_ref):+        if not target_commit:+            target_commit = commit++        for tag_info in commit_to_tags_info.get(commit.hexsha, []):

What if there is a commit tagged with multiple semantic versions? I think it may just arbitrarily pick one of them? Should we consider this an error, pick the highest one, or something else?
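If picking the highest one is acceptable, the parsed semver objects already compare correctly, so a max() over the tags on that commit would do it; a self-contained sketch using the same semver calls as the code above:

import semver

tags_info = [
    {'tag_name': 'v1.2.0', 'parsed_version': semver.parse_version_info('1.2.0')},
    {'tag_name': 'v1.10.0', 'parsed_version': semver.parse_version_info('1.10.0')},
]
# semver.VersionInfo supports comparison, so max() picks the highest version on the commit
best = max(tags_info, key=lambda info: info['parsed_version'])
print(best['tag_name'])  # v1.10.0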

mprahl

comment created time in 2 months

Pull request review comment release-engineering/cachito

Show the top level packages associated with a request

 def resolve_gomod_deps(app_source_path, request_id, dep_replacements=None):
             parts = [part for part in line.split(' ') if part not in ('', '<nil>')]
             if len(parts) == 1:
                 # This is the application itself, not a dependency
+                module_name = parts[0]

Can we add a sanity check that module_name is not None? We should raise an error when it is missing, as that is unsupported/unexpected behavior.
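Something along these lines right after the parsing loop would cover it; CachitoError here is assumed to be the project's generic error type:

if not module_name:
    # `go list` should always report the module itself; treat anything else as a hard error
    raise CachitoError('The Go module name could not be determined')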

mprahl

comment created time in 2 months

Pull request review comment release-engineering/cachito

Show the top level packages associated with a request

 def resolve_gomod_deps(app_source_path, request_id, dep_replacements=None):                 f'{", ".join(unused_dep_replacements)}'             ) +        if not module_name:+            # This should never occur, but it's here as a precaution+            raise CachitoError('The Go module name could not be determined')++        module_version = get_golang_version(+            module_name, app_source_path, request['ref'], update_tags=True)+        module = {+            'name': module_name,+            'type': 'gomod',+            'version': module_version,+        }+         # Add the gomod cache to the bundle the user will later download         cache_path = os.path.join('pkg', 'mod', 'cache', 'download')         src_cache_path = os.path.join(temp_dir, cache_path)         dest_cache_path = os.path.join('gomod', cache_path)-        add_deps_to_bundle(src_cache_path, dest_cache_path, request_id)+        add_deps_to_bundle(src_cache_path, dest_cache_path, request['id'])++        return module, deps+++def _get_golang_pseudo_version(commit, semantic_version=None, module_major_version=None):+    """+    Get the Go module's pseudo version when a non-version commit is used.

Let's add a link here to where this algorithm is defined in golang docs.

mprahl

comment created time in 2 months

Pull request review comment release-engineering/resultsdb-updater

Add support for ERROR outcome

 def handle_ci_umb(msg):
         # Old topics are allowed for now.
         msg.log.warning(e)
 
+    if outcome == 'ERROR':
+        error_reason = msg.get('error', 'reason')

Oh! It's a path?

hluk

comment created time in 2 months

Pull request review comment release-engineering/resultsdb-updater

Add support for ERROR outcome

 def _test_result_outcome(topic, outcome):
 
     Some systems generate outcomes that don't match spec.
 
-    Test outcome is FAILED for messages with "*.error" topic.
+    Test outcome is ERROR for messages with "*.error" topic.

Should we just drop this line since it's no longer being treated differently than the other states?

hluk

comment created time in 2 months

Pull request review comment release-engineering/resultsdb-updater

Add support for ERROR outcome

 def handle_ci_umb(msg):
         # Old topics are allowed for now.
         msg.log.warning(e)
 
+    if outcome == 'ERROR':
+        error_reason = msg.get('error', 'reason')

'reason' is a bad default value. An empty string would be better. But it's considered a required field in the schema: https://pagure.io/fedora-ci/messages/blob/master/f/schemas/error.yaml

Should messages be rejected instead?
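If rejection is the preferred route, the handler could fail loudly when the field is absent; a sketch, assuming msg.get() returns a falsy value for a missing path (I have not verified its default behaviour):

error_reason = msg.get('error', 'reason')
if not error_reason:
    # error.reason is required by the schema, so a missing value means a malformed message
    raise RuntimeError('Message is missing the required "error.reason" field')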

hluk

comment created time in 2 months
