Daniel Jiang reasonerjt Beijing, China

denverdino/dockerclient 0

Docker client library in Go

reasonerjt/clair 0

Vulnerability Static Analysis for Containers

reasonerjt/cli 0

The Docker CLI

reasonerjt/community 0

Harbor community-related material

reasonerjt/conftest 0

Write tests against structured configuration data using the Open Policy Agent Rego query language

reasonerjt/dex 0

OpenID Connect Identity (OIDC) and OAuth 2.0 Provider with Pluggable Connectors

reasonerjt/distribution 0

The Docker toolset to pack, ship, store, and deliver content

reasonerjt/distribution-spec 0

OCI Distribution Specification

reasonerjt/docker-index 0

Open Source Docker Index (aka Docker Hub) written in Node.JS

pull request comment on goharbor/harbor

feature(tag) add a thread pool size for tags list

Could you also check the settings on Redis to understand the limit? It may be a bottleneck for pulling images concurrently.
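A quick way to inspect those limits (a sketch; the host and port are placeholders for the Redis instance Harbor is configured against):

```shell
# Check the Redis connection limit; a low maxclients can throttle concurrent pulls.
redis-cli -h 127.0.0.1 -p 6379 CONFIG GET maxclients 2>/dev/null \
  || echo "redis-cli not available or Redis unreachable"
# Current client connections, to compare against the limit:
redis-cli -h 127.0.0.1 -p 6379 INFO clients 2>/dev/null || true
```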

bitsf

comment created time in 13 hours

issue comment on goharbor/harbor

Customize the text of the OIDC login button

Since we have the name of the OIDC provider, maybe we can render the button text dynamically? cc @xaleeks @AllForNothing

Thoro

comment created time in 13 hours

Pull request review comment on goharbor/harbor

fix #10913: initialize oidc provider before calling Load

 func refreshToken(ctx context.Context, token *Token) (*Token, error) {
 // UserInfoFromToken tries to call the UserInfo endpoint of the OIDC provider, and consolidate with ID token
 // to generate a UserInfo object, if the ID token is not in the input token struct, some attributes will be empty
 func UserInfoFromToken(ctx context.Context, token *Token) (*UserInfo, error) {
+	// #10913: preload the configuration, in case it was not previously loaded by the UI

Calling getOauthConf() involves extra actions. I think you should use this chunk instead, before loading the setting.

	_, err := provider.get()
	if err != nil {
		return nil, err
	}
Thoro

comment created time in 14 hours


issue comment on goharbor/harbor

Garbage Collection: provide a way to track progress

@Thoro Good point. I opened an issue: https://github.com/goharbor/website/issues/121

Thoro

comment created time in 14 hours

issue opened in goharbor/website

The search result should point to latest doc

When I search for "Garbage Collection", the first result points to the 2.0.0 doc.

Ideally, it should point to the latest released version.

I'm not sure if it's due to a limitation of Netlify.

@xaleeks and @a-mccarthy could you check?

created time in 14 hours

push event in goharbor/website

Ziming Zhang

commit sha 0b14e01f572630fab7ec63e689842292efa0c0a0

update api url in configure-user-settings-cli.md Signed-off-by: Ziming Zhang <zziming@vmware.com>


Daniel Jiang

commit sha e8008ca589948dfadac7d0dbb143521e4e3e3545

Merge pull request #117 from bitsf/fix_doc_api_url update api url in configure-user-settings-cli.md


push time in 16 hours

PR merged goharbor/website

update api url in configure-user-settings-cli.md

fix https://github.com/goharbor/website/issues/116 api url change from /api to /api/v2.0

+5 -5

1 comment

1 changed file

bitsf

pr closed time in 16 hours

issue closed in goharbor/website

the harbor api url should be updated

In the doc https://github.com/goharbor/website/blob/master/docs/install-config/configure-user-settings-cli.md, the API URL is not correct.

closed time in 16 hours

bitsf

issue comment on goharbor/harbor

Pull-through Image Pulls don't get triggered if container is asking for a new, non-cached Image

@stonezdj Can the API behave differently depending on whether the HEAD request is sent to a regular project or a proxy-cache project?

MaxRink

comment created time in 3 days

issue comment on goharbor/harbor

Garbage Collection: provide a way to track progress

@Thoro I don't think the latest doc still says Harbor will move to read-only during GC. In the short term, I suggest monitoring the log of the GC job to track progress.

Thoro

comment created time in 3 days


pull request comment on goharbor/website

update api url in configure-user-settings-cli.md

This should be cherry-picked to the release branch of the website repo.

bitsf

comment created time in 3 days

issue comment on goharbor/harbor

Pull-through Image Pulls don't get triggered if container is asking for a new, non-cached Image

OK, this seems to be a valid issue, because containerd sends a HEAD request to the registry, and if it gets a 404 it fails directly.
If the client never sends a GET to Harbor, Harbor will not start to proxy the content.

I can't currently think of a good solution for this scenario; we can discuss this limitation in this thread.

cc @xaleeks
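The failure mode can be reproduced with a manual HEAD request (a sketch; the host, project, and repository names are hypothetical):

```shell
# containerd issues a HEAD for the manifest first; if a proxy-cache project
# returns 404 for a not-yet-cached image, the client aborts before sending
# the GET that would make Harbor start proxying the content.
curl -s -o /dev/null -w "%{http_code}\n" -I \
  -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  "https://harbor.example.com/v2/proxy-cache/library/nginx/manifests/latest" \
  || echo "request failed"
```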

MaxRink

comment created time in 3 days

issue comment on goharbor/harbor

client timeout calling core's api/v2.0/ping

@xtreme-conor-nosal In the OP you mentioned one instance is healthy and the other is not. Could you please clarify: are these two instances independent? It looks like core was requesting some external resource but was blocked for some reason, e.g. connecting to the DB while the DB is not responding.
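One way to gather evidence (a sketch; the host is a placeholder): time the ping endpoint on each instance and compare.

```shell
# Reports HTTP status and total request time for the health endpoint.
# A hanging or very slow response suggests core is blocked on an external
# dependency such as the database.
curl -s -o /dev/null -w "%{http_code} total=%{time_total}s\n" \
  --max-time 10 "https://harbor.example.com/api/v2.0/ping" \
  || echo "request failed or timed out"
```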

xtreme-conor-nosal

comment created time in 3 days

Pull request review comment on goharbor/harbor

Add tool for migration chart v2 to oci format

+# Chart Migrating Tool
+
+Harbor supports two different ways to storage the chart data.
+
+   1. stored in Harbor registry storage directly via OCI API.
+   2. stored in Harbor hosted chartmuseum backend via chartmuseam's API
+
+This tool used to migrating the helm charts stored in the chartmuseum backend to Harbor OCI registry backend.
+
+## Usages
+
+Run command below:
+
+```
+docker run -it --rm -v {{your_chart_data_location}}:/chartmuseum/ goharbor/prepare:{{version}}

Wrong image name. Also, how does the user specify the Harbor endpoint and username/password?

ninjadq

comment created time in 3 days


Pull request review comment on notaryproject/nv2

Proposal for generic reverse lookup

+# OCI Distribution
+
+To support [Notary v2 goals][notaryv2-goals], upload, persistence and discovery of signatures must be supported. Alternative designs were considered, as referenced in [persistance-discovery-options.md](./persistance-discovery-options.md).
+
+This document represents the current working prototype which:
+
+- Leverages [OCI Index][oci-index] to store Notary v2 Signatures
+- Implements `index.config` to align with the [OCI Artifacts][oci-artifacts] approach for artifact type differentiation within a registry.
+- Implements a referrer API to identify referenced artifacts, such as what signatures refer to a specific container image.
+
+## Table of Contents
+
+- [Signature Persistence](#signature-persistence)
+- [Signature Discovery](#signature-discovery)
+- [Persisting Referrer Metadata (Signatures)](#persisting-referrer-metadata-signatures)
+- [Implementation](#implementation)
+- [Push, Discover, Pull Prototype](#push-discover-pull-prototype)
+
+## Signature Persistence
+
+Several [options for how to persist a signature were explored][signature-persistance-options]. We measure these options against the [goals of Notary v2][notaryv2-goals], specifically:
+
+- Maintain the original artifact digest and collection of associated tags, supporting existing dev through deployment workflows
+- Multiple signatures per artifact, enabling the originating vendor signature, public registry certification and user/environment signatures
+- Native Persistence within an OCI Artifact enabled, distribution spec based registry
+- Artifact and signature copying within and across OCI Artifact enabled, distribution spec based registries
+- Support multi-tenant registries enabling cloud providers and enterprises to support managed services at scale
+- Support private registries, where public content may be copied to, and new content originated within
+- Air-gapped environments, where the originating registry of content is not accessible
+
+To support the above requirements, signatures are stored as separate [OCI Artifacts][oci-artifacts], persisted as [OCI Index][oci-index] objects. They are maintained as any other artifact in a registry, supporting standard operations such as listing, deleting, garbage collection and any other content addressable operations within a registry.
+
+Following the [OCI Artifacts][oci-artifacts] design, [Notary v2 signatures][nv2-signature-spec] are identified with: `config.mediaType: "application/vnd.cncf.notary.config.v2+jwt"`.
+
+<img src="../../media/signature-as-index.png" width=650>
+
+The above represents the `net-monitor:v1` container image, signed by its originating author (**wabbit-networks**) as well as **acme-rockets**, which imported the image into their private registry.
+The signatures are persisted as OCI Indexes, with a new `index.config` object storing the signature content:
+
+1. **manifest digest for the `net-monitor:v1` image:** `sha256:111ma2d22ae5ef400769fa51c84717264cd1520ac8d93dc071374c1be49a111m`
+    ```JSON
+    {
+      "schemaVersion": 2,
+      "mediaType": "application/vnd.oci.image.manifest.v2+json",
+      "config": {
+        "mediaType": "application/vnd.oci.image.config.v1+json",
+        "digest": "sha256:111ca3788f3464fd9a06386c4d7a8e3018b525278ac4b9da872943d4cfea111c",
+        "size": 1906
+      },
+      "layers": [
+        {
+          "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
+          "digest": "sha256:9834876dcfb05cb167a5c24953eba58c4ac89b1adf57f28f2f9d09af107ee8f0",
+          "size": 32654
+        },
+        {
+          "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
+          "digest": "sha256:ec4b8955958665577945c89419d1af06b5f7636b4ac3da7f12184802ad867736",
+          "size": 73109
+        }
+      ]
+    }
+    ```
+
+2. **index digest for the wabbit-networks signature** `sha256:222ibbf80b44ce6be8234e6ff90a1ac34acbeb826903b02cfa0da11c82cb222i`
+
+    ```json
+    {
+      "schemaVersion": 2,
+      "mediaType": "application/vnd.oci.image.index.v2+json",
+      "config": {
+        "mediaType": "application/vnd.cncf.notary.config.v2+jwt",
+        "digest": "sha256:222cb130c152895905abe66279dd9feaa68091ba55619f5b900f2ebed38b222c",
+        "size": 1906
+      },
+      "manifests": [
+        {
+          "mediaType": "application/vnd.oci.image.manifest.v1+json",
+          "digest": "sha256:111ma2d22ae5ef400769fa51c84717264cd1520ac8d93dc071374c1be49a111m",
+          "size": 7023,
+          "platform": {
+            "architecture": "ppc64le",
+            "os": "linux"
+          }
+        }
+      ]
+    }
+    ```
+
+    The `index.config` contains the following signature information:
+`sha256:222cb130c152895905abe66279dd9feaa68091ba55619f5b900f2ebed38b222c`
+    ```json
+    {
+        "signed": {
+            "mediaType": "application/vnd.oci.image.manifest.v2+json",
+            "digest": "sha256:111ma2d22ae5ef400769fa51c84717264cd1520ac8d93dc071374c1be49a111m",
+            "size": 528,
+            "references": [
+                "registry.wabbit-networks.com/net-monitor:v1"
+            ],
+            "exp": 1627555319,
+            "nbf": 1596019319,
+            "iat": 1596019319
+        },
+        "signature": {
+            "typ": "x509",
+            "sig": "UFqN24K2fLj...",
+            "alg": "RS256",
+            "x5c": [
+                "MIIDszCCApugAwIBAgIUL1anEU/..."
+            ]
+        }
+    }
+    ```
+
+3. **index digest for the acme-rockets signature** `sha256:333ic0c33ebc4a74a0a554c86ac2b28ddf3454a5ad9cf90ea8cea9f9e75c333i`
+
+    ```json
+    {
+      "schemaVersion": 2,
+      "mediaType": "application/vnd.oci.image.index.v2+json",
+      "config": {
+        "mediaType": "application/vnd.cncf.notary.config.v2+jwt",
+        "digest": "sha256:333cc44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b785c333c",
+        "size": 1906
+      },
+      "manifests": [
+        {
+          "mediaType": "application/vnd.oci.image.manifest.v1+json",
+          "digest": "sha256:111ma2d22ae5ef400769fa51c84717264cd1520ac8d93dc071374c1be49a111m",
+          "size": 7023,
+          "platform": {
+            "architecture": "ppc64le",
+            "os": "linux"
+          }
+        }
+      ]
+    }
+    ```
+
+    The `index.config` contains the following signature information:
+`sha256:333cc44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b785c333c`
+    ```json
+    {
+        "signed": {
+            "mediaType": "application/vnd.oci.image.manifest.v2+json",
+            "digest": "sha256:111ma2d22ae5ef400769fa51c84717264cd1520ac8d93dc071374c1be49a111m",
+            "size": 528,
+            "references": [
+                "registry.acme-rockets.com/net-monitor:v1"
+            ],
+            "exp": 1627555319,
+            "nbf": 1596019319,
+            "iat": 1596019319
+        },
+        "signature": {
+            "typ": "x509",
+            "sig": "UFqN24K2fLj...",
+            "alg": "RS256",
+            "x5c": [
+                "MIIDszCCApugAwIBAgIUL1anEU/..."
+            ]
+        }
+    }
+    ```
+
+**Pros with this approach:**
+
+- Utilize the existing `index.manifests` collection for linking artifacts
+- Registries that support oci index already have infrastructure for tracking `index.manifests`, including delete operations and garbage collection
+- Existing distribution-spec upload APIs are utilized
+- Based on the artifact type: `index.config.mediaType: "application/vnd.cncf.notary.config.v2+jwt"`, a registry may implement role checking to confirm the identity pushing the Notary v2 artifact type has the registry's equivalent of a signer role
+- As registry operators may offer role checking for different artifact types, Notary v2 Signatures are just one of many types they may want to authorize
+
+**Cons with this approach:**
+
+- OCI index does not yet support the [OCI config descriptor][oci-descriptor]. This would require a schema change to oci-index, with a version bump.
+  - This has been a [desired item for OCI Artifacts][oci-artifacts-index] to support other artifact types which would base on Index.
+
+### Signature Persistence - Signing Multi-arch Manifests
+
+Taking the above scenario further, a signature can be associated with an individual manifest, or a signature can be applied to an index. The index could be a multi-arch index (windows & linux), or the index might represent a [CNAB][cnab].
+
+In the below case, the `net-monitor` software is available as windows (`net-monitor:v1-win`) and linux (`net-monitor:v1-lin`) images, as well as a multi-arch index (`net-monitor:v1`)
+The platform specific images, and the multi-arch index are all signed by **wabbit-networks** and **acme-rockets**.
+
+<img src="../../media/signature-as-index-signing-multi-arch-index.png" width=1100>
+
+- Objects (1-3) are equivalent to above references, with the exception that (1) is changed from a platform specific manifest to a multi-arch index
+- Objects (4-5) represent architecture specific manifests for the multi-arch manifest (1)
+- Objects (6-9) are Notary v2 signatures by the originating author (**wabbit-networks**) and the consumer (**acme-rockets**)
+
+## Signature Discovery
+
+The [OCI distribution-spec][oci-distribution] describes the action of [pushing content][oci-distribution-push] and [pulling of content][oci-distribution-pull].  Pulling a manifest and the associated layers implies a registry must store some linkage between the manifest and its references to layers and config. There are implied additional references between an [OCI Index][oci-index] and its referenced manifest.
+
+To support the [Notary v2 workflow][notaryv2-workflow], where the system knows of the artifact being referenced (1), but doesn't know what signatures might exist on that artifact (2-3, 6-9), a discovery API is required to return the objects that refer to the target artifact (1).
+
+Similar to pulling of an artifact, the referrer API implies a reverse lookup is possible. Based on a given artifact digest, what other objects are referencing that object.
+
+To generalize discovery, a `referrer-metadata` API is proposed to enable discovery of referenced objects. To support this reverse lookup prototype, additions are proposed to the Notary v2 fork of the reference implementation [docker/distribution][notaryv2-distribution] through [notaryv2-referrer-api].
+
+A referrer is any registry artifact that has an immutable reference to a manifest. An OCI index is a referrer to each manifest it references. The [OCI image spec][oci-image] does not include a config property for an OCI index and there is no reverse lookup of referrers in docker distribution.
+
+A modified OCI index with an `index.config` property that references a collection of manifests allows us to associate a "type" to the referrer-referenced relationship, where the `index.config.mediaType` = `application/vnd.cncf.notary.config.v2+jwt`.
+
+### referrer-metadata API
+
+```HTTP
+`GET http://localhost:5000/v2/net-monitor/manifests/11wma2d22ae5ef400769fa51c84717264cd1520ac8d93dc071374c1be49a11wm/referrer-metadata`

From the API's perspective, IMHO, it would be a better design if we first introduce a referrer API that returns all referrers, including optionally selected metadata such as the digest of the config blob. The caller could then filter the result by adding `config-mediaType` to the query string.
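A sketch of the suggested shape (the `referrers` route name and the filtering parameter are illustrative assumptions, not an agreed spec):

```shell
# List all referrers of a manifest, then narrow to Notary v2 signatures by
# filtering on the config media type via the query string.
DIGEST="sha256:11wma2d22ae5ef400769fa51c84717264cd1520ac8d93dc071374c1be49a11wm"
curl -s "http://localhost:5000/v2/net-monitor/manifests/${DIGEST}/referrers" || true
curl -s "http://localhost:5000/v2/net-monitor/manifests/${DIGEST}/referrers?config-mediaType=application/vnd.cncf.notary.config.v2+jwt" \
  || echo "no registry listening on localhost:5000"
```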

aviral26

comment created time in 3 days


Pull request review comment on notaryproject/nv2

Proposal for generic reverse lookup

+### referrer-metadata API
+
+```HTTP
+`GET http://localhost:5000/v2/net-monitor/manifests/11wma2d22ae5ef400769fa51c84717264cd1520ac8d93dc071374c1be49a11wm/referrer-metadata`
+```
+
+Using the diagram above, the `net-monitor:v1` manifest tag (4) has a digest of: `sha256:11wma2d22ae5ef400769fa51c84717264cd1520ac8d93dc071374c1be49a11wm`. When requesting the referenced objects, we see 2 signature objects being returned (wabbit-networks (6) & acme-rockets(7)), and an OCI multi-arch index (1).
+
+The response could be in the following format. Note the additional `config-mediaType` to identify the specific artifact type in the results.
+
+```HTTP
+200 OK
+Content-Type: application/json
+{
+  "digest": "sha256:11wma2d22ae5ef400769fa51c84717264cd1520ac8d93dc071374c1be49a11wm",
+  "references": [
+    {
+      "digest": "sha256:222mbbf80b44ce6be8234e6ff90a1ac34acbeb826903b02cfa0da11c82cb222m",
+      "mediaType": "application/vnd.oci.image.index.v1+json",
+      "size": "1024",
+      "config-mediaType": "application/vnd.cncf.notary.config.v2+jwt"
+    },
+    {
+      "digest": "sha256:333mc0c33ebc4a74a0a554c86ac2b28ddf3454a5ad9cf90ea8cea9f9e75c333m",
+      "mediaType": "application/vnd.oci.image.index.v1+json",
+      "size": "1025",
+      "config-mediaType": "application/vnd.cncf.notary.config.v2+jwt"
+    },
+    {
+      "digest": "sha256:111ia2d22ae5ef400769fa51c84717264cd1520ac8d93dc071374c1be49a111i",
+      "mediaType": "application/vnd.oci.image.index.v1+json",
+      "size": "1025",
+      "config-mediaType": "application/vnd.oci.image.index.v1+json"
+    }
+  ]
+}
+```
+
+## Persisting Referrer Metadata (Signatures)
+
+The proposal implements a referrer metadata store for manifests that is essentially a reverse-lookup, by `mediaType`, to referrer config objects. For example, when an OCI index is pushed, if it references a config object of media type `application/vnd.cncf.notary.config.v2+jwt`, a link to the config object is recorded in the referrer metadata store of each referenced manifest.
+
+> See [Issue #13](https://github.com/notaryproject/nv2/issues/13) related to persisting manifest or config references.
+> See [Artifacts submitted to a registry](https://github.com/notaryproject/nv2/blob/prototype-1/docs/distribution/persistance-discovery-options.md#artifacts-submitted-to-a-registry) for each digest reference in this example.
+
+### Put an OCI index by digest, linking a signature to a collection of manifests
+
+Using the existing [OCI distribution-spec push][oci-distribution-push-manifest] api to push an [OCI index][oci-index] with the added `index.config` to describe the type as Notary v2.
+
+`PUT https://localhost:5000/v2/net-monitor/manifests/sha256:222ibbf80b44ce6be8234e6ff90a1ac34acbeb826903b02cfa0da11c82cb222i`
+
+```json
+{
+  "schemaVersion": 2,
+  "mediaType": "application/vnd.oci.image.index.v2+json",
+  "config": {
+    "mediaType": "application/vnd.cncf.notary.config.v2+jwt",
+    "digest": "sha256:222cb130c152895905abe66279dd9feaa68091ba55619f5b900f2ebed38b222c",
+    "size": 1906
+  },
+  "manifests": [
+    {
+      "mediaType": "application/vnd.oci.image.manifest.v1+json",
+      "digest": "sha256:111ma2d22ae5ef400769fa51c84717264cd1520ac8d93dc071374c1be49a111m",
+      "size": 7023,
+      "platform": {
+        "architecture": "ppc64le",
+        "os": "linux"
+      }
+    }
+  ]
+}
+```
+
+PUT index would result in the creation of a link between the index config object `sha256:222cb130c152895905abe66279dd9feaa68091ba55619f5b900f2ebed38b222c` and the `net-monitor` manifest `sha256:111ma2d22ae5ef400769fa51c84717264cd1520ac8d93dc071374c1be49a111m`, of type `application/vnd.cncf.notary.config.v2+jwt`.
+
+## Implementation
+
+Using [docker-distribution][notaryv2-distribution], backed by file storage, the `net-monitor:v1` image is already persisted:
+
+- repository: `net-monitor`
+- digest: `sha256:111ma2d22ae5ef400769fa51c84717264cd1520ac8d93dc071374c1be49a111m`
+- tag: `v1.0`
+
+The storage layout is represented as:
+
+```bash
+<root>
+└── v2
+    └── repositories
+        └── net-monitor
+            └── _manifests
+                └── revisions
+                    └── sha256
+                        └── 111ma2d22ae5ef400769fa51c84717264cd1520ac8d93dc071374c1be49a111m
+                            └── link
+```
+
+Push a signature artifact and an OCI index that contains a config property referencing the signature:
+
+- signature index digest: `sha256:222ibbf80b44ce6be8234e6ff90a1ac34acbeb826903b02cfa0da11c82cb222i`
+- index json:
+    ```json
+    {
+        "schemaVersion": 2,
+        "mediaType": "application/vnd.oci.image.index.v2+json",
+        "config": {
+            "mediaType": "application/vnd.cncf.notary.config.v2+jwt",
+            "digest": "sha256:222cb130c152895905abe66279dd9feaa68091ba55619f5b900f2ebed38b222c",
+            "size": 1906
+        },
+        "manifests": [
+            {
+              "mediaType": "application/vnd.oci.image.manifest.v1+json",
+              "digest": "sha256:111ma2d22ae5ef400769fa51c84717264cd1520ac8d93dc071374c1be49a111m",
+              "size": 7023,
+              "platform": {
+                "architecture": "ppc64le",
+                "os": "linux"
+              }
+            }
+        ]
+    }
+    ```
+
+Consistent with the current distribution implementation, on `PUT`, the index appears as a manifest revision.
+
+The Notary v2 prototype adds referrer metadata for the **wabbit-networks** signature:
+
+```
+<root>
+└── v2
+    └── repositories
+        └── net-monitor
+            └── _manifests
+                └── revisions
+                    └── sha256
+                        ├── 111ma2d22ae5ef400769fa51c84717264cd1520ac8d93dc071374c1be49a111m
+                        │   ├── link
+                        │   └── ref
+                        │       └── application/vnd.cncf.notary.config.v2+jwt
+                        │           └── sha256
+                        │               └── 222cb130c152895905abe66279dd9feaa68091ba55619f5b900f2ebed38b222c
+                        │                   └── link
+                        └── 222ibbf80b44ce6be8234e6ff90a1ac34acbeb826903b02cfa0da11c82cb222i
+                            └── link
+```
+
+Adding the **acme-rockets** signature:
+
+- signature index digest: `sha256:333ic0c33ebc4a74a0a554c86ac2b28ddf3454a5ad9cf90ea8cea9f9e75c333i`
+- index json:
+
+  ```json
+  {
+    "schemaVersion": 2,
+    "mediaType": "application/vnd.oci.image.index.v2+json",
+    "config": {
+      "mediaType": "application/vnd.cncf.notary.config.v2+jwt",
+      "digest": "sha256:333cc44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b785c333c",
+      "size": 1906
+    },
+    "manifests": [
+      {
+        "mediaType": "application/vnd.oci.image.manifest.v1+json",
+        "digest": "sha256:111ma2d22ae5ef400769fa51c84717264cd1520ac8d93dc071374c1be49a111m",
+        "size": 7023,
+        "platform": {
+          "architecture": "ppc64le",
+          "os": "linux"
+        }
+      }
+    ]
+  }
+  ```
+
+The Notary v2 storage layout of 2 signature index objects:
+
+```
+<root>
+└── v2
+    └── repositories
+        └── net-monitor
+            └── _manifests
+                └── revisions
+                    └── sha256
+                        ├── 111ma2d22ae5ef400769fa51c84717264cd1520ac8d93dc071374c1be49a111m
+                        │   ├── link
+                        │   └── ref
+                        │       └── application/vnd.cncf.notary.config.v2+jwt
+                        │           └── sha256
+                        │               ├── 222cb130c152895905abe66279dd9feaa68091ba55619f5b900f2ebed38b222c
+                        │               │    └── link
+                        │               └── 333cc44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b785c333c
+                        │                   └── link
+                        ├── 222ibbf80b44ce6be8234e6ff90a1ac34acbeb826903b02cfa0da11c82cb222i
+                        │   └── link
+                        └── 333ic0c33ebc4a74a0a554c86ac2b28ddf3454a5ad9cf90ea8cea9f9e75c333i
+                            └── link
+```
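The `ref` path in the layout above is purely mechanical: replace the `:` in each digest with a path separator and join the pieces. A sketch of the convention (the `v2/repositories` prefix is copied from the tree above; the output file is illustrative):

```shell
repo="net-monitor"
manifestDigest="sha256:111ma2d22ae5ef400769fa51c84717264cd1520ac8d93dc071374c1be49a111m"
configMediaType="application/vnd.cncf.notary.config.v2+jwt"
configDigest="sha256:222cb130c152895905abe66279dd9feaa68091ba55619f5b900f2ebed38b222c"

# Compose: revisions/<alg>/<hex>/ref/<mediaType>/<alg>/<hex>/link
refLink="v2/repositories/$repo/_manifests/revisions/$(echo $manifestDigest | tr ':' '/')/ref/$configMediaType/$(echo $configDigest | tr ':' '/')/link"
echo "$refLink" > /tmp/nv2-reflink.out
cat /tmp/nv2-reflink.out
```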
+
+## Push, Discover, Pull Prototype
+
+Available here: https://github.com/notaryproject/distribution/tree/prototype-1
+
+The following steps illustrate how signatures can be stored and retrieved from a registry.
+
+### Prerequisites
+
+- Local registry prototype instance
+- [docker-generate](https://github.com/shizhMSFT/docker-generate)
+- [nv2](https://github.com/notaryproject/nv2)
+- `curl`
+- `jq`
+- `python3`
+
+### Push an image to your registry
+
+```shell
+# Local registry
+regIp="127.0.0.1" && \
+  regPort="5000" && \
+  registry="$regIp:$regPort" && \
+  repo="busybox" && \
+  tag="latest" && \
+  image="$repo:$tag" && \
+  reference="$registry/$image"
+
+# Pull image from docker hub and push to local registry
+docker pull $image && \
+  docker tag $image $reference && \
+  docker push $reference
+```
+
+### Generate image manifest and sign it
+
+```shell
+# Generate self-signed certificates
+openssl req \
+  -x509 \
+  -sha256 \
+  -nodes \
+  -newkey rsa:2048 \
+  -days 365 \
+  -subj "/CN=$regIp/O=example inc/C=IN/ST=Haryana/L=Gurgaon" \
+  -addext "subjectAltName=IP:$regIp" \
+  -keyout example.key \
+  -out example.crt
+
+# Generate image manifest
+manifestFile="manifest-to-sign.json" && \
+  docker generate manifest $image > $manifestFile
+
+# Sign manifest
+signatureFile="manifest-signature.jwt" && \
+  nv2 sign --method x509 \
+    -k example.key \
+    -c example.crt \
+    -r $reference \
+    -o $signatureFile \
+    file:$manifestFile
+```
+
+### Obtain manifest and signature digests
+
+```shell
+manifestDigest="sha256:$(sha256sum $manifestFile | cut -d " " -f 1)" && \
+  signatureDigest="sha256:$(sha256sum $signatureFile | cut -d " " -f 1)"
+```
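These digests follow the OCI convention: `sha256:` plus the lowercase hex SHA-256 of the exact file bytes, 71 characters in total. A quick standalone sanity check of the pattern, using a throwaway file rather than the manifest above:

```shell
# Known input: sha256("hello") is a well-known test vector
printf 'hello' > /tmp/digest-demo.txt
demoDigest="sha256:$(sha256sum /tmp/digest-demo.txt | cut -d " " -f 1)"
echo "$demoDigest" > /tmp/digest-demo.out
cat /tmp/digest-demo.out
```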
+
+### Create an OCI index file referencing the manifest that was signed and its signature as config
+
+```shell
+indexFile="index.json" && \
+  indexMediaType="application/vnd.oci.image.index.v2+json" && \
+  configMediaType="application/vnd.cncf.notary.config.v2+jwt" && \
+  signatureFileSize=`wc -c < $signatureFile` && \
+  manifestMediaType="$(cat $manifestFile | jq -r '.mediaType')" && \
+  manifestFileSize=`wc -c < $manifestFile`
+
+cat <<EOF > $indexFile
+{
+  "schemaVersion": 2,
+  "mediaType": "$indexMediaType",
+  "config": {
+    "mediaType": "$configMediaType",
+    "digest": "$signatureDigest",
+    "size": $signatureFileSize
+  },
+  "manifests": [
+    {
+      "mediaType": "$manifestMediaType",
+      "digest": "$manifestDigest",
+      "size": $manifestFileSize
+    }
+  ]
+}
+EOF
+```
+
+### Obtain index digest
+
+```shell
+indexDigest="sha256:$(sha256sum $indexFile | cut -d " " -f 1)"
+```
+
+### Push signature and index
+
+```shell
+# Initiate blob upload and obtain PUT location
+configPutLocation=`curl -I -X POST -s http://$registry/v2/$repo/blobs/uploads/ | grep "Location: " | sed -e "s/Location: //;s/$/\&digest=$signatureDigest/;s/\r//"`
+
+# Push signature blob
+curl -X PUT -H "Content-Type: application/octet-stream" --data-binary @"$signatureFile" $configPutLocation
+
+# Push index
+curl -X PUT --data-binary @"$indexFile" -H "Content-Type: $indexMediaType" "http://$registry/v2/$repo/manifests/$indexDigest"
+```
+
+### Retrieve signatures of a manifest as referrer metadata
+
+```shell
+# URL encode index config media type
+metadataMediaType=`python3 -c "import urllib.parse, sys; print(urllib.parse.quote(sys.argv[1]))" $configMediaType`
+
+# Retrieve referrer metadata
+curl -s "http://$registry/v2/$repo/manifests/$manifestDigest/referrer-metadata?media-type=$metadataMediaType" | jq
+```
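The URL-encoding step matters because the media type contains a `+`, which a server would otherwise decode as a space in the query string. What the encoder produces, checked standalone:

```shell
configMediaType="application/vnd.cncf.notary.config.v2+jwt"
# urllib.parse.quote keeps "/" by default and percent-encodes "+"
python3 -c "import urllib.parse, sys; print(urllib.parse.quote(sys.argv[1]))" "$configMediaType" > /tmp/enc-demo.out
cat /tmp/enc-demo.out
```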
+
+### Verify signature
+
+```shell
+# Retrieve first signature and store it locally
+metadataDigest=`curl -s "http://$registry/v2/$repo/manifests/$manifestDigest/referrer-metadata?media-type=$metadataMediaType" | jq -r '.referrerMetadata[0]'` && \

It seems the response is not consistent with the one in the referrer-metadata API section.

aviral26

comment created time in 3 days


issue opened notaryproject/nv2

How to sign an index (manifest list)

More specifically, when signing an index (manifest list), should all referenced artifacts be signed?

By looking at the doc, in this section: https://github.com/notaryproject/nv2/blob/prototype-1/docs/distribution/persistance-discovery-options.md#signature-persistence---option-2a-oci-index-signing-a-multi-arch-manifest

(multi-arch-signature diagram) It's not very clear to me whether objects 6~9 are required when signing the multi-arch manifest.
I'm afraid it may generate too many indexes if we require that all referenced artifacts be signed when signing an index.

On the other hand, if we only require objects 2 and 3, it will be hard to tell whether the artifact is signed when the registry receives a request to pull one particular manifest, e.g. object 4.

created time in 3 days

issue comment goharbor/harbor

CSRF token invalid :/

@Fruchtgummi I see the log is not printing the version. Could you clarify how you installed Harbor? Did you compile the source code?

By your latest comment do you mean you fixed the issue?

Fruchtgummi

comment created time in 3 days

issue comment goharbor/harbor

CSRF token invalid

@honzasara The CSRF token validation is done by comparing the CSRF token against a secret cookie that is only assigned when you use the UI. So you should not pass the CSRF token when you curl the API.

honzasara

comment created time in 3 days

Pull request review comment notaryproject/nv2

Proposal for generic reverse lookup

+# OCI Distribution
+
+To support [Notary v2 goals][notaryv2-goals], upload, persistence and discovery of signatures must be supported. Alternative designs were considered, as referenced in [persistance-discovery-options.md](./persistance-discovery-options.md).
+
+This document represents the current working prototype which:
+
+- Leverages [OCI Index][oci-index] to store Notary v2 Signatures
+- Implements `index.config` to align with the [OCI Artifacts][oci-artifacts] approach for artifact type differentiation within a registry.
+- Implements a referrer API to identify referring artifacts, such as which signatures refer to a specific container image.
+
+## Table of Contents
+
+- [Signature Persistence](#signature-persistence)
+- [Signature Discovery](#signature-discovery)
+- [Persisting Referrer Metadata (Signatures)](#persisting-referrer-metadata-signatures)
+- [Implementation](#implementation)
+- [Push, Discover, Pull Prototype](#push-discover-pull-prototype)
+
+## Signature Persistence
+
+Several [options for how to persist a signature were explored][signature-persistance-options]. We measure these options against the [goals of Notary v2][notaryv2-goals], specifically:
+
+- Maintain the original artifact digest and collection of associated tags, supporting existing dev through deployment workflows
+- Multiple signatures per artifact, enabling the originating vendor signature, public registry certification and user/environment signatures
+- Native Persistence within an OCI Artifact enabled, distribution spec based registry
+- Artifact and signature copying within and across OCI Artifact enabled, distribution spec based registries
+- Support multi-tenant registries enabling cloud providers and enterprises to support managed services at scale
+- Support private registries, where public content may be copied to, and new content originated within
+- Air-gapped environments, where the originating registry of content is not accessible
+
+To support the above requirements, signatures are stored as separate [OCI Artifacts][oci-artifacts], persisted as [OCI Index][oci-index] objects. They are maintained as any other artifact in a registry, supporting standard operations such as listing, deleting, garbage collection and any other content addressable operations within a registry.
+
+Following the [OCI Artifacts][oci-artifacts] design, [Notary v2 signatures][nv2-signature-spec] are identified with: `config.mediaType: "application/vnd.cncf.notary.config.v2+jwt"`.
+
+<img src="../../media/signature-as-index.png" width=650>
+
+The above represents the `net-monitor:v1` container image, signed by its originating author (**wabbit-networks**) as well as **acme-rockets**, which imported the image into their private registry.
+The signatures are persisted as OCI Indexes, with a new `index.config` object storing the signature content:
+
+1. **manifest digest for the `net-monitor:v1` image:** `sha256:111ma2d22ae5ef400769fa51c84717264cd1520ac8d93dc071374c1be49a111m`
+    ```JSON
+    {
+      "schemaVersion": 2,
+      "mediaType": "application/vnd.oci.image.manifest.v2+json",
+      "config": {
+        "mediaType": "application/vnd.oci.image.config.v1+json",
+        "digest": "sha256:111ca3788f3464fd9a06386c4d7a8e3018b525278ac4b9da872943d4cfea111c",
+        "size": 1906
+      },
+      "layers": [
+        {
+          "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
+          "digest": "sha256:9834876dcfb05cb167a5c24953eba58c4ac89b1adf57f28f2f9d09af107ee8f0",
+          "size": 32654
+        },
+        {
+          "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
+          "digest": "sha256:ec4b8955958665577945c89419d1af06b5f7636b4ac3da7f12184802ad867736",
+          "size": 73109
+        }
+      ]
+    }
+    ```
+
+2. **index digest for the wabbit-networks signature** `sha256:222ibbf80b44ce6be8234e6ff90a1ac34acbeb826903b02cfa0da11c82cb222i`
+
+    ```json
+    {
+      "schemaVersion": 2,
+      "mediaType": "application/vnd.oci.image.index.v2+json",
+      "config": {
+        "mediaType": "application/vnd.cncf.notary.config.v2+jwt",
+        "digest": "sha256:222cb130c152895905abe66279dd9feaa68091ba55619f5b900f2ebed38b222c",
+        "size": 1906
+      },
+      "manifests": [
+        {
+          "mediaType": "application/vnd.oci.image.manifest.v1+json",
+          "digest": "sha256:111ma2d22ae5ef400769fa51c84717264cd1520ac8d93dc071374c1be49a111m",
+          "size": 7023,
+          "platform": {
+            "architecture": "ppc64le",
+            "os": "linux"
+          }
+        }
+      ]
+    }
+    ```
+
+    The `index.config` contains the following signature information:
+`sha256:222cb130c152895905abe66279dd9feaa68091ba55619f5b900f2ebed38b222c`
+    ```json
+    {
+        "signed": {
+            "mediaType": "application/vnd.oci.image.manifest.v2+json",
+            "digest": "sha256:111ma2d22ae5ef400769fa51c84717264cd1520ac8d93dc071374c1be49a111m",
+            "size": 528,
+            "references": [
+                "registry.wabbit-networks.com/net-monitor:v1"
+            ],
+            "exp": 1627555319,
+            "nbf": 1596019319,
+            "iat": 1596019319
+        },
+        "signature": {
+            "typ": "x509",
+            "sig": "UFqN24K2fLj...",
+            "alg": "RS256",
+            "x5c": [
+                "MIIDszCCApugAwIBAgIUL1anEU/..."
+            ]
+        }
+    }
+    ```
+
+3. **index digest for the acme-rockets signature** `sha256:333ic0c33ebc4a74a0a554c86ac2b28ddf3454a5ad9cf90ea8cea9f9e75c333i`
+
+    ```json
+    {
+      "schemaVersion": 2,
+      "mediaType": "application/vnd.oci.image.index.v2+json",
+      "config": {
+        "mediaType": "application/vnd.cncf.notary.config.v2+jwt",
+        "digest": "sha256:333cc44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b785c333c",
+        "size": 1906
+      },
+      "manifests": [
+        {
+          "mediaType": "application/vnd.oci.image.manifest.v1+json",
+          "digest": "sha256:111ma2d22ae5ef400769fa51c84717264cd1520ac8d93dc071374c1be49a111m",
+          "size": 7023,
+          "platform": {
+            "architecture": "ppc64le",
+            "os": "linux"
+          }
+        }
+      ]
+    }
+    ```
+
+    The `index.config` contains the following signature information:
+`sha256:333cc44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b785c333c`
+    ```json
+    {
+        "signed": {
+            "mediaType": "application/vnd.oci.image.manifest.v2+json",
+            "digest": "sha256:111ma2d22ae5ef400769fa51c84717264cd1520ac8d93dc071374c1be49a111m",
+            "size": 528,
+            "references": [
+                "registry.acme-rockets.com/net-monitor:v1"
+            ],
+            "exp": 1627555319,
+            "nbf": 1596019319,
+            "iat": 1596019319
+        },
+        "signature": {
+            "typ": "x509",
+            "sig": "UFqN24K2fLj...",
+            "alg": "RS256",
+            "x5c": [
+                "MIIDszCCApugAwIBAgIUL1anEU/..."
+            ]
+        }
+    }
+    ```
+
+**Pros with this approach:**
+
+- Utilize the existing `index.manifests` collection for linking artifacts
+- Registries that support oci index already have infrastructure for tracking `index.manifests`, including delete operations and garbage collection
+- Existing distribution-spec upload APIs are utilized
+- Based on the artifact type: `index.config.mediaType: "application/vnd.cncf.notary.config.v2+jwt"`, a registry may implement role checking to confirm the identity pushing the Notary v2 artifact type has the registry's equivalent of a signer role
+- As registry operators may offer role checking for different artifact types, Notary v2 Signatures are just one of many types they may want to authorize
+
+**Cons with this approach:**
+
+- OCI index does not yet support the [OCI config descriptor][oci-descriptor]. This would require a schema change to oci-index, with a version bump.
+  - This has been a [desired item for OCI Artifacts][oci-artifacts-index] to support other artifact types which would base on Index.
+
+### Signature Persistence - Signing Multi-arch Manifests
+
+Taking the above scenario further, a signature can be associated with an individual manifest, or a signature can be applied to an index. The index could be a multi-arch index (windows & linux), or the index might represent a [CNAB][cnab].
+
+In the below case, the `net-monitor` software is available as windows (`net-monitor:v1-win`) and linux (`net-monitor:v1-lin`) images, as well as a multi-arch index (`net-monitor:v1`).
+The platform-specific images and the multi-arch index are all signed by **wabbit-networks** and **acme-rockets**.
+
+<img src="../../media/signature-as-index-signing-multi-arch-index.png" width=1100>
+
+- Objects (1-3) are equivalent to above references, with the exception that (1) is changed from a platform specific manifest to a multi-arch index
+- Objects (4-5) represent architecture specific manifests for the multi-arch manifest (1)
+- Objects (6-9) are Notary v2 signatures by the originating author (**wabbit-networks**) and the consumer (**acme-rockets**)
+
+## Signature Discovery
+
+The [OCI distribution-spec][oci-distribution] describes the action of [pushing content][oci-distribution-push] and [pulling of content][oci-distribution-pull].  Pulling a manifest and the associated layers implies a registry must store some linkage between the manifest and its references to layers and config. There are implied additional references between an [OCI Index][oci-index] and its referenced manifest.
+
+To support the [Notary v2 workflow][notaryv2-workflow], where the system knows of the artifact being referenced (1), but doesn't know what signatures might exist on that artifact (2-3, 6-9), a discovery API is required to return the objects that refer to the target artifact (1).
+
+Similar to pulling an artifact, the referrer API implies a reverse lookup is possible: given an artifact digest, which other objects reference that artifact.
+
+To generalize discovery, a `referrer-metadata` API is proposed to enable discovery of referring objects. To support this reverse lookup prototype, additions are proposed to the Notary v2 fork of the reference implementation [docker/distribution][notaryv2-distribution] through [notaryv2-referrer-api].
+
+A referrer is any registry artifact that has an immutable reference to a manifest. An OCI index is a referrer to each manifest it references. The [OCI image spec][oci-image] does not include a config property for an OCI index and there is no reverse lookup of referrers in docker distribution.
+
+A modified OCI index with an `index.config` property that references a collection of manifests allows us to associate a "type" to the referrer-referenced relationship, where the `index.config.mediaType` = `application/vnd.cncf.notary.config.v2+jwt`.
+
+### referrer-metadata API
+
+```HTTP
+GET http://localhost:5000/v2/net-monitor/manifests/sha256:11wma2d22ae5ef400769fa51c84717264cd1520ac8d93dc071374c1be49a11wm/referrer-metadata
+```
+
+Using the diagram above, the `net-monitor:v1` manifest tag (4) has a digest of `sha256:11wma2d22ae5ef400769fa51c84717264cd1520ac8d93dc071374c1be49a11wm`. When requesting the referring objects, we see 2 signature objects being returned (wabbit-networks (6) & acme-rockets (7)), and an OCI multi-arch index (1).
+
+The response could be in the following format. Note the additional `config-mediaType` to identify the specific artifact type in the results.
+
+```HTTP
+200 OK
+Content-Type: application/json
+{
+  "digest": "sha256:11wma2d22ae5ef400769fa51c84717264cd1520ac8d93dc071374c1be49a11wm",
+  "references": [
+    {
+      "digest": "sha256:222mbbf80b44ce6be8234e6ff90a1ac34acbeb826903b02cfa0da11c82cb222m",
+      "mediaType": "application/vnd.oci.image.index.v1+json",
+      "size": "1024",
+      "config-mediaType": "application/vnd.cncf.notary.config.v2+jwt"
+    },
+    {
+      "digest": "sha256:333mc0c33ebc4a74a0a554c86ac2b28ddf3454a5ad9cf90ea8cea9f9e75c333m",
+      "mediaType": "application/vnd.oci.image.index.v1+json",
+      "size": "1025",
+      "config-mediaType": "application/vnd.cncf.notary.config.v2+jwt"
+    },
+    {
+      "digest": "sha256:111ia2d22ae5ef400769fa51c84717264cd1520ac8d93dc071374c1be49a111i",
+      "mediaType": "application/vnd.oci.image.index.v1+json",
+      "size": "1025",
+      "config-mediaType": "application/vnd.oci.image.index.v1+json"
+    }
+  ]
+}
+```
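A client interested only in Notary v2 signatures can filter the response on `config-mediaType`. A standalone sketch against the sample response above, saved locally rather than fetched from a registry (file paths are illustrative; `size` fields omitted for brevity):

```shell
cat <<'EOF' > /tmp/referrer-response.json
{
  "digest": "sha256:11wma2d22ae5ef400769fa51c84717264cd1520ac8d93dc071374c1be49a11wm",
  "references": [
    {"digest": "sha256:222mbbf80b44ce6be8234e6ff90a1ac34acbeb826903b02cfa0da11c82cb222m",
     "mediaType": "application/vnd.oci.image.index.v1+json",
     "config-mediaType": "application/vnd.cncf.notary.config.v2+jwt"},
    {"digest": "sha256:333mc0c33ebc4a74a0a554c86ac2b28ddf3454a5ad9cf90ea8cea9f9e75c333m",
     "mediaType": "application/vnd.oci.image.index.v1+json",
     "config-mediaType": "application/vnd.cncf.notary.config.v2+jwt"},
    {"digest": "sha256:111ia2d22ae5ef400769fa51c84717264cd1520ac8d93dc071374c1be49a111i",
     "mediaType": "application/vnd.oci.image.index.v1+json",
     "config-mediaType": "application/vnd.oci.image.index.v1+json"}
  ]
}
EOF

# Keep only the referrers whose config identifies them as Notary v2 signatures
python3 - <<'EOF' > /tmp/signature-refs.out
import json
resp = json.load(open("/tmp/referrer-response.json"))
sigs = [r["digest"] for r in resp["references"]
        if r.get("config-mediaType") == "application/vnd.cncf.notary.config.v2+jwt"]
print("\n".join(sigs))
EOF
cat /tmp/signature-refs.out
```

The plain multi-arch index in the sample is excluded because its `config-mediaType` is not the Notary signature type.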
+
+## Persisting Referrer Metadata (Signatures)
+
+The proposal implements a referrer metadata store for manifests: essentially a reverse lookup, keyed by `mediaType`, from a manifest to the config objects of its referrers. For example, when an OCI index is pushed, if it references a config object of media type `application/vnd.cncf.notary.config.v2+jwt`, a link to the config object is recorded in the referrer metadata store of each referenced manifest.

How do you generate the response in the referrer-metadata API section with such a data structure? IMO the linkage is between the referrer and the manifest, rather than between the config object of the referrer and the manifest.

IMO, instead of only storing the referrer metadata, shouldn't you first store the referrer?

aviral26

comment created time in 3 days


Pull request review comment notaryproject/nv2

Proposal for generic reverse lookup

+# OCI Distribution
+
+To support [Notary v2 goals][notaryv2-goals], upload, persistence and discovery of signatures must be supported. Alternative designs were considered, as referenced in [persistance-discovery-options.md](./persistance-discovery-options.md).
+
+This document represents the current working prototype which:
+
+- Leverages [OCI Index][oci-index] to store Notary v2 Signatures
+- Implements `index.config` to align with the [OCI Artifacts][oci-artifacts] approach for artifact type differentiation within a registry.
+- Implements a referrer API to identify referenced artifacts. Such as what signatures refer to a specific container image.
+
+## Table of Contents
+
+- [Signature Persistence](#signature-persistence)
+- [Signature Discovery](#signature-discovery)
+- [Persisting Referrer Metadata (Signatures)](#persisting-referrer-metadata-signatures)
+- [Implementation](#implementation)
+- [Push, Discover, Pull Prototype](#push-discover-pull-prototype)
+
+## Signature Persistence
+
+Several [options for how to persist a signature were explored][signature-persistance-options] . We measure these options against the [goals of Notary v2][notaryv2-goals], specifically:
+
+- Maintain the original artifact digest and collection of associated tags, supporting existing dev through deployment workflows
+- Multiple signatures per artifact, enabling the originating vendor signature, public registry certification and user/environment signatures

I don't think how to store multiple signatures is discussed in this doc: https://github.com/notaryproject/nv2/blob/prototype-1/docs/distribution/persistance-discovery-options.md

I assume that for multiple signatures the admin has to push multiple indexes?

aviral26

comment created time in 3 days


issue opened goharbor/harbor

Inconsistency in registry log

While debugging an issue where the registry was running out of CPU and responses were extremely slow, we found log entries like these:

...
level=info msg="response completed" go.version=go1.14.5 http.request.host=projects-stg.registry.vmware.com http.request.id=80d3f148-109d-41ca-a8b4-93076384d15c http.request.method=GET http.request.remoteaddr=10.199.17.39 http.request.uri="/v2/tkg/antrea/antrea-debian/blobs/sha256:dc196cbcea18f906126b9184c4fa89d6a46a5e98a748000bba9d56c755c2be9a" http.request.useragent="containerd/v1.3.4" http.response.contenttype="application/octet-stream" http.response.duration=1m51.900179248s http.response.status=200 http.response.written=14770572 
level=info msg="response completed" go.version=go1.14.5 http.request.host=projects-stg.registry.vmware.com http.request.id=c57316a8-5c4a-4fc4-b0a9-4ab94d03198d http.request.method=GET http.request.remoteaddr=10.199.17.39 http.request.uri="/v2/tkg/antrea/antrea-debian/blobs/sha256:dc196cbcea18f906126b9184c4fa89d6a46a5e98a748000bba9d56c755c2be9a" http.request.useragent="containerd/v1.3.4" http.response.contenttype="application/octet-stream" http.response.duration=2m20.799846616s http.response.status=200 http.response.written=5161936 

The client may have closed the connection due to a timeout, but the log message still says the response code is 200. More interestingly, the registry is serving the same blob, yet the values of http.response.written are different.

This may be related to the way the upstream registry dumps response info to log messages.

We should understand the root cause.
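One quick way to confirm the inconsistency from the logs themselves is to group the completed-response entries for a single blob URI and compare the http.response.written values. A sketch against the two entries above (log lines abbreviated to the relevant fields; file paths are illustrative):

```shell
# Two "response completed" entries for the same blob, as captured above (abbreviated)
cat <<'EOF' > /tmp/registry-log-sample.txt
http.request.id=80d3f148 http.request.uri="/v2/tkg/antrea/antrea-debian/blobs/sha256:dc196cbc..." http.response.status=200 http.response.written=14770572
http.request.id=c57316a8 http.request.uri="/v2/tkg/antrea/antrea-debian/blobs/sha256:dc196cbc..." http.response.status=200 http.response.written=5161936
EOF

# Same blob, same 200 status, but two distinct byte counts -> suspicious
grep -o 'http.response.written=[0-9]*' /tmp/registry-log-sample.txt \
  | cut -d= -f2 | sort -u > /tmp/written-sizes.out
cat /tmp/written-sizes.out
```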

created time in 7 days

issue comment goharbor/harbor

[OIDC] Conflict in username, when change OIDC domain

@yue-wen Even if you have to change the domain of the OIDC provider, I think it may be possible to update the configuration to make sure the issuer stays the same.

As for your suggestion, if we decide to support multiple OIDC providers, groups with the same name from different OIDC providers must be treated as different groups, for the same reason we can't assume jack from google and jack from facebook are the same person. So I don't think there can be a smooth migration with such an approach.

yue-wen

comment created time in 7 days

issue closed goharbor/harbor

support SAML for TAS

When selecting uaa for TAS, we need the ability to select SAML/OIDC for consistency with the uaa-for-PKS experience. This is a placeholder tracking issue; for detailed requirements please ping me

closed time in 8 days

xaleeks

issue comment goharbor/harbor

support SAML for TAS

Confirmed this is not needed.

xaleeks

comment created time in 8 days

issue comment goharbor/harbor

[OIDC] Conflict in username, when change OIDC domain

@yue-wen Unfortunately this is the current design: Harbor uses the issuer and subject to identify users from the OIDC provider, and your OIDC provider uses the domain as the issuer. I would suggest you stick to the same issuer when you change the domain, because logically it is the same one.

I agree the current approach does not have enough flexibility to cover the corner cases. I'm thinking maybe we should introduce other settings so the admin can customize the claim used for identifying the user.

There is another issue related to this limitation: https://github.com/goharbor/harbor/issues/10797

yue-wen

comment created time in 8 days

issue comment goharbor/harbor

[feature request] add hostedDomain (hd) validation in OIDC auth mode

@dkulchinsky

I think your requirement is some allowlist for filtering the ID token, such that only users meeting the criteria can be onboarded and use Harbor? Is that correct?

But I'm not sure if hd is a widely used claim?

dkulchinsky

comment created time in 8 days

issue comment goharbor/harbor

Azure AD OIDC - Groups list from JWT token is entered as a single group in Harbor

@ivan-georgiev I double checked the code. Currently Harbor favors the data from userinfo if the ID provider has it in the response of the well-known URI: https://github.com/goharbor/harbor/blob/64af09d52bc814d3e1621d5114302406e9394a44/src/common/utils/oidc/helper.go#L248-L247

So could you double check what was returned by the userinfo endpoint?

More details see: https://openid.net/specs/openid-connect-core-1_0.html#UserInfoRequest

ivan-georgiev

comment created time in 8 days

issue comment goharbor/harbor

Sync username attribute from OIDC ID token claim.

@jkroepke Thanks for raising this. For this kind of issue we have two choices:

  1. Treat the external ID provider as the single source of truth and always sync the info when the user logs in.
  2. Allow the user to update the profile after he's onboarded via an external ID manager (this also applies to LDAP).

I think option 2 is more flexible, because in addition to username and email, there may be other attributes in the user profile that he may want to update after being onboarded.

In particular, as the username is also used for docker login, what about introducing a display name, so that attribute can be updated after marriage while the username stays the same?

jkroepke

comment created time in 8 days

issue comment goharbor/harbor

State mismatch - Error after Harbor Upgrade to 2.1.0

@yue-wen The state is stored in the session; it's possible that the data mapped to the sid was broken after the upgrade.

Is it consistently reproducible?

For more discussion regarding the state mismatch, please see #12982

Vad1mo

comment created time in 8 days

issue comment goharbor/harbor

failed to verify connection

Thanks @Thoro @evanstucker-hates-2fa. I'm not sure we have the bandwidth to do it right away, but you can ping me in the harbor-dev channel in CNCF slack, and see if we can find some time to set up a zoom.

evanstucker-hates-2fa

comment created time in 8 days

issue comment goharbor/harbor

State mismatch when login via oidc

@Leo-ljr You only see the cookie when the Set-Cookie header is passed to the browser.

I think you can check why the Set-Cookie header is not passed to the browser. For example, was it passed to haproxy? Did haproxy drop this header in the response for some reason?

I may be wrong that the root cause is haproxy, but I can't reproduce your problem and the only difference in your env is the haproxy. So I suggest we start by investigating at the haproxy level.

Leo-ljr

comment created time in 8 days


issue comment goharbor/harbor

Harbor redirects to OIDC login page, when the username or password invalid

We may consider redesigning the login dialog when Harbor is configured to use OIDC.

For example, the input text boxes for username and password can be folded, and they will appear only when the user clicks the link "Login as internal admin user"

luckymagic7

comment created time in 8 days


push event goharbor/harbor-helm

Wenkai Yin

commit sha 8ad9b9245aba5a96c6d9751f8b685a516dcb0892

Upload the Harbor chart 1.5.0 to the chart repository Signed-off-by: Wenkai Yin <yinw@vmware.com>

view details

Daniel Jiang

commit sha ab567f25ad89471f2aa20875a7326b47fba2c5d9

Merge pull request #743 from ywk253100/200923_1.5.0 Upload the Harbor chart 1.5.0 to the chart repository

view details

push time in 8 days

PR merged goharbor/harbor-helm

Upload the Harbor chart 1.5.0 to the chart repository

Upload the Harbor chart 1.5.0 to the chart repository

Signed-off-by: Wenkai Yin yinw@vmware.com

+28 -1

0 comment

2 changed files

ywk253100

pr closed time in 8 days

PullRequestReviewEvent

issue closedgoharbor/harbor

OIDC log in breaks when redis has an NFS stale handle

I was asked to open this as an issue here: https://cloud-native.slack.com/archives/CC1E09J6S/p1595266394424000

Expected behavior and actual behavior:

Perhaps redis should be marked as unhealthy if it logs an nfs stale handle.

Steps to reproduce the problem:

I am running k3s on a single ubuntu vm with rancher's local path pvc provisioner. Harbor is installed via helm chart template.

The pvc provisioner is configured to create folders within /mnt/kube which is an NFS mount from another system.

I can't really provide any more details than that. This all appears very dependent on my particular setup but I believe the harbor issue is that it doesn't recognize this as an error state.

In a multi-host cluster, harbor could try scheduling the redis pod to use a pvc on another host.

Restarting the VM resolves the issue for some time.

Versions: Please specify the versions of following systems.

  • harbor version: v2.0.1
  • kubernetes version: v1.18.4+k3s1
  • host: ubuntu 18.04 LTS

closed time in 8 days

brandonkal

issue commentgoharbor/harbor

OIDC log in breaks when redis has an NFS stale handle

Closing this issue as this looks like a env issue.

brandonkal

comment created time in 8 days

issue commentgoharbor/harbor

Group ID instead of Group Name when using Azure AD OICD

@yaron Thanks for the explanation, I now understand the issue.

However, for simplicity and maintainability, we want to keep a unified workflow for all OIDC providers, such that in the pipeline we'll only test dex.

Currently there's no plan to add specific logic for different OIDC vendors.

bgsz

comment created time in 8 days

issue commentgoharbor/harbor

OIDC login: add "auto redirect" option for SSO login

@xaleeks I don't think this is covered in the v2.2 plan. This requirement mainly focuses on UX improvement; it will not block any workflow if it is not implemented, hence I'm putting it in the backlog.

iWangJiaxiang

comment created time in 8 days

issue closedgoharbor/harbor

admin login failed

Hi,

I use the admin account in a Jenkins CI pipeline; sometimes the docker login command fails, so I want to find out why this happens.

Here's the Jenkins output.

10:38:27  + echo ****
10:38:27  + docker login -u **** --password-stdin harbor.company.net
10:38:27  Error response from daemon: Get https://harbor.company.net/v2/: unauthorized: authentication required

I found that you have code for locking an account when login fails, in https://github.com/goharbor/harbor/issues/9429 (but I don't know how many failures trigger the lock, or how long the account stays locked).

I searched the pod log and found admin login failures.

# k logs harbor-prod-harbor-core-6cfdcd5bf5-s4cns -n harbor-prod | grep "failed to authenticate admin" | tail
2020-02-19T07:24:16Z [ERROR] [/core/filter/security.go:430]: failed to authenticate admin: Failed to authenticate user, due to error 'Invalid credentials'
2020-02-19T07:24:47Z [ERROR] [/core/filter/security.go:430]: failed to authenticate admin: Failed to authenticate user, due to error 'Invalid credentials'
2020-02-19T07:28:37Z [ERROR] [/core/filter/security.go:430]: failed to authenticate admin: Failed to authenticate user, due to error 'Invalid credentials'
2020-02-19T07:28:56Z [ERROR] [/core/filter/security.go:430]: failed to authenticate admin: Failed to authenticate user, due to error 'Invalid credentials'
2020-02-19T07:30:26Z [ERROR] [/core/filter/security.go:430]: failed to authenticate admin: Failed to authenticate user, due to error 'Invalid credentials'
2020-02-19T07:30:56Z [ERROR] [/core/filter/security.go:430]: failed to authenticate admin: Failed to authenticate user, due to error 'Invalid credentials'
2020-02-19T07:37:46Z [ERROR] [/core/filter/security.go:430]: failed to authenticate admin: Failed to authenticate user, due to error 'Invalid credentials'
2020-02-19T07:47:36Z [ERROR] [/core/filter/security.go:430]: failed to authenticate admin: Failed to authenticate user, due to error 'Invalid credentials'
2020-02-19T07:49:29Z [ERROR] [/core/filter/security.go:430]: failed to authenticate admin: Failed to authenticate user, due to error 'Invalid credentials'
2020-02-19T07:51:47Z [ERROR] [/core/filter/security.go:430]: failed to authenticate admin: Failed to authenticate user, due to error 'Invalid credentials'

I don't know why the admin login fails, since no one except me knows the admin password.

I grepped before and after to find out the reason.

# k logs harbor-prod-harbor-core-6cfdcd5bf5-s4cns -n harbor-prod | grep -B3 -A3 "failed to authenticate admin" | tail -20
2020-02-24T23:32:36Z [ERROR] [/core/filter/security.go:430]: failed to authenticate admin: Failed to authenticate user, due to error 'Invalid credentials'
2020/02/24 23:32:37 [D] [server.go:2774] |      127.0.0.1| 200 | 1.627553764s|   match| GET      /service/token   r:/service/token
2020/02/24 23:32:37 [D] [server.go:2774] |   10.244.2.189| 200 |   9.509607ms|   match| POST     /service/notifications   r:/service/notifications
2020/02/24 23:32:37 [D] [server.go:2774] |      127.0.0.1| 200 |  78.393487ms|   match| GET      /v2/se/swarm_at_sa/manifests/latest   r:/v2/*
--
2020/02/24 23:36:37 [D] [server.go:2774] |     10.244.2.1| 200 |  10.669165ms|   match| GET      /api/ping   r:/api/ping
2020/02/24 23:36:42 [D] [server.go:2774] |      127.0.0.1| 401 |   12.29183ms|   match| GET      /v2/   r:/v2/*
2020-02-24T23:36:42Z [ERROR] [/core/filter/security.go:244]: Failed to verify secret: user is not onboarded as OIDC user
2020-02-24T23:36:43Z [ERROR] [/core/filter/security.go:430]: failed to authenticate admin: Failed to authenticate user, due to error 'Invalid credentials'
2020/02/24 23:36:46 [D] [server.go:2774] |     10.244.2.1| 200 |  11.216593ms|   match| GET      /api/ping   r:/api/ping
2020/02/24 23:36:47 [D] [server.go:2774] |     10.244.2.1| 200 |  11.265171ms|   match| GET      /api/ping   r:/api/ping
2020/02/24 23:36:51 [D] [server.go:2774] |      127.0.0.1| 200 |  15.207937ms|   match| GET      /v2/core_svr19a1/amf-aic/tags/list   r:/v2/*
--
2020/02/24 23:41:38 [D] [server.go:2774] |     10.244.2.1| 200 |  17.973302ms|   match| GET      /api/ping   r:/api/ping
2020/02/24 23:41:43 [D] [server.go:2774] |      127.0.0.1| 401 |  18.821842ms|   match| GET      /v2/   r:/v2/*
2020-02-24T23:41:43Z [ERROR] [/core/filter/security.go:244]: Failed to verify secret: user is not onboarded as OIDC user
2020-02-24T23:41:45Z [ERROR] [/core/filter/security.go:430]: failed to authenticate admin: Failed to authenticate user, due to error 'Invalid credentials'
2020/02/24 23:41:46 [D] [server.go:2774] |     10.244.2.1| 200 |  13.547409ms|   match| GET      /api/ping   r:/api/ping
2020/02/24 23:41:47 [D] [server.go:2774] |      127.0.0.1| 401 |  18.085535ms|   match| GET      /v2/   r:/v2/*
2020-02-24T23:41:47Z [ERROR] [/core/filter/security.go:244]: Failed to verify secret: failed to refresh token

I can see the "user is not onboarded as OIDC user" message. I am using the OIDC integration now.

I suspect a problem between the OIDC integration and the admin account. Please check this problem.

Versions:

  • harbor version: 1.10.0

Thanks,

closed time in 8 days

Hokwang

issue commentgoharbor/harbor

admin login failed

I see there are different conversations happening in this issue.

I'm closing this one, and if you still see the problem, please create a new issue.

Hokwang

comment created time in 8 days

issue commentgoharbor/harbor

admin login failed

@skandragon We do not maintain the chart under bitnami; please talk to the maintainers of that chart to help you debug.

Hokwang

comment created time in 8 days

issue commentgoharbor/harbor

admin login failed

@xtreme-conor-nosal Conor, there will be debug message like Login failed, locking xxx....: https://github.com/goharbor/harbor/blob/b21f9dc6f10dec60597d7a279c1d5f4b999dcba7/src/core/auth/authenticator.go#L152

Hokwang

comment created time in 8 days

PR closed goharbor/harbor-helm

[harbor] Bump minor version to v2.1.0

In this PR, we bump Harbor docker images to the latest stable release which points to v2.1.0.

PS: This should point to new release branch at 1.5.0 (?)

+17 -17

4 comments

2 changed files

dntosas

pr closed time in 8 days

pull request commentgoharbor/harbor-helm

[harbor] Bump minor version to v2.1.0

The master branch will be pinned to dev

There has been a release branch that is pinned to v2.1.0: https://github.com/goharbor/harbor-helm/tree/1.5.0

After final verification, we'll tag the release.

dntosas

comment created time in 8 days

issue closedgoharbor/harbor

when use oidc auth, which claim do you use?

Hi,

When I log in with OIDC, I can see my profile. [screenshot]

I guess Harbor only cares about username, email, and full name for now, and I think username means ID. I am currently in a situation where the username and full name are the same.

Which claims do you use for username and full name? Please let me know. Email is fine.

Thanks.

closed time in 8 days

Hokwang

issue commentgoharbor/harbor

when use oidc auth, which claim do you use?

Closing this issue as #9311 has been merged.

Hokwang

comment created time in 8 days

PullRequestReviewEvent

issue commentgoharbor/harbor

Need refactor replication registry client and http client

Per discussion with @ywk253100 this should be handled in the gcr adapter.

bitsf

comment created time in 10 days

issue commentgoharbor/harbor

Only clean cache of blobs which are deleted during non blocking GC in redis

@wy65701436 We should double check the impact for deleting the entries from redis, I don't think walking through 100k entries should take that long.

heww

comment created time in 10 days

issue commentgoharbor/harbor

replicate the image folder is missed when specify “destination namespace”

@bitsf This can be fixed along with the enhancement for harbor <-> harbor replication?

WenwuPeng

comment created time in 10 days

issue commentgoharbor/harbor

Health check of replication adapter may be problematic

Let's investigate whether we can use HEAD instead of GET for the health check. If HEAD is not possible, this has to be a won't-fix.

reasonerjt

comment created time in 10 days

issue commentgoharbor/harbor

Refactor the legacy code to enabe the database transaction

We should fix such issues by adapting the old APIs to the new programming model, rather than inventing a new way to fix it.

ywk253100

comment created time in 10 days

issue commentgoharbor/harbor

Add oidc_admin_group to OIDC authentication like LDAP has

This dev work and follow up discussion will be tracked in #13113

Eric-Fontana-Bose

comment created time in 10 days

issue openedgoharbor/harbor

Support admin group in

What can we help you?

created time in 10 days

push eventgoharbor/harbor-helm

t.fouchard

commit sha 7a94a4f4cd0357caefb6b032f783cb59ef575ad2

s3 storage allow to skip verify Signed-off-by: hightoxicity <tony.fouchard@prevision.io>

view details

Daniel Jiang

commit sha 1b676d68401b1c15cdb03a4e34787b2ed24756b4

Merge pull request #496 from hightoxicity/feat-s3-storage-skip-verify s3 storage allow to skip verify

view details

push time in 10 days

PullRequestReviewEvent

push eventreasonerjt/harbor-helm

Daniel Jiang

commit sha 2052c78198d7020d5fd50bfadc297f0c9b0b0564

Add startup probe to harbor-core Fixes #502 Signed-off-by: Daniel Jiang <jiangd@vmware.com>

view details

push time in 13 days

issue commentgoharbor/harbor

Harbor Notary + Content Trust returning 401 on basic auth challenge (Deployed using Helm Chart 1.4.2)

@DandyDeveloper This is tricky as I don't know exactly how the registry-image-resource works.

Please double-check whether the client sends the same credentials (username/password) to both the registry and the CONTENT_TRUST_SERVER.

DandyDeveloper

comment created time in 13 days

PR opened goharbor/harbor-helm

Bump the the requirement for k8s version

Signed-off-by: Daniel Jiang jiangd@vmware.com

+1 -1

0 comment

1 changed file

pr created time in 14 days

pull request commentgoharbor/harbor-helm

Update Ingress apiVersion for support Kubernetes v1.16

Thanks! I'll write another PR to remove the if chunks, as we'll only support v1.16+ since the v1.5.0 chart.

ramrodo

comment created time in 14 days

PullRequestReviewEvent

create barnchreasonerjt/harbor-helm

branch : bump-up-k8s-req

created branch time in 14 days

PR opened goharbor/harbor-helm

Add startup probe to harbor-core

Fixes #502

Signed-off-by: Daniel Jiang jiangd@vmware.com

+15 -7

0 comment

4 changed files

pr created time in 14 days

push eventreasonerjt/harbor-helm

Daniel Jiang

commit sha 7a55206a8641c94341ca2dd4d48445f24777b157

Add startup probe to harbor-core Fixes #502 Signed-off-by: Daniel Jiang <jiangd@vmware.com>

view details

push time in 14 days

pull request commentgoharbor/harbor-helm

Fix the filesystem permission for registry and redis using initContainer

Why does fsGroup not work for you?

huats

comment created time in 14 days

issue commentgoharbor/harbor

Support custom oauth request parameters for OIDC Authentication

It is. Are such "custom parameters" mentioned in any of the specs?

mhuangpivotal

comment created time in 14 days

issue commentgoharbor/harbor

Harbor Notary + Content Trust returning 401 on basic auth challenge (Deployed using Helm Chart 1.4.2)

How do you automate this process? The flow for docker content trust should work, but it requires interaction on the console, so something like expect may help.

DandyDeveloper

comment created time in 14 days

issue closedgoharbor/harbor

Cannot replicate images from the Harbor when current auth_mode is OIDC

It seems that only the admin user can be used to replicate images, but that is unsafe and not allowed. A robot account cannot replicate images, and the CLI token of an OIDC user is only used for CLI login.

closed time in 15 days

stonezdj

issue commentgoharbor/harbor

Cannot replicate images from the Harbor when current auth_mode is OIDC

Closing since it has been fixed in v2.1

stonezdj

comment created time in 15 days

issue commentgoharbor/harbor

Pod fails when deploying Harbor on Kubernetes

I told you in https://github.com/goharbor/harbor/issues/12251#issuecomment-645514769

zjcnew

comment created time in 15 days

issue closedgoharbor/harbor

Pod fails when deploying Harbor on Kubernetes

[screenshot: 企业微信截图_20200616150425]

kubectl logs harbor-harbor-core-8db96fcd-l2jsq -n harbor

ls: /harbor_cust_cert: No such file or directory
2020-06-16T07:01:54Z [INFO] [/controller/artifact/processor/processor.go:58]: the processor to process media type application/vnd.oci.image.index.v1+json registered
2020-06-16T07:01:54Z [INFO] [/controller/artifact/processor/processor.go:58]: the processor to process media type application/vnd.docker.distribution.manifest.list.v2+json registered
2020-06-16T07:01:54Z [INFO] [/controller/artifact/processor/processor.go:58]: the processor to process media type application/vnd.docker.distribution.manifest.v1+prettyjws registered
2020-06-16T07:01:54Z [INFO] [/controller/artifact/processor/processor.go:58]: the processor to process media type application/vnd.oci.image.config.v1+json registered
2020-06-16T07:01:54Z [INFO] [/controller/artifact/processor/processor.go:58]: the processor to process media type application/vnd.docker.container.image.v1+json registered
2020-06-16T07:01:54Z [INFO] [/controller/artifact/processor/processor.go:58]: the processor to process media type application/vnd.cncf.helm.config.v1+json registered
2020-06-16T07:01:54Z [INFO] [/controller/artifact/processor/processor.go:58]: the processor to process media type application/vnd.cnab.manifest.v1 registered
2020-06-16T07:01:54Z [INFO] [/replication/adapter/native/adapter.go:36]: the factory for adapter docker-registry registered
2020-06-16T07:01:54Z [INFO] [/replication/adapter/harbor/adaper.go:31]: the factory for adapter harbor registered
2020-06-16T07:01:54Z [INFO] [/replication/adapter/dockerhub/adapter.go:25]: Factory for adapter docker-hub registered
2020-06-16T07:01:54Z [INFO] [/replication/adapter/huawei/huawei_adapter.go:27]: the factory of Huawei adapter was registered
2020-06-16T07:01:54Z [INFO] [/replication/adapter/googlegcr/adapter.go:29]: the factory for adapter google-gcr registered
2020-06-16T07:01:54Z [INFO] [/replication/adapter/awsecr/adapter.go:47]: the factory for adapter aws-ecr registered
2020-06-16T07:01:54Z [INFO] [/replication/adapter/azurecr/adapter.go:15]: Factory for adapter azure-acr registered
2020-06-16T07:01:54Z [INFO] [/replication/adapter/aliacr/adapter.go:31]: the factory for adapter ali-acr registered
2020-06-16T07:01:54Z [INFO] [/replication/adapter/jfrog/adapter.go:30]: the factory of jfrog artifactory adapter was registered
2020-06-16T07:01:54Z [INFO] [/replication/adapter/quayio/adapter.go:38]: the factory of Quay.io adapter was registered
2020-06-16T07:01:54Z [INFO] [/replication/adapter/helmhub/adapter.go:30]: the factory for adapter helm-hub registered
2020-06-16T07:01:54Z [INFO] [/replication/adapter/gitlab/adapter.go:17]: the factory for adapter gitlab registered
2020-06-16T07:01:54Z [INFO] [/core/controllers/base.go:299]: Config path: /etc/core/app.conf
2020-06-16T07:01:54Z [INFO] [/core/main.go:111]: initializing configurations...
2020-06-16T07:01:54Z [INFO] [/core/config/config.go:83]: key path: /etc/core/key
2020-06-16T07:01:54Z [INFO] [/core/config/config.go:60]: init secret store
2020-06-16T07:01:54Z [INFO] [/core/config/config.go:63]: init project manager
2020-06-16T07:01:54Z [INFO] [/core/config/config.go:95]: initializing the project manager based on local database...
2020-06-16T07:01:54Z [INFO] [/core/main.go:113]: configurations initialization completed
2020-06-16T07:01:54Z [INFO] [/common/dao/base.go:84]: Registering database: type-PostgreSQL host-harbor-harbor-database port-5432 databse-registry sslmode-"disable"
[ORM]2020/06/16 07:01:54 register db Ping default, pq: database "registry" does not exist
2020-06-16T07:01:54Z [FATAL] [/core/main.go:120]: failed to initialize database: register db Ping default, pq: database "registry" does not exist

kubectl logs harbor-harbor-jobservice-74d94cff5b-j6rnk -n harbor

2020-06-16T07:13:22Z [INFO] [/replication/adapter/helmhub/adapter.go:30]: the factory for adapter helm-hub registered
2020-06-16T07:13:22Z [INFO] [/replication/adapter/gitlab/adapter.go:17]: the factory for adapter gitlab registered
2020-06-16T07:13:22Z [INFO] [/common/config/store/driver/rest.go:31]: get configuration from url: http://harbor-harbor-core:80/api/internal/configurations
2020-06-16T07:13:22Z [INFO] [/jobservice/logger/sweeper_controller.go:97]: 0 outdated log entries are sweepped by sweeper *sweeper.FileSweeper
2020-06-16T07:13:52Z [ERROR] [/common/config/store/driver/rest.go:34]: Failed on load rest config err:Get http://harbor-harbor-core:80/api/internal/configurations: dial tcp 10.107.104.211:80: i/o timeout, url:http://harbor-harbor-core:80/api/internal/configurations
2020-06-16T07:13:52Z [ERROR] [/jobservice/job/impl/context.go:75]: Job context initialization error: failed to load rest config
2020-06-16T07:13:52Z [INFO] [/jobservice/job/impl/context.go:78]: Retry in 9 seconds
2020-06-16T07:14:01Z [INFO] [/common/config/store/driver/rest.go:31]: get configuration from url: http://harbor-harbor-core:80/api/internal/configurations
2020-06-16T07:14:31Z [ERROR] [/common/config/store/driver/rest.go:34]: Failed on load rest config err:Get http://harbor-harbor-core:80/api/internal/configurations: dial tcp 10.107.104.211:80: i/o timeout, url:http://harbor-harbor-core:80/api/internal/configurations
2020-06-16T07:14:31Z [ERROR] [/jobservice/job/impl/context.go:75]: Job context initialization error: failed to load rest config
2020-06-16T07:14:31Z [INFO] [/jobservice/job/impl/context.go:78]: Retry in 13 seconds
2020-06-16T07:14:44Z [INFO] [/common/config/store/driver/rest.go:31]: get configuration from url: http://harbor-harbor-core:80/api/internal/configurations
2020-06-16T07:15:14Z [ERROR] [/common/config/store/driver/rest.go:34]: Failed on load rest config err:Get http://harbor-harbor-core:80/api/internal/configurations: dial tcp 10.107.104.211:80: i/o timeout, url:http://harbor-harbor-core:80/api/internal/configurations
2020-06-16T07:15:14Z [ERROR] [/jobservice/job/impl/context.go:75]: Job context initialization error: failed to load rest config
2020-06-16T07:15:14Z [INFO] [/jobservice/job/impl/context.go:78]: Retry in 19 seconds
2020-06-16T07:15:33Z [INFO] [/common/config/store/driver/rest.go:31]: get configuration from url: http://harbor-harbor-core:80/api/internal/configurations
2020-06-16T07:16:03Z [ERROR] [/common/config/store/driver/rest.go:34]: Failed on load rest config err:Get http://harbor-harbor-core:80/api/internal/configurations: dial tcp 10.107.104.211:80: i/o timeout, url:http://harbor-harbor-core:80/api/internal/configurations
2020-06-16T07:16:03Z [ERROR] [/jobservice/job/impl/context.go:75]: Job context initialization error: failed to load rest config
2020-06-16T07:16:03Z [INFO] [/jobservice/job/impl/context.go:78]: Retry in 29 seconds
2020-06-16T07:16:32Z [INFO] [/common/config/store/driver/rest.go:31]: get configuration from url: http://harbor-harbor-core:80/api/internal/configurations

kubectl logs harbor-harbor-notary-signer-95ff9c6b5-fdjrt -n harbor

2020/06/16 07:13:27 Failed to Ping DB, sleep for 1 second.
2020/06/16 07:13:29 Failed to Ping DB, sleep for 1 second.
2020/06/16 07:13:30 Failed to Ping DB, sleep for 1 second.
2020/06/16 07:13:31 Failed to Ping DB, sleep for 1 second.
2020/06/16 07:13:32 Failed to Ping DB, sleep for 1 second.
2020/06/16 07:13:33 Failed to Ping DB, sleep for 1 second.
2020/06/16 07:13:34 Failed to Ping DB, sleep for 1 second.
2020/06/16 07:13:35 Failed to Ping DB, sleep for 1 second.
2020/06/16 07:13:36 Failed to Ping DB, sleep for 1 second.
2020/06/16 07:13:37 Failed to Ping DB, sleep for 1 second.
2020/06/16 07:13:38 Failed to Ping DB, sleep for 1 second.
2020/06/16 07:13:38 Failed to connect DB after 30 seconds, time out.

I have tried redeploying many times and it still does, what should I do?

closed time in 15 days

zjcnew

issue closedgoharbor/harbor-helm

(external database) - failed to initialize database: register db Ping `default`, pq: database "registry" does not exist

Hi,

I'm trying to install Harbor (Helm chart) with a setup of external components: RDS (Postgres), Elasticache (Redis) and S3 storage.

Upon chart install, multiple pods, including the core one, fail with the below error:

port-5432 databse-registry sslmode-"disable"
[ORM]2020/04/15 08:45:56 register db Ping `default`, pq: database "registry" does not exist
2020-04-15T08:45:56Z [FATAL] [/core/main.go:188]: failed to initialize database: register db Ping `default`, pq: database "registry" does not exist

It doesn't seem to be an issue with the RDS instance itself (or connectivity to it) as the pods come up with no issues after I manually create the registry, clair, notarysigner and notaryserver databases myself.

As I understand, those databases should be created on Harbor initialization. Is there any way of making it work without having to manually create the databases, which is obviously not scalable?

Thanks.

Harbor Helm chart version: 1.3.1 Kubernetes version: 1.15.4

closed time in 15 days

cparadal

issue commentgoharbor/harbor

Docker Trust Signer Failure / Docker Push failure

@nijamashruwala please provide the complete logs of the different pods when you reproduce this problem. If the request reached the notary pod there should be errors; otherwise, the 500 was returned by some proxy, depending on your env.

nijamashruwala

comment created time in 15 days

issue commentgoharbor/harbor

Dirty database when updating Harbor 1.9.2 to 2.0.

@conradj87

The first time you upgrade to v2.x, when harbor-core starts it will try to do the migration, and there should be some error messages to help us understand the root cause.

Andreiaotto

comment created time in 15 days

issue closedgoharbor/harbor

Cannot refresh OIDC ID using Docker or Helm CLI's

Whenever we do a Helm upgrade for Harbor, or when the TTL of the OIDC ID token expires, we cannot refresh the token using the Docker or Helm CLIs, and either of the below errors is thrown.

"Login did not succeed, error: Error response from daemon: Get https://<harbor_url>/v2/: unable to decode token response: invalid character '<' looking for beginning of value" or "Authenticating with existing credentials... Stored credentials invalid or expired"

The only way I could get it to work was by logging out and logging in again to Harbor using the UI Portal; after that, the token issue gets resolved in the CLI.

closed time in 15 days

kiran-koshy

issue commentgoharbor/harbor

Cannot refresh OIDC ID using Docker or Helm CLI's

Closing due to inactivity.

kiran-koshy

comment created time in 15 days

issue commentgoharbor/harbor

404 Error when using Azure as OIDC Provider

I'll keep this open; if more users hit the same issue, we may consider making this improvement.

fl-max

comment created time in 15 days

issue commentgoharbor/harbor

404 Error when using Azure as OIDC Provider

This may help, but I don't think most users will consider the well-known URI as the OIDC endpoint, and I think it's documented.

fl-max

comment created time in 15 days

issue commentgoharbor/harbor

Harbor redirects to OIDC login page, when the username or password invalid

@luckymagic7 Thanks a lot for the nice screenshot!

This is the current design: once Harbor is set to OIDC auth mode, local login does not work for regular users; the input fields are for the internal admin user only.

luckymagic7

comment created time in 15 days

issue commentgoharbor/harbor

Google OAuth endpoint update

I think it SHOULD work, have you had a chance to verify? @rvennam-lbg

rvennam-lbg

comment created time in 15 days

issue commentgoharbor/harbor

can't do regular login to harbor when oidc client active

Morriz, currently Harbor does not support multiple auth backends at the same time.

The login dialog still appears for the internal admin user (the only one) to login.

I agree that UX-wise this is not friendly, but function-wise this is the design right now.

Morriz

comment created time in 15 days

issue commentgoharbor/harbor

add de-de-lang.json (German as UI Language)

No objection

sluetze

comment created time in 15 days

issue commentgoharbor/harbor

Image not reflecting in harbour console UI

@leeadh This may happen if you are using v1.x, due to the reliability of the registry's notification mechanism.

leeadh

comment created time in 15 days

issue commentgoharbor/harbor

Wrong OIDC token recognized in HTTP Header Authorization: Bearer <token>

We want to provide a slightly simpler solution here: the way to verify an access token differs between vendors, but the way to verify the ID token is consistent b/c it can be verified offline.

I don't think we want to introduce such complexity in the short term.

brianmajor

comment created time in 15 days

issue commentgoharbor/harbor

State mismatch when login via oidc

@lz006 Could you please see my conversation with @Leo-ljr and try to check whether the cookie is dropped by haproxy?

Leo-ljr

comment created time in 15 days

issue commentgoharbor/harbor

First and last name from OIDC is set to username

IMO, to fix this we may allow the user to set more attributes in the onboard dialog, which is a common practice.

In addition, we may consider allowing users to update their profiles partially after onboarding.

Natanande

comment created time in 15 days
