
cncf/sig-security 252

🔐 CNCF Special Interest Group on Security -- secure access, policy control, privacy, auditing, explainability and more!

in-toto/in-toto 183

in-toto is a framework to protect supply chain integrity.

cmatthew/Lind-misc 3

Misc files for the Lind project

detectivelyw/lind_paper_atc17 1

Repo for Lind paper camera-ready version for USENIX ATC '17

detectivelyw/lind_paper_nsdi17 1

Lind paper for NSDI'17

detectivelyw/lind_paper_usenix16 1

Repo for Lind Paper Usenix Security 2016

ebrynne/ViewPoints 1

ViewPoints!

JustinCappos/dockersecretspaper 1

Paper draft describing Docker Secrets

JustinCappos/educational-hardtohack 1

This program is part of an educational assignment in cybersecurity at NYU. This code is an early version of the Secure Turing-Complete Sandbox Challenge written by Prof Justin Cappos. https://seattle.poly.edu/wiki/EducationalAssignments/SecureTuringCompleteSandboxChallengeBuild

Pull request review comment: notaryproject/requirements

End to end scenarios, accounting for PR #1 feedback

# Notary Signing - Scenarios

As containers and cloud native artifacts become the common unit of deployment, users want to know the artifacts in their environments are authentic and unmodified.

These Notary v2 scenarios define end-to-end workflows for signing artifacts in a generalized way, storing and moving them between OCI compliant registries, and validating them with various artifact hosts and tooling. Notary v2 focuses on the signing of content, enabling e2e workflows without specifying what those workflows must be.

By developing a generalized solution, artifact authors may develop their unique artifact types, allowing them to leverage Notary for signing and OCI compliant registries for distribution.

## OCI Images & Artifacts

The [OCI TOB][oci-tob] has adopted [OCI Artifacts][artifacts-repo], generalizing container images as one of many types of artifacts that may be stored in a registry. Other artifact types currently include:

* [Helm Charts][helm-registry]
* [Singularity][singularity]
* Car firmware updates, deployed from OCI Artifact registries

## Goals

This document serves as the requirements and constraints of a generalized signing solution. It focuses on the scenarios and needs, and very specifically avoids any reference to other projects or implementations. As our working group forms a consensus on the requirements, the group will then transition to a spec.

## Non-Goals

- Notary v2 does not account for what the content represents or its lineage. Other efforts may attach additional content, and re-sign the superset of content to account for other scenarios.

## Key Stakeholders & Contributors

As we identify the requirements and constraints, a number of key contributors will be asked to represent their requirements and constraints.

> Please add companies, projects, and products that you believe should be included.

* Registry Cloud Operators
  * [Azure Container Registry (acr)][acr] - Steve Lasker <steve.lasker@microsoft.com> ([@stevelasker](http://github.com/stevelasker))
  * [Amazon Elastic Container Registry (ecr)][ecr] - Omar Paul <omarpaul@amazon.com>
  * [Docker Hub][docker-hub] - Justin Cormack justin.cormack@docker.com
  * [Google Container Registry (gcr)][gcr]
  * [GitHub Package Registry (gpr)][gpr]
  * [Quay][quay] - Joey Schorr jschorr@redhat.com
  * [IBM Cloud Container Registry (icr)][icr]
* Registry Vendors, Projects & Products
  * [Docker Trusted Registry][docker-dtr]
  * [Harbor][harbor]
  * [JFrog Artifactory][jfrog]
* Artifact Types
  * [OCI & Docker Container Images][image-spec]
  * [Helm Charts][helm-registry]
  * [Singularity][singularity]
  * Operator Bundles

## Scenarios

Notary v2 aims to solve the core issue of trusting content within, and across, registries. There are many elements of an end-to-end scenario that are not implemented by Notary v2, but rather enabled because the content is verifiable.

### End to End Orchestrator Scenario

To put Notary v2 in context, the following scenario is outlined. The blue elements are the scope of Notary v2, with the other elements providing generic references to other projects or products.

![Notary e2e Scenarios](./media/notary-e2e-scenarios.png)

### End to End Scenario: Build, Publish, Consume, Enforce Policy, Deploy

In a world of consuming public software, we must account for content that's acquired from a public source, moved into a trusted environment, then deployed. In this scenario, the consumer is not re-building or adding additional content.

1. The Wabbit Networks company builds their netmonitor software. As a result of the build, they produce an [OCI Image][oci-image], a Software Bill of Materials (`SBoM`), and, to comply with GPL licensing, another artifact which contains the source (`src`) of all the GPL-licensed projects. In addition to the `image`, `SBoM` and `src` artifacts, the build system produces an [OCI Index][oci-index] that encompasses the three artifacts. Each of the artifacts, and the `index`, are signed with Notary v2.
1. The index and its signed contents are pushed to a public OCI compliant registry.
1. ACME Rockets consumes the netmonitor software, importing the index and its referenced artifacts into their private registry.
1. The ACME Rockets environment enforces various company policies prior to any deployment, evaluating the content in the `SBoM`. The policy manager trusts that the content within the SBoM is accurate, because they trust artifacts signed with the wabbit-networks key. The `src` content isn't evaluated at deployment time and can be left within the registry.
1. Once the policy manager completes its validation, the deployment to the hosting environment is initiated. The `SBoM` is no longer needed, allowing the `image` to be deployed separately. A `deploy` artifact, referencing a specific configuration definition, may also be signed and saved, providing a historical record of what was deployed. The hosting environment also validates that content is signed by trusted entities.

**Implications of this requirement:**

- Signatures can be placed on any type of artifact stored in an OCI compliant registry using an [OCI Manifest][oci-manifest].
- Signatures can be placed on an [OCI Index][oci-index], allowing an entity to define a collection of artifacts.
- Signatures and their public keys can be moved within, and across, OCI compliant registries which support Notary v2.
- Because content is trusted, an ecosystem of other projects and products can leverage information in various formats.

### Scenario #1: Local Build, Sign, Validate

Prior to committing any code, a developer can test the "build, sign, validate" scenario:

1. Locally build a container image using a non-registry-specific `name:tag`, such as:
   `$ docker build -t net-monitor:dev .`
1. Locally sign `net-monitor:dev`.
1. Run the image on the developer's local machine, which is configured to only accept signed images:
   `$ docker run net-monitor:dev`

**Implications of this requirement:**

- The developer has access to signing keys. How they get the keys is part of the usability spec.
- The local environment has a policy by which it states the set of keys it accepts.
- The signing and validation of artifacts does not require a registry. The local host can validate the signature using the public keys it accepts.
- The key used for validation may be hosted in a registry, or other accessible location.
- The lack of a registry name does not imply docker.io as a default registry.
- Signing is performed on the artifacts that are pushed to a registry.
- The verification of the signature can occur without additional transformation or computation. If the artifact is expected to be compressed, the signature will be performed on the compressed artifact rather than the uncompressed content.

### Scenario #2: Sign, Rename, Push, Validate in Dev

Once the developer has locally validated the build, sign, validate scenario, they will push the artifact to a registry used for deployment to a dev environment.

1. Locally build and sign an artifact, such as the `net-monitor:abc123` container image.
1. Rename the artifact to reflect the registry it will be pushed to:
   `$ docker tag net-monitor:abc123 wabbitnetworks.example.com/networking/net-monitor:1.0`
   `$ docker push wabbitnetworks.example.com/networking/net-monitor:1.0`
1. Deploy the artifact to a cluster that requires signatures:
   `$ orchestrator apply -f deploy.yaml`
1. The orchestrator in the dev environment accepts any signed content, enabling it to trace where deployed artifacts originated from.

**Implications of this requirement:**

- Signatures can be verified based on the referenced `:tag`. The signature is linked to a unique manifest, and not tied to a specific `repo:tag` name.
- The artifact can be renamed from the unique build id `net-monitor:abc123` to a product-versioned tag `wabbitnetworks.example.com/networking/net-monitor:1.0` without invalidating the signature.
- Users may reference the `sha256` digest directly, or the `:tag`. While tag locking is not part of the [OCI Distribution Spec][oci-distribution], various registries support this capability, allowing users to reference human-readable tags, as opposed to long digests. Either reference is supported with Notary v2; however, it's the digest that is signed.
- Notary v2 supports a pattern for signing any type of artifact, from OCI Images, Helm Charts, and Singularity to yet-unknown types.
- Orchestrators may require signatures, but not enforce specific signatures. This enables a host to understand what content is deployed, without having to manage specific keys.

### Scenario #3: Automate Build, Sign, Push, Deploy to Prod, Verify

A CI system is triggered by a git commit. The system builds the artifacts, signs them, and pushes them to a registry. The production system pulls the artifacts, verifies the signatures, and runs them.

1. A CI solution responds to a git commit notification.
1. The CI system clones the git repo and builds the artifacts, with fully qualified names:
   **image**: `wabbitnetworks.example.com/networking/net-monitor:1.0-alpine`
   **deployment chart**: `wabbitnetworks.example.com/networking/net-monitor:1.0-deploy`
1. The CI system signs the artifacts with locally available keys.
1. The CI system creates a signed OCI Index, referencing the image and deployment charts:
   `wabbitnetworks.example.com/networking/net-monitor:1.0`
1. The index and its contents are pushed to a registry:
   `$ docker push wabbitnetworks.example.com/networking/net-monitor:1.0-alpine`
   `$ deploy-tool push wabbitnetworks.example.com/networking/net-monitor:1.0-deploy`
   `$ oci-tool push wabbitnetworks.example.com/networking/net-monitor:1.0`
1. The artifacts are deployed to a production orchestrator.
1. The orchestrator verifies the artifacts are signed by a set of specifically trusted keys. Unsigned artifacts, or artifacts signed by non-trusted keys, are rejected.

**Implications of this requirement:**

- Keys for signing are securely retrieved by build systems that create & destroy the environment each time.
- A specific set of keys may be required to pass validation.

### Scenario #4: Promote Artifacts Within a Registry, Using a Different Repo

A CI/CD system promotes validated artifacts from a dev repository to production repositories.

1. A CI/CD solution responds to a git commit notification; cloning, building, signing, and pushing the artifacts to a development repo within their registry.
1. As the CI/CD solution runs functional tests, determining the artifacts are ready for production, the artifacts are moved from one repo to another:
   `$ docker tag myregistry.example.com/dev/alpha-team/web:1abc myregistry.example.com/prod/web:1abc`

### Scenario #4.1: Archive Artifacts Within a Registry, Using a Different Repo

Once artifacts are no longer running in production, they are archived for a period of months. They are moved out of the production registry or repo, as they must be maintained in the state they were run for compliance requirements. However, they should not be flagged with vulnerabilities or occupy space in the production-configured repo or registry.

1. A lifecycle management solution moves artifacts from production repositories to archived repositories and/or registries.

**Implications of this requirement:**

- Renaming maintains artifact signatures.
- Artifact copy, or movement to a different repository, maintains the signatures.

### Scenario #5: Validate Artifact Signatures Within Restricted Networks

ACME Rockets runs secure production environments, limiting all external network traffic. To assure the wabbit-networks network monitor software has valid signatures, they will need to trust a resource within their network to proxy key requests.

1. ACME Rockets acquires network monitoring software, copying it to their firewall-protected production environment.
1. As part of the artifact copy, they will copy/proxy the signature validation to trusted resources within their network-protected environment.

**Implications of this requirement:**

- In this scenario, the wabbit-networks signature must be validated within the ACME Rockets network. How this is done is open for design; however, the requirement states the signature must be validated without external access. When the artifact is copied to the private/network-restricted registry, the signature may need to be copied, and is assumed to be trusted if available in the trusted server within the private network. How ACME Rockets would copy/proxy the signatures is part of the design and UX for a secure, but usable, pattern.

### Scenario #6: Multiple Signatures

Customers may require multiple signatures for the following scenarios:

- Validate the artifact is the same as what the vendor provided.
- Secondarily sign the artifact by the consuming company, attesting to its validity within their production environment.
- Signatures represent validations through different dev, staging, and production environments.
- Dev environments support any signature, while integration and production environments require mycompany-prod signatures.

#### Scenario 6.1: Dev and Prod Keys

1. A CI/CD solution builds, signs, pushes, and deploys a collection of artifacts to a staging environment.
1. Once integration tests are completed, the artifacts are signed with a production signature, copying them to a production registry or production set of repositories.
1. The integration and production orchestrators validate the artifacts are signed with production keys.

#### Scenario 6.2: Approved Vendor/Project Artifacts

A deployment requires a mydb image. The mydb image is routinely updated for security vulnerabilities. ACME Rockets references stable version tags (`mydb:1.0`), assuring they get newly patched builds, but they must verify each new version to be compatible with their environment.

1. The `mydb:1.0` image is acquired from a public registry and imported into private integration registries.
1. Functional testing is run in the integration environment, verifying the patched `mydb:1.0` image is compatible.
1. The `mydb:1.0` image is tagged with a unique id `mydb:1.0-202002131000` and signed with an ACME Rockets production key.
1. The retagged image, with both the mydb and ACME Rockets signatures, is copied to a prod registry/repository.
1. The release management system deploys the new `mydb:1.0-202002131000` image.
1. The production orchestrator validates it's signed with the ACME Rockets production key.

**Implications of this requirement:**

- Multiple signatures, including signatures from multiple sources, can be associated with a specific artifact.
- Original signatures are maintained, even if the artifact is re-tagged.
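The rename and multiple-signature implications above hinge on one point: the signature subject is the manifest's content digest, not the `repo:tag` name. A minimal sketch of that idea follows; the manifest body and key are fabricated for illustration, and HMAC stands in for a real asymmetric signature scheme only to keep the sketch stdlib-only.

```python
import hashlib
import hmac

def manifest_digest(manifest_bytes: bytes) -> str:
    # An OCI digest is the sha256 of the manifest bytes, not of any tag name.
    return "sha256:" + hashlib.sha256(manifest_bytes).hexdigest()

def sign(digest: str, key: bytes) -> str:
    # Stand-in for a real signing scheme held by the publisher (hypothetical key).
    return hmac.new(key, digest.encode(), hashlib.sha256).hexdigest()

def verify(digest: str, signature: str, key: bytes) -> bool:
    return hmac.compare_digest(sign(digest, key), signature)

# Fabricated manifest body; real OCI manifests are canonical JSON documents.
manifest = b'{"schemaVersion": 2, "layers": []}'
key = b"wabbit-networks-demo-key"

digest = manifest_digest(manifest)
sig = sign(digest, key)

# Retagging net-monitor:abc123 -> .../net-monitor:1.0 changes only the tag;
# both tags resolve to the same manifest bytes, so the signature stays valid.
tags = {
    "net-monitor:abc123": digest,
    "wabbitnetworks.example.com/networking/net-monitor:1.0": digest,
}
assert all(verify(d, sig, key) for d in tags.values())

# Tampered content yields a different digest, so verification fails.
assert not verify(manifest_digest(b'{"schemaVersion": 2}'), sig, key)
```

The same digest-first design is what lets multiple parties (vendor and consumer) each attach their own signature to one artifact without interfering with each other.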

That's a great point. My text should indicate that it provides access to keys stored in hardware if they exist. I certainly don't mean to imply there must (or even should) be a HSM. I more meant that it should not be assumed to adequately protect a key in this case.

SteveLasker

comment created 4 hours ago

Pull request review comment: notaryproject/requirements

End to end scenarios, accounting for PR #1 feedback


#### Scenario 7: A repository compromise occurs

An attacker manages to compromise a repository and gains access to all keys on the repository. This includes the ability to sign artifacts using keys stored in the repository's HSM.

**Implications of this requirement:**

1. The potential damage/risk to users in this case must be limited.
2. There must be a secure way for users to recover to a known, secure state and verify this has occurred even in the face of an attacker that can act as a man-in-the-middle on the network.
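One way to read the first requirement is that a host-side trust policy must be able to rotate and revoke keys, so signatures made with a compromised key stop being accepted. This is only a sketch under assumed names (the key IDs and artifact names are hypothetical); a real design also needs secure, verifiable distribution of the trusted/revoked key sets, e.g. via a signed root of trust, which is exactly the hard part of the recovery requirement.

```python
# Hedged sketch: host policy pinning trusted signing keys and rejecting
# artifacts once a key is revoked after a repository compromise.
TRUSTED_KEYS = {"wabbit-prod-2020"}   # keys the host currently accepts (hypothetical IDs)
REVOKED_KEYS = {"wabbit-prod-2019"}   # rotated out after the compromise

def accept(artifact: str, signing_key_id: str) -> bool:
    if signing_key_id in REVOKED_KEYS:
        # Limit damage: signatures made with the compromised key die with it.
        return False
    return signing_key_id in TRUSTED_KEYS

assert accept("net-monitor:1.0", "wabbit-prod-2020")
assert not accept("net-monitor:1.0", "wabbit-prod-2019")  # signed pre-revocation
assert not accept("net-monitor:1.0", "unknown-key")
```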

I'll add more of these later, but wanted feedback here first.

SteveLasker

comment created a day ago

Pull request review commentnotaryproject/requirements

End to end scenarios, accounting for PR #1 feedback

+# Notary Signing - Scenarios++As containers and cloud native artifacts become the common unit of deployment, users want to know the artifacts in their environments are authentic and unmodified. ++These Notary v2 scenarios define end-to-end scenarios for signing artifacts in a generalized way, storing and moving them between OCI compliant registries, validating them with various artifact hosts and tooling. Notary v2 focuses on the signing of content, enabling e2e workflows, without specifying what those workflows must be.++By developing a generalized solution, artifact authors may develop their unique artifact types, allowing them to leverage Notary for signing and OCI Compliant registries for distribution.++## OCI Images & Artifacts++The [OCI TOB][oci-tob] has adopted [OCI Artifacts][artifacts-repo], generalizing container images as one of many types of artifacts that may be stored in a registry. Other artifact types currently include:++* [Helm Charts][helm-registry]+* [Singularity][singularity]+* Car firmware updates, deployed from OCI Artifact registries++## Goals++This document serves as the requirements and constraints of a generalized signing solution. It focuses on the scenarios and needs, and very specifically avoids any reference to other projects or implementations. As our working group forms a consensus on the requirements, the group will then transition to a spec.++## Non-Goals++- Notary v2 does not account for what the content represents or its lineage. Other efforts may attach additional content, and re-sign the super set of content to account for other scenarios. 
## Key Stake Holders & Contributors

As we identify the requirements and constraints, a number of key contributors will be asked to represent their requirements and constraints.

> Please add companies, projects, products that you believe should be included.

* Registry Cloud Operators
  * [Azure Container Registry (acr)][acr] - Steve Lasker <steve.lasker@microsoft.com> ([@stevelasker](http://github.com/stevelasker))
  * [Amazon Elastic Container Registry (ecr)][ecr] - Omar Paul <omarpaul@amazon.com>
  * [Docker Hub][docker-hub] - Justin Cormack justin.cormack@docker.com
  * [Google Container Registry (gcr)][gcr]
  * [GitHub Package Registry (gpr)][gpr]
  * [Quay][quay] - Joey Schorr jschorr@redhat.com
  * [IBM Cloud Container Registry (icr)][icr]
* Registry Vendors, Projects & Products
  * [Docker Trusted Registry][docker-dtr]
  * [Harbor][harbor]
  * [JFrog Artifactory][jfrog]
* Artifact Types
  * [OCI & Docker Container Images][image-spec]
  * [Helm Charts][helm-registry]
  * [Singularity][singularity]
  * Operator Bundles

## Scenarios

Notary v2 aims to solve the core issue of trusting content within, and across, registries. There are many elements of an end-to-end scenario that are not implemented by Notary v2; rather, they are enabled because the content is verifiable.

### End to End Orchestrator Scenario

To put Notary v2 in context, the following scenario is outlined. The blue elements are the scope of Notary v2, with the other elements providing generic references to other projects or products.

![Notary e2e Scenarios](./media/notary-e2e-scenarios.png)

### End to End Scenario: Build, Publish, Consume, Enforce Policy, Deploy

In a world of consuming public software, we must account for content that's acquired from a public source, moved into a trusted environment, then deployed. In this scenario, the consumer is not re-building or adding additional content.

1. The Wabbit Networks company builds their netmonitor software. As a result of the build, they produce an [OCI Image][oci-image], a Software Bill of Materials (`SBoM`) and, to comply with GPL licensing, another artifact which contains the source (`src`) to all the GPL licensed projects. In addition to the `image`, `SBoM` and `src` artifacts, the build system produces an [OCI Index][oci-index] that encompasses the three artifacts. Each of the artifacts, and the `index`, are signed with Notary v2.
1. The index and its signed contents are pushed to a public OCI compliant registry.
1. ACME Rockets consumes the netmonitor software, importing the index and its referenced artifacts into their private registry.
1. The ACME Rockets environment enforces various company policies prior to any deployment, evaluating the content in the `SBoM`. The policy manager trusts that the content within the `SBoM` is accurate, because they trust artifacts signed with the wabbit-networks key. The `src` content isn't evaluated at deployment time and can be left within the registry.
1. Once the policy manager completes its validation, the deployment to the hosting environment is initiated. The `SBoM` is no longer needed, allowing the `image` to be deployed separately. A `deploy` artifact, referencing a specific configuration definition, may also be signed and saved, providing a historical record of what was deployed. The hosting environment also validates that content is signed by trusted entities.

**Implications of this requirement:**

- Signatures can be placed on any type of artifact stored in an OCI compliant registry using an [OCI Manifest][oci-manifest]
- Signatures can be placed on an [OCI Index][oci-index], allowing an entity to define a collection of artifacts.
- Signatures and their public keys can be moved within, and across, OCI compliant registries which support Notary v2.
- Because content is trusted, an ecosystem of other projects and products can leverage information in various formats.

### Scenario #1: Local Build, Sign, Validate

Prior to committing any code, a developer can test the "build, sign, validate" scenario:

1. Locally build a container image using a non-registry specific `name:tag`, such as:
   `$ docker build net-monitor:dev`
1. Locally sign `net-monitor:dev`
1. Run the image on the developer's local machine, which is configured to only accept signed images:
   `$ docker run net-monitor:dev`

**Implications of this requirement:**

- The developer has access to signing keys. How they get the keys is part of the usability spec.
- The local environment has a policy by which it states the set of keys it accepts.
- The signing and validation of artifacts does not require a registry. The local host can validate the signature using the public keys it accepts.
- The key used for validation may be hosted in a registry, or other accessible location.
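The local sign-then-validate loop in Scenario #1 can be sketched end to end without a registry. This is a minimal, hypothetical illustration, not the Notary v2 design: real signing would use asymmetric keys, while this stdlib-only sketch substitutes an HMAC over the artifact digest, and the names (`sign_artifact`, `validate_artifact`) are invented for the example.

```python
import hashlib
import hmac

def sign_artifact(artifact: bytes, signing_key: bytes) -> dict:
    # Digest the artifact, then "sign" the digest. HMAC is a stand-in here;
    # a real implementation would produce an asymmetric signature.
    digest = hashlib.sha256(artifact).hexdigest()
    sig = hmac.new(signing_key, digest.encode(), hashlib.sha256).hexdigest()
    return {"digest": digest, "signature": sig}

def validate_artifact(artifact: bytes, sig_doc: dict, accepted_keys: list) -> bool:
    # The host's policy is simply the set of keys it accepts; validation
    # needs no registry, matching the implications listed above.
    digest = hashlib.sha256(artifact).hexdigest()
    if digest != sig_doc["digest"]:
        return False
    return any(
        hmac.compare_digest(
            sig_doc["signature"],
            hmac.new(k, digest.encode(), hashlib.sha256).hexdigest(),
        )
        for k in accepted_keys
    )

image = b"net-monitor:dev layer bytes"
key = b"wabbit-networks-key"
sig = sign_artifact(image, key)
assert validate_artifact(image, sig, [key])
assert not validate_artifact(image, sig, [b"untrusted-key"])
```

The point of the sketch is the trust boundary: the artifact, its signature, and the accepted-key policy are all that the validating host needs.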

This feels more like you're proposing a solution than talking about scenarios / requirements. Is this appropriate for the document?

The Implications sections in general seem to be a mix of actual implications / requirements intertwined with details from a proposed implementation. Can we separate these out so that it is just requirements? It's weird to read about details of a proposed implementation with no description.

SteveLasker

comment created time in a day

Pull request review comment notaryproject/requirements

End to end scenarios, accounting for PR #1 feedback


It would be good to have an explicit threat model. I've plopped in a draft to help get things started.

## Threat model

It is assumed that an attacker may perform one or more of the following actions:

1. intercept and alter network traffic
2. compromise some set of weak crypto algorithms which are supported in some legacy cases
3. compromise a repository, including gaining access to use any keys stored on the repository
4. compromise a signing key, for example due to malicious action or accidental disclosure by the key owner
5. compromise a step in the software supply chain.  This can happen in many different ways, such as by gaining access to the server, compromising the software used in the step of the supply chain, passing different software to a subsequent step than what was intended, or causing an operator to make an error in a step. 

While it is not always possible to protect against all of these scenarios, the system should, to the extent possible, mitigate and/or reduce the damage caused by a successful attack, detect the occurrence of an attack and notify the appropriate parties, and yet remain usable for the parties operating the system. Furthermore, the system should recover from successful attacks in a way that presents low operational overhead and risk to users.
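One concrete mitigation for threat 2 (weak legacy crypto) is that a verifier should keep an explicit allow-list of algorithms and refuse content offered only under weaker ones. The sketch below is hypothetical (the function name and the allow-list contents are illustrative policy choices, not values from any spec):

```python
import hashlib

# Policy choice for this sketch, not a normative value.
ALLOWED_HASH_ALGORITHMS = {"sha256", "sha512"}

def verify_hashes(content: bytes, hashes: dict) -> bool:
    # Filter out entries for algorithms outside the allow-list, rather than
    # letting an attacker steer verification onto a weak legacy algorithm.
    usable = {a: h for a, h in hashes.items() if a in ALLOWED_HASH_ALGORITHMS}
    if not usable:
        raise ValueError("only weak/unknown hash algorithms offered")
    return all(
        hashlib.new(algo, content).hexdigest() == expected
        for algo, expected in usable.items()
    )

blob = b"artifact bytes"
good = {"sha256": hashlib.sha256(blob).hexdigest(),
        "md5": "ignored-weak-digest"}  # md5 entry is filtered out, not trusted
assert verify_hashes(blob, good)
```

Refusing (rather than silently downgrading) keeps a network man-in-the-middle from stripping the strong entries and substituting weak ones.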

SteveLasker

comment created time in a day

issue comment theupdateframework/specification

Secondary literature with detailed rationale and recommendations

This is a good suggestion. We have a document like this for the automotive variant of TUF (Uptane) called the Deployment Considerations (e.g., see part of it here: https://uptane.github.io/deployment-considerations/repositories.html).

We should think about how to get relevant information back into TUF.

On Fri, Feb 14, 2020 at 9:41 AM Joshua Lock notifications@github.com wrote:

It could be valuable for potential adopters of TUF if there were some documentation beyond the specification, published papers and conversations captured on GitHub, that goes into detail about certain decisions, makes recommendations where the specification deliberately leaves things open and points to open implementations of the specification (i.e. Notary and PEP 458) as examples of the context for the various decisions that must be made when applying the TUF specification to a scenario.

The spec is a good document but provides several points where choices must be made without providing any explanation or guidance.

The papers which motivated various spec decisions and changes provide interesting reading but can be a little dense when trying to understand a nuance of the specification where the context for a decision may be difficult to elicit and, furthermore, the papers are a static document, unlike the specification itself.

In contrast to the specification, which should only say “do XYZ”, this additional document could say things like “do XYZ because foo, bar, baz” or “do X if your situation is quux (akin to projects ABC) or do Y if it is thud (akin to projects DEF)”.

cc @lukpueh https://github.com/lukpueh


joshuagl

comment created time in 3 days

push event JustinCappos/checkapi

Justin Cappos

commit sha 2508c414869eda3479e1384b1bea65ec1e749d3b

README updates


push time in 4 days

push event JustinCappos/checkapi

Justin Cappos

commit sha 209f26c50f78011b0ea50a9cde4cdf6c1b790839

CheckAPI source code and LICENSE


push time in 4 days

create branch JustinCappos/checkapi

branch : master

created branch time in 4 days

created repository JustinCappos/checkapi

CheckAPI project software

created time in 4 days

push event JustinCappos/netcheck

Justin Cappos

commit sha 35daa755bcea9c81d08238bf4db22cf6be9713aa

README update


push time in 4 days

push event JustinCappos/netcheck

Justin Cappos

commit sha 68278a43af4e2252d0f05b27dc66f250ac02cf43

Traces and code from NetCheck


push time in 4 days

create branch JustinCappos/netcheck

branch : master

created branch time in 4 days

created repository JustinCappos/netcheck

This is our code and data from the NetCheck paper

created time in 4 days

push event JustinCappos/vsn

Justin Cappos

commit sha 406adbee2214eab1a41de2b0628960ce243b9e7b

README clarification


push time in 4 days

push event JustinCappos/vsn

Justin Cappos

commit sha b35d8df10bb86f8c3b84c7097a742996af5613e3

VSN main code and LICENSE


push time in 4 days

create branch JustinCappos/vsn

branch : master

created branch time in 4 days

created repository JustinCappos/vsn

Virtual Secure Network repository

created time in 4 days

push event JustinCappos/uppir

Justin Cappos

commit sha 2a2fc435d5e21138f7f4543c6b3588d6529f4619

README update


push time in 4 days

push event JustinCappos/uppir

Justin Cappos

commit sha c35ab8e2651edb95c6f1d8c16f9fe5ffe28e5fa6

initial code add


Justin Cappos

commit sha 64d6ff9ab0d1c16d5e7181bd64f8f68999c1ec3f

adding license


push time in 4 days

create branch JustinCappos/uppir

branch : master

created branch time in 4 days

created repository JustinCappos/uppir

UPPIR source code

created time in 4 days

Pull request review comment uptane/uptane-standard

Partial verification only requires the Director's Targets metadata.

 ECUs MUST have a secure source of time. An OEM/Uptane implementor MAY use any ex
 For an ECU to be capable of receiving Uptane-secured updates, it MUST have the following data provisioned at the time it is manufactured or installed in the vehicle:

 1. A sufficiently recent copy of required Uptane metadata at the time of manufacture or install. See the Uptane Deployment Considerations ({{DEPLOY}}) for more information.
-    * Partial verification ECUs MUST have the Root and Targets metadata from the Director repository.
+    * Partial verification ECUs MUST have the Targets metadata from the Director repository.
     * Full verification ECUs MUST have a complete set of metadata (Root, Targets, Snapshot, and Timestamp) from both repositories, as well as the repository mapping metadata ({{repo_mapping_meta}}).

For full verification, you want the ECU to have the snapshot metadata because this prevents rollback / replay attacks. You don't need all of the targets metadata.

For partial verification, I think it is better to have a version of targets to reduce the scope of rollback / replay of targets metadata. This is specifically impactful for metadata that isn't directed specifically at the vehicle.

patrickvacek

comment created time in 4 days

push event secure-systems-lab/ssl-site

Justin Cappos

commit sha 9953c2f1629d6168b8caaeaf09391ed27ec38fec

dashlane press


push time in 6 days

Pull request review comment theupdateframework/specification

Clarify rollback attack prevention and fast-forward attack recovery

 non-volatile storage as FILENAME.EXT.
     metadata file, discard it, abort the update cycle, and report the potential
     rollback attack.

-    * **3.3.3**. The version number of the targets metadata file, and all
-    delegated targets metadata files (if any), in the trusted snapshot metadata
-    file, if any, MUST be less than or equal to its version number in the new
-    snapshot metadata file. Furthermore, any targets metadata filename that was
-    listed in the trusted snapshot metadata file, if any, MUST continue to be
-    listed in the new snapshot metadata file.  If any of these conditions are
-    not met, discard the new snaphot metadadata file, abort the update cycle,
-    and report the failure.
+    * **3.3.3**. The version number of the top-level targets metadata file, in
+    the trusted snapshot metadata file, if any, MUST be less than or equal to
+    its version number in the new snapshot metadata file. Furthermore, any
+    targets metadata filename that was listed in the trusted snapshot metadata
+    file, if any, MUST continue to be listed in the new snapshot metadata file.

Do you mean for the targets metadata files you previously downloaded or somehow checking for all targets files, even those that were not downloaded?
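For reference, the stricter reading (check every targets filename listed in the trusted snapshot, downloaded or not) can be made concrete with a small sketch. The dictionary shape `{filename: version}` is purely illustrative, not the metadata wire format:

```python
# Hedged sketch of the 3.3.3-style check: compare the trusted snapshot's view
# of targets metadata against the new snapshot's view.
def check_snapshot_rollback(trusted: dict, new: dict) -> None:
    for filename, trusted_version in trusted.items():
        # Every previously listed targets file must stay listed...
        if filename not in new:
            raise ValueError(f"{filename} removed from snapshot: possible rollback")
        # ...and its version must not decrease.
        if new[filename] < trusted_version:
            raise ValueError(f"{filename} version decreased: possible rollback")

trusted = {"targets.json": 4, "a.json": 2}
check_snapshot_rollback(trusted, {"targets.json": 5, "a.json": 2})  # accepted
try:
    check_snapshot_rollback(trusted, {"targets.json": 5})  # a.json vanished
    raise AssertionError("expected rejection")
except ValueError:
    pass
```

Under the weaker reading, the loop would iterate only over files the client had actually downloaded.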

lukpueh

comment created time in 6 days

issue comment pypa/warehouse

Roadmap update for TUF support

[FYI: I'm CCing the post authors in case they want to weigh in here.]

I'd say that you could get some value by combining them, but you have to weigh if it's more important to 1) have a list of released packages that is somewhat harder to change, but has a large potential for damage and no secure way to recover (TL), or 2) be able to restrict damage when there is a compromise and securely recover from such an attack (TUF).

You could use both together (which has higher operational overhead than either separately) and see security benefits. However, I'd certainly argue TUF provides the far more important protections.

I'm not aware of any of the TUF deployments https://theupdateframework.com/adoptions/ which have chosen to also use TL.

On Tue, Feb 11, 2020 at 8:45 AM Thomas Grainger notifications@github.com wrote:

the blog https://ssl.engineering.nyu.edu/blog/2020-02-03-transparent-logs seems to imply that it's a good idea to use TUF+TL at the same time


LucidOne

comment created time in 6 days

pull request comment theupdateframework/specification

Clarify timestamp.json METAFILES format

Snapshot doesn't have hashes because the size becomes quite large overall and our Mercury work shows that the version number is even more valuable to have than the hash in most cases.

For timestamp, it's a single entry so I think doesn't matter in the same way...

On Mon, Feb 10, 2020 at 12:54 PM Trishank Karthik Kuppusamy <notifications@github.com> wrote:

My off-the-cuff thoughts are: having a version number is good, having a length is good, having a hash is good, and having all three seems to be the best. I believe it may be possible to drop one or more (but we would want to think very carefully about this), but is there a reason not to list all three?

Yeah, but shouldn't the snapshot also list all three by that logic? Don't we accept some tradeoff in security for b/w performance there? Here there is no such demand, but I find all the scenarios above fairly contrived. Having said that, there's absolutely no reason to drop all 3 in timestamp except for some aesthetic consistency.


joshuagl

comment created time in 7 days

push event secure-systems-lab/ssl-site

Trishank K Kuppusamy

commit sha 5dd728d5800daa741b9ba795b163e01aa4055969

minor fixes


Justin Cappos

commit sha 40e71be8c8e75d8d5286c3dc6dba1a789c3e3499

Merge pull request #102 from trishankatdatadog/trishankatdatadog/minor-fixes Transparent Logs and TUF: minor fixes


push time in 9 days

push event secure-systems-lab/ssl-site

Trishank K Kuppusamy

commit sha 0c010784ea51f6883883a4f66cbd7745248cce18

Merge remote-tracking branch 'upstream/master'


Trishank K Kuppusamy

commit sha b14c6304048ebc5b3d2a42ccbe9cdf51e2b15288

fix order of sections


Justin Cappos

commit sha 5f4bd37cd5ba665b5f6518947ef03fd2d327ba94

Merge pull request #101 from trishankatdatadog/trishankatdatadog/fix-order Transparent logs and TUF: fix order of sections


push time in 10 days

push event secure-systems-lab/ssl-site

Trishank K Kuppusamy

commit sha d0fc68d82d535122d6a0c35f57210b7ebfab446e

Merge remote-tracking branch 'upstream/master'


Trishank K Kuppusamy

commit sha f51876a11ce6c1d1f1c503701882ad66f91c39fd

Merge remote-tracking branch 'upstream/master'


Trishank K Kuppusamy

commit sha 0c565be0f72fcf09059c4ef0c1c17a6dc985d57b

clarify subtle diff in compromise recovery


Justin Cappos

commit sha fdb94057c99d34cd0f2a1028c8e5e2812a5522b7

Merge pull request #100 from trishankatdatadog/trishankatdatadog/transparent-logs-compromise-recovery Transparent logs: clarify compromise recovery


push time in 10 days

PR merged secure-systems-lab/ssl-site

Transparent logs: clarify compromise recovery

Fix #95

@FiloSottile and @JustinCappos, would you both please review?

+11 -8

2 comments

2 changed files

trishankatdatadog

pr closed time in 10 days

issue closed secure-systems-lab/ssl-site

Transparent Logs and TUF: clarify compromise recovery

From @FiloSottile:

"About compromise recovery, our story is very simple: there is always a client involved and that client has an update mechanism (Go releases, for us), so if there is a compromise we'd roll the tree key, and make a release."

"...I think most TL systems can build PEP 458-level compromise recovery over their existing software update channel."

closed time in 10 days

trishankatdatadog

pull request comment theupdateframework/specification

Clarify timestamp.json METAFILES format

I do feel more nervous about removing the hash and length from timestamp. The length of snapshot could increase dramatically as new targets are added. It could also decrease after a rotation of the snapshot key and clean up of outdated targets files.

I'll list a (semi-contrived) situation where version number, hash, and length are all important to have.

  1. If missing the length, but with a hash and version number. An attacker can launch an endless data attack more easily without compromising any keys.
  2. If missing the hash, but with a length and version number. An attacker which has stolen the timestamp and snapshot key and who can answer client requests (e.g., a compromised repo), can give different clients the same timestamp but different snapshot metadata. This may allow bifurcation of client requests without doing things like rapidly increasing the version number or giving different timestamp files, both of which would be more noticeable. (This is fairly contrived. I'm not really sure the impact here, but it seems worrying.)
  3. If missing the version number, but with a hash and a length. An attacker that has compromised the timestamp only, may generate a timestamp file for a future time that lists an old snapshot file. A client updating later, but before the future time, would not download the new snapshot file, thus causing a freeze attack without the attacker needing to maintain control of the repo. (The repo could rotate timestamp to fix this.)

My off-the-cuff thoughts are: having a version number is good, having a length is good, having a hash is good, and having all three seems to be the best. I believe it may be possible to drop one or more (but we would want to think very carefully about this), but is there a reason not to list all three?
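The three cases above map directly onto three checks a client would run on the snapshot file using the timestamp's METAFILES entry. The sketch below is illustrative only (the dict layout is hypothetical, not the wire format):

```python
import hashlib
import json

def verify_snapshot(snapshot_bytes: bytes, meta: dict) -> dict:
    # Case 1: a declared length bounds the download, blocking endless data.
    if len(snapshot_bytes) > meta["length"]:
        raise ValueError("snapshot larger than declared length")
    # Case 2: a hash pins the exact bytes, blocking bifurcation via
    # different snapshots served under the same timestamp.
    if hashlib.sha256(snapshot_bytes).hexdigest() != meta["hashes"]["sha256"]:
        raise ValueError("snapshot hash mismatch")
    snapshot = json.loads(snapshot_bytes)
    # Case 3: a version number blocks a timestamp that points at an old
    # snapshot to freeze clients.
    if snapshot["version"] != meta["version"]:
        raise ValueError("snapshot version disagrees with timestamp")
    return snapshot

snap = json.dumps({"version": 7}).encode()
meta = {"version": 7, "length": len(snap),
        "hashes": {"sha256": hashlib.sha256(snap).hexdigest()}}
assert verify_snapshot(snap, meta)["version"] == 7
```

Dropping any one field removes exactly one of these three rejections, which is the trade-off being debated.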

joshuagl

comment created time in 10 days

pull request comment Lind-Project/native_client

Exec args fix

Can we add a note indicating your assumption to the code here? This may save someone else (possibly even you) a debugging headache in the future...

On Fri, Feb 7, 2020 at 12:27 PM Nicholas Renner notifications@github.com wrote:

Merged #19 https://github.com/Lind-Project/native_client/pull/19 into develop.


rennergade

comment created time in 10 days

push event secure-systems-lab/ssl-site

Trishank K Kuppusamy

commit sha 1b1b179daeec823241bf4ee288e4f3df6166fdd5

clarify


Trishank K Kuppusamy

commit sha a6ba2aed6ddcab684eb29c321d718969322e48db

make it a footnote to avoid digression


Trishank K Kuppusamy

commit sha 392a715e11ae8594cf2669c442fac1a924b97395

make it a footnote


Trishank K Kuppusamy

commit sha 75e49fab78e293e4425891a9772688e4ea5d2e32

add link to issue


Trishank K Kuppusamy

commit sha c4e745e487e434e6d2bb0fd165e7d0191f69fa7e

add where trust is being removed from


Justin Cappos

commit sha f3e7945c0439d09e6fd6c98adbd4792a5695d723

Merge pull request #98 from trishankatdatadog/trishankatdatadog/transparent-logs-clarify-trust Transparent logs & TUF: clarify trust


push time in 10 days

issue closed secure-systems-lab/ssl-site

Transparent Logs and TUF: clarify removing trust

From @FiloSottile:

"Also, we think about auditing in terms of trust, rather than just compromise: a major goal for us was to make sure the community wouldn't have to trust Google blindly."

"I think that's what you put under the umbrella of third party auditing and immutable log, but I think of those as means to an end, which is removing trust."

closed time in 10 days

trishankatdatadog

issue comment secure-systems-lab/ssl-site

Transparent Logs and TUF: clarify compromise recovery

The keys used to revoke trust in TUF are not the same ones used to sign a new release. They are only used when revoking trust. You don't need to update the TUF software to change the keys used to sign a new release.

On Wed, Feb 5, 2020 at 10:12 AM Trishank Karthik Kuppusamy <notifications@github.com> wrote:

I think I'm still missing the semantic difference (that is, beyond where and how the keys are stored). The online keys in TUF are the tree signing keys in the sumdb, the root keys in TUF are the release signing keys in Go, all keys rotate permanently upon update (the signatures by the old untrusted key would be just ignored by updated clients), and if you have a TLS connection to an entity you trust you can pull it off securely.

I think one difference is that in sumdb, you need to update the client to permanently switch root keys, whereas in TUF (PEP 458, for apples-to-apples comparison), you don't need to update the client at all. I'll have to think if other differences exist.


trishankatdatadog

comment created time in 12 days

Pull request review comment theupdateframework/specification

Clarify rollback attack prevention and fast-forward attack recovery

 repo](https://github.com/theupdateframework/specification/issues).
   cycle, report the potential freeze attack.  On the next update cycle, begin
   at step 0 and version N of the root metadata file.

-  * **1.9**. **If the timestamp and / or snapshot keys have been rotated, then
-  delete the trusted timestamp and snapshot metadata files.** This is done in
-  order to recover from fast-forward attacks after the repository has been
-  compromised and recovered. A _fast-forward attack_ happens when attackers
-  arbitrarily increase the version numbers of: (1) the timestamp metadata, (2)
-  the snapshot metadata, and / or (3) the targets, or a delegated targets,
-  metadata file in the snapshot metadata. Please see [the Mercury
+  * **1.9**. **Fast-forward attack recovery** A _fast-forward attack_ happens
+  when attackers arbitrarily increase the version numbers in any of the
+  timestamp, snapshot, targets, or delegated targets metadata. To recover from
+  fast-forward attacks after the repository has been compromised and recovered,
+  certain metadata files need to be deleted as specified in this section.
+  Please see [the Mercury
   paper](https://ssl.engineering.nyu.edu/papers/kuppusamy-mercury-usenix-2017.pdf)
   for more details.

+    * **1.9.1**. **Targets recovery** If a threshold of targets keys have been
+    removed in the new trusted root metadata compared to the previous trusted
+    root metadata, delete the old top-level targets and snapshot metadata
+    files.

Okay, after talking with @lukpueh , I think we're on the same page. I was misremembering that something like the keyid is listed in the file name for a delegated targets role. We do agree that cases where you know a fast forward attack did not occur (such as situations where a repository compromise did not happen) do not need to have the snapshot, etc. metadata deleted.
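A minimal sketch of the 1.9.1-style recovery rule being discussed, under the interpretation that "a threshold of targets keys have been removed" means the count of removed keyids meets the role's threshold. The root/role dictionary shapes and function name are illustrative only:

```python
# If enough targets keys were removed between the previous and new trusted
# root, discard cached targets/snapshot metadata so fast-forwarded version
# numbers cannot block future legitimate updates.
def recover_from_fast_forward(old_root: dict, new_root: dict, cache: dict) -> None:
    old_keys = set(old_root["roles"]["targets"]["keyids"])
    new_keys = set(new_root["roles"]["targets"]["keyids"])
    removed = old_keys - new_keys
    if len(removed) >= old_root["roles"]["targets"]["threshold"]:
        cache.pop("targets.json", None)
        cache.pop("snapshot.json", None)

old = {"roles": {"targets": {"keyids": ["k1", "k2"], "threshold": 1}}}
new = {"roles": {"targets": {"keyids": ["k3"], "threshold": 1}}}
cache = {"targets.json": b"...", "snapshot.json": b"...", "root.json": b"..."}
recover_from_fast_forward(old, new, cache)
assert "targets.json" not in cache and "root.json" in cache
```

Consistent with the comment above: when no keys were removed (no suspected compromise), the condition is false and nothing is deleted.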

lukpueh

comment created time in 12 days

pull request comment cncf/sig-security

Some additional categories to consider (#1)

FYI: There is an on-going discussion about taking a different approach with the landscape document. We (@lumjjb) will discuss and present in an upcoming meeting and try to take your edits into account.

gadinaor

comment created time in 13 days

Pull request review comment theupdateframework/specification

Add specification versioning- and release management instructions and checks

 Versioning
 ----------

 The TUF specification uses `Semantic Versioning 2.0.0 <https://semver.org/>`_
-for its version numbers.
+(semver) for its version numbers, and a gitflow-based release management:
+
+- The 'master' branch of this repository always points to the latest stable
+  version of the specification.
+- The 'draft' branch of this repository always points to the latest development
+  version of the specification and must always be based off of the latest
+  'master' branch.
+- Contributors must submit changes as pull requests against these branches,
+  depending on the type of the change (see semver rules).
+- For patch-type changes, pull requests may be submitted directly against the
+  'master' branch.
+- For major- and minor-type changes, pull requests must be submitted against
+  the 'draft' branch.

It's slightly odd to me that the person sending a PR doesn't indicate minor vs major. Also, it sounds like those are getting intertwined in a way we may not want. What if we want to push a release with some backwards compat (minor version) changes but not others that are not backwards compatible? Do we disentangle these changes at that time?

lukpueh

comment created time in 13 days

Pull request review comment theupdateframework/specification

Add specification versioning- and release management instructions and checks


I'd prefer to have a model where there are major, minor, and patch branches and they interrelate in the way your master and draft branches do.

I like the overall approach though!

lukpueh

comment created time in 13 days

Pull request review comment theupdateframework/specification

Add specification versioning- and release management instructions and checks

 Versioning
 ----------

 The TUF specification uses `Semantic Versioning 2.0.0 <https://semver.org/>`_
-for its version numbers.
+(semver) for its version numbers, and a gitflow-based release management:
+
+- The 'master' branch of this repository always points to the latest stable
+  version of the specification.
+- The 'draft' branch of this repository always points to the latest development
+  version of the specification and must always be based off of the latest
+  'master' branch.
+- Contributors must submit changes as pull requests against these branches,
+  depending on the type of the change (see semver rules).
+- For patch-type changes, pull requests may be submitted directly against the
+  'master' branch.
+- For major- and minor-type changes, pull requests must be submitted against
+  the 'draft' branch.
+- Maintainers may, from time to time, decide that the 'draft' branch is ready
+  for a new major or minor release, and submit a pull request from 'draft'
+  against 'master'.
+- Before merging a branch with 'master' the 'last modified date' and 'version'
+  in the specification header must be bumped.
+- Merges with 'master' that originate from the 'draft' branch must bump either
+  the major or minor version number.
+- Merges with 'master' that originate from any other branch must bump the patch
+  version number.
+- Merges with 'master' must be followed by a git tag for the new version
+  number.

I agree this would be nice to automate.
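As a minimal sketch of what such automation might look like (illustrative only; the function name and change-type labels are hypothetical, not part of any spec tooling), the semver bump rules above reduce to:

```python
def bump_version(version, change_type):
    """Return the next spec version for a change, per the semver rules in
    the diff above: patch changes may land directly on 'master', while
    major/minor changes go through 'draft' first."""
    major, minor, patch = (int(part) for part in version.split("."))
    if change_type == "major":
        return f"{major + 1}.0.0"
    if change_type == "minor":
        return f"{major}.{minor + 1}.0"
    if change_type == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown change type: {change_type}")
```

A merge check could call this on the version in the specification header and refuse the merge if it was not bumped accordingly.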

lukpueh

comment created time in 13 days

Pull request review comment theupdateframework/specification

Clarify rollback attack prevention and fast-forward attack recovery

 non-volatile storage as FILENAME.EXT.
 trusted root metadata file.  If the new targets metadata file is not signed
 as required, discard it, abort the update cycle, and report the failure.

-  * **4.3**. **Check for a freeze attack.** The latest known time should be
+  * **4.3**. **Check for a rollback attack.** The version number of the trusted
+  targets metadata file, if any, MUST be less than or equal to the version
+  number of the new targets metadata file.  If the new targets metadata file is
+  older than the trusted targets metadata file, discard it, abort the update
+  cycle, and report the potential rollback attack.
+
+  * **4.4**. **Check for a freeze attack.** The latest known time should be
   lower than the expiration timestamp in the new targets metadata file.  If so,
   the new targets metadata file becomes the trusted targets metadata file.  If
   the new targets metadata file is expired, discard it, abort the update cycle,
   and report the potential freeze attack.

-  * **4.4**. **Perform a preorder depth-first search for metadata about the
-  desired target, beginning with the top-level targets role.**  Note: If
-  any metadata requested in steps 4.4.1 - 4.4.2.3 cannot be downloaded nor
-  validated, end the search and report that the target cannot be found.
+  * **4.5**. **Perform a preorder depth-first search for metadata about the
+  desired target.** Let TARGETS be the current metadata, beginning with the
+  top-level targets metadata role.

-    * **4.4.1**. If this role has been visited before, then skip this role (so
+    * **4.5.1**. If this role has been visited before, then skip this role (so
     that cycles in the delegation graph are avoided).  Otherwise, if an
     application-specific maximum number of roles have been visited, then go to
     step 5 (so that attackers cannot cause the client to waste excessive
     bandwidth or time).  Otherwise, if this role contains metadata about the
     desired target, then go to step 5.

-    * **4.4.2**. Otherwise, recursively search the list of delegations in order
+    * **4.5.2**. Otherwise, recursively search the list of delegations in order
     of appearance.

-      * **4.4.2.1**. If the current delegation is a multi-role delegation,
+      * **4.5.2.1**. Let DELEGATE denote the current target role TARGETS is
+      delegating to.
+
+      * **4.5.2.2**. **Fast-forward attack recovery.** If a threshold of

Are you saying that the snapshot metadata file should be deleted whenever someone rotates a key?

lukpueh

comment created time in 13 days

Pull request review comment theupdateframework/specification

Clarify rollback attack prevention and fast-forward attack recovery

 repo](https://github.com/theupdateframework/specification/issues).
 cycle, report the potential freeze attack.  On the next update cycle, begin
 at step 0 and version N of the root metadata file.

-  * **1.9**. **If the timestamp and / or snapshot keys have been rotated, then
-  delete the trusted timestamp and snapshot metadata files.** This is done in
-  order to recover from fast-forward attacks after the repository has been
-  compromised and recovered. A _fast-forward attack_ happens when attackers
-  arbitrarily increase the version numbers of: (1) the timestamp metadata, (2)
-  the snapshot metadata, and / or (3) the targets, or a delegated targets,
-  metadata file in the snapshot metadata. Please see [the Mercury
+  * **1.9**. **Fast-forward attack recovery** A _fast-forward attack_ happens
+  when attackers arbitrarily increase the version numbers in any of the
+  timestamp, snapshot, targets, or delegated targets metadata. To recover from
+  fast-forward attacks after the repository has been compromised and recovered,
+  certain metadata files need to be deleted as specified in this section.
+  Please see [the Mercury
   paper](https://ssl.engineering.nyu.edu/papers/kuppusamy-mercury-usenix-2017.pdf)
   for more details.

+    * **1.9.1**. **Targets recovery** If a threshold of targets keys have been
+    removed in the new trusted root metadata compared to the previous trusted
+    root metadata, delete the old top-level targets and snapshot metadata
+    files.
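The proposed step 1.9.1 hinges on one comparison between the two trusted root files; a hedged sketch (helper name and arguments are hypothetical):

```python
def needs_targets_recovery(prev_targets_keyids, new_targets_keyids, threshold):
    """Sketch of the proposed 1.9.1: if a threshold of targets keys has
    been removed in the new trusted root metadata relative to the previous
    one, the client deletes its old top-level targets and snapshot
    metadata files."""
    removed = set(prev_targets_keyids) - set(new_targets_keyids)
    return len(removed) >= threshold
```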

I'm not sure why all of these rotations are needed. Isn't this only true in the case of a fast-forward attack? Why in general do you need to rotate the snapshot when there hasn't been a FF attack on the targets role?

Also, if there is a FF attack on a delegated key, I think you don't need to rotate snapshot. The key that was used to sign will not be trusted anymore (post "rotation") so delegations will end up pointing at a new delegated target.

I'd also like to say we're not talking about rotation in a TAP 8 sense here. TAP 8 has another mechanism for dealing with these sorts of issues.

lukpueh

comment created time in 13 days

issue comment secure-systems-lab/ssl-site

Transparent Logs and TUF: clarify compromise recovery

The root metadata in TUF (https://theupdateframework.io/metadata/#root-metadata-rootjson) is signed with keys kept offline that are only needed when a top-level role is compromised. A new root metadata file is pushed out which revokes the old key and adds a new one.

In practice, compromising a targets role (which you need to do to provide malicious software) usually also requires compromise of an offline key and is restricted to only the projects that party is trusted for.

In practice these are each often thresholds of keys, so require multiple offline key compromises...
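As a toy sketch of that recovery model (signature verification elided; names are hypothetical), trusting a pushed-out root file reduces to a threshold check against the previously trusted root:

```python
def new_root_trusted(signing_keyids, prev_root_keyids, prev_threshold):
    """A new root metadata file is trusted only if a threshold of keys
    already trusted by the previous root produced valid signatures on it,
    so revoking an old key and adding a new one is itself an authenticated
    operation. `signing_keyids` stands in for the keyids whose signatures
    on the new root file verified correctly."""
    valid = set(signing_keyids) & set(prev_root_keyids)
    return len(valid) >= prev_threshold
```

A compromised online key alone therefore cannot push a new root: it cannot meet the threshold of previously trusted (offline) root keys.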

On Tue, Feb 4, 2020 at 2:42 AM Filippo Valsorda notifications@github.com wrote:

If there is a stolen / accidentally disclosed key, how would a client know they are talking to you versus getting a malicious update from an attacker?

The client software update is out of band, the sumdb protects the Go module ecosystem, not the Go releases themselves. (The latter are currently just signed in a platform-specific way and distributed via HTTPS and package managers, we are working to make them easily reproducible.)

So in a sense, we do have two layers of keys, although exercising the releases key is a much more noisy action, as it involves making a new Go release.

With TUF, how does a client know to trust the offline key but not the compromised online one?


trishankatdatadog

comment created time in 13 days

Pull request review comment theupdateframework/specification

Clarify rollback attack prevention and fast-forward attack recovery

 repo](https://github.com/theupdateframework/specification/issues).
 cycle, report the potential freeze attack.  On the next update cycle, begin
 at step 0 and version N of the root metadata file.

-  * **1.9**. **If the timestamp and / or snapshot keys have been rotated, then
-  delete the trusted timestamp and snapshot metadata files.** This is done in
-  order to recover from fast-forward attacks after the repository has been
-  compromised and recovered. A _fast-forward attack_ happens when attackers
-  arbitrarily increase the version numbers of: (1) the timestamp metadata, (2)
-  the snapshot metadata, and / or (3) the targets, or a delegated targets,
-  metadata file in the snapshot metadata. Please see [the Mercury
+  * **1.9**. **Fast-forward attack recovery** A _fast-forward attack_ happens
+  when attackers arbitrarily increase the version numbers in any of the
+  timestamp, snapshot, targets, or delegated targets metadata. To recover from
+  fast-forward attacks after the repository has been compromised and recovered,
+  certain metadata files need to be deleted as specified in this section.
+  Please see [the Mercury
   paper](https://ssl.engineering.nyu.edu/papers/kuppusamy-mercury-usenix-2017.pdf)
   for more details.

+    * **1.9.1**. **Targets recovery** If a threshold of targets keys have been
+    removed in the new trusted root metadata compared to the previous trusted
+    root metadata, delete the old top-level targets and snapshot metadata
+    files.

I would think you could just rotate snapshot in this case too. Or would this not work for some reason?

lukpueh

comment created time in 13 days

issue comment secure-systems-lab/ssl-site

Transparent Logs and TUF: clarify compromise recovery

If there is a stolen / accidentally disclosed key, how would a client know they are talking to you versus getting a malicious update from an attacker?

On Mon, Feb 3, 2020 at 9:43 PM Filippo Valsorda notifications@github.com wrote:

Oh and I should mention that sumdb signed tree heads can be signed by multiple keys, allowing such key rolls to happen without loss of availability.

See golang.org/x/mod/sumdb/note https://pkg.go.dev/golang.org/x/mod/sumdb/note.


trishankatdatadog

comment created time in 14 days

push event secure-systems-lab/ssl-site

Trishank K Kuppusamy

commit sha 10f471e22bffaa694d6ef9aa359eff8a18678230

fix first two figures

view details

Justin Cappos

commit sha a577219f885bc02ae3cada508db6919b3234b47b

Merge pull request #94 from trishankatdatadog/trishankatdatadog/transparent-logs

Transparent logs: fix first two figures

view details

push time in 14 days

Pull request review comment theupdateframework/specification

Clarify rollback attack prevention and fast-forward attack recovery

 non-volatile storage as FILENAME.EXT.
 trusted root metadata file.  If the new targets metadata file is not signed
 as required, discard it, abort the update cycle, and report the failure.

-  * **4.3**. **Check for a freeze attack.** The latest known time should be
+  * **4.3**. **Check for a rollback attack.** The version number of the trusted
+  targets metadata file, if any, MUST be less than or equal to the version
+  number of the new targets metadata file.  If the new targets metadata file is
+  older than the trusted targets metadata file, discard it, abort the update
+  cycle, and report the potential rollback attack.
+
+  * **4.4**. **Check for a freeze attack.** The latest known time should be
   lower than the expiration timestamp in the new targets metadata file.  If so,
   the new targets metadata file becomes the trusted targets metadata file.  If
   the new targets metadata file is expired, discard it, abort the update cycle,
   and report the potential freeze attack.

-  * **4.4**. **Perform a preorder depth-first search for metadata about the
-  desired target, beginning with the top-level targets role.**  Note: If
-  any metadata requested in steps 4.4.1 - 4.4.2.3 cannot be downloaded nor
-  validated, end the search and report that the target cannot be found.
+  * **4.5**. **Perform a preorder depth-first search for metadata about the
+  desired target.** Let TARGETS be the current metadata, beginning with the
+  top-level targets metadata role.

-    * **4.4.1**. If this role has been visited before, then skip this role (so
+    * **4.5.1**. If this role has been visited before, then skip this role (so
     that cycles in the delegation graph are avoided).  Otherwise, if an
     application-specific maximum number of roles have been visited, then go to
     step 5 (so that attackers cannot cause the client to waste excessive
     bandwidth or time).  Otherwise, if this role contains metadata about the
     desired target, then go to step 5.

-    * **4.4.2**. Otherwise, recursively search the list of delegations in order
+    * **4.5.2**. Otherwise, recursively search the list of delegations in order
     of appearance.

-      * **4.4.2.1**. If the current delegation is a multi-role delegation,
+      * **4.5.2.1**. Let DELEGATE denote the current target role TARGETS is
+      delegating to.
+
+      * **4.5.2.2**. **Fast-forward attack recovery.** If a threshold of

@mnm678 How does this relate to TAP 8?

lukpueh

comment created time in 14 days

Pull request review comment theupdateframework/specification

Clarify rollback attack prevention and fast-forward attack recovery

 repo](https://github.com/theupdateframework/specification/issues).
 cycle, report the potential freeze attack.  On the next update cycle, begin
 at step 0 and version N of the root metadata file.

-  * **1.9**. **If the timestamp and / or snapshot keys have been rotated, then
-  delete the trusted timestamp and snapshot metadata files.** This is done in
-  order to recover from fast-forward attacks after the repository has been
-  compromised and recovered. A _fast-forward attack_ happens when attackers
-  arbitrarily increase the version numbers of: (1) the timestamp metadata, (2)
-  the snapshot metadata, and / or (3) the targets, or a delegated targets,
-  metadata file in the snapshot metadata. Please see [the Mercury
+  * **1.9**. **Fast-forward attack recovery** A _fast-forward attack_ happens
+  when attackers arbitrarily increase the version numbers in any of the
+  timestamp, snapshot, targets, or delegated targets metadata. To recover from
+  fast-forward attacks after the repository has been compromised and recovered,
+  certain metadata files need to be deleted as specified in this section.
+  Please see [the Mercury
   paper](https://ssl.engineering.nyu.edu/papers/kuppusamy-mercury-usenix-2017.pdf)
   for more details.

+    * **1.9.1**. **Targets recovery** If a threshold of targets keys have been
+    removed in the new trusted root metadata compared to the previous trusted
+    root metadata, delete the old top-level targets and snapshot metadata
+    files.

Why do we delete the snapshot metadata in this case? If that key was compromised, won't it be rotated as well independently?

lukpueh

comment created time in 14 days

Pull request review comment theupdateframework/specification

Clarify rollback attack prevention and fast-forward attack recovery

 repo](https://github.com/theupdateframework/specification/issues).
 cycle, report the potential freeze attack.  On the next update cycle, begin
 at step 0 and version N of the root metadata file.

-  * **1.9**. **If the timestamp and / or snapshot keys have been rotated, then
-  delete the trusted timestamp and snapshot metadata files.** This is done in
-  order to recover from fast-forward attacks after the repository has been
-  compromised and recovered. A _fast-forward attack_ happens when attackers
-  arbitrarily increase the version numbers of: (1) the timestamp metadata, (2)
-  the snapshot metadata, and / or (3) the targets, or a delegated targets,
-  metadata file in the snapshot metadata. Please see [the Mercury
+  * **1.9**. **Fast-forward attack recovery** A _fast-forward attack_ happens
+  when attackers arbitrarily increase the version numbers in any of the
+  timestamp, snapshot, targets, or delegated targets metadata. To recover from
+  fast-forward attacks after the repository has been compromised and recovered,
+  certain metadata files need to be deleted as specified in this section.
+  Please see [the Mercury
   paper](https://ssl.engineering.nyu.edu/papers/kuppusamy-mercury-usenix-2017.pdf)
   for more details.

+    * **1.9.1**. **Targets recovery** If a threshold of targets keys have been

You talk about top-level targets below, but not here. Do you mean to differentiate them? Does any delegated role's key loss result in the top-level targets file being deleted?

lukpueh

comment created time in 14 days

Pull request review comment theupdateframework/specification

Clarify rollback attack prevention and fast-forward attack recovery

 repo](https://github.com/theupdateframework/specification/issues).
 cycle, report the potential freeze attack.  On the next update cycle, begin
 at step 0 and version N of the root metadata file.

-  * **1.9**. **If the timestamp and / or snapshot keys have been rotated, then
-  delete the trusted timestamp and snapshot metadata files.** This is done in
-  order to recover from fast-forward attacks after the repository has been
-  compromised and recovered. A _fast-forward attack_ happens when attackers
-  arbitrarily increase the version numbers of: (1) the timestamp metadata, (2)
-  the snapshot metadata, and / or (3) the targets, or a delegated targets,
-  metadata file in the snapshot metadata. Please see [the Mercury
+  * **1.9**. **Fast-forward attack recovery** A _fast-forward attack_ happens
+  when attackers arbitrarily increase the version numbers in any of the
+  timestamp, snapshot, targets, or delegated targets metadata. To recover from

Suggested replacement:

  timestamp, snapshot, targets, or delegated targets metadata. The attacker's goal
  is to cause clients to refuse to update the metadata later because the attacker's
  listed metadata version number (possibly MAX_INT) is greater than the new valid
  version.  To recover from
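The attacker goal described here can be illustrated with the bare version comparison a client performs (illustrative only; `MAX_INT` stands for whatever ceiling an attacker publishes):

```python
MAX_INT = 2**31 - 1  # an inflated version an attacker might publish

def accepts_update(trusted_version, new_version):
    """The plain anti-rollback rule: only versions at least as new as the
    trusted one are accepted. If a fast-forward attack pinned the trusted
    version at MAX_INT, every legitimate later release is refused, which
    is why recovery must first discard the poisoned metadata."""
    return new_version >= trusted_version
```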
lukpueh

comment created time in 14 days

Pull request review comment theupdateframework/specification

Clarify rollback attack prevention and fast-forward attack recovery

 repo](https://github.com/theupdateframework/specification/issues).
 cycle, report the potential freeze attack.  On the next update cycle, begin
 at step 0 and version N of the root metadata file.

-  * **1.9**. **If the timestamp and / or snapshot keys have been rotated, then
-  delete the trusted timestamp and snapshot metadata files.** This is done in
-  order to recover from fast-forward attacks after the repository has been
-  compromised and recovered. A _fast-forward attack_ happens when attackers
-  arbitrarily increase the version numbers of: (1) the timestamp metadata, (2)
-  the snapshot metadata, and / or (3) the targets, or a delegated targets,
-  metadata file in the snapshot metadata. Please see [the Mercury
+  * **1.9**. **Fast-forward attack recovery** A _fast-forward attack_ happens
+  when attackers arbitrarily increase the version numbers in any of the
+  timestamp, snapshot, targets, or delegated targets metadata. To recover from

Would recommend something like this...

lukpueh

comment created time in 14 days

Pull request review comment theupdateframework/taps

Added TAP for TUF Version Management

 recent specification version supported by both the client and the repository.

 # Motivation

 Various TAPs, including TAPs 3 and 8, propose changes that are not backwards
-compatible. Because these changes are not compatible with previous TUF versions,
+compatible. These non backwards compatible, or breaking, changes add or change

It sounds like you are defining it again here. Define it clearly up front and then use the single clear term. The more you go back and forth between using the meaning of the term sometimes and the definition other times, the more confusing it is. As a reader, it feels like you're trying to add some nuance here which I don't understand.

mnm678

comment created time in 15 days

Pull request review comment theupdateframework/taps

Added TAP for TUF Version Management

 The TUF specification does not currently support breaking changes or changes
 that are not backwards compatible. If a repository and a client are not using
-the same version of the TUF specification, metadata can not be safely and
-reliably verified. This TAP addresses this clash of versions by allowing TUF
+the same version of the TUF specification, differences in metadata format may
+mean that metadata cannot be safely and reliably verified. Any changes that
+affect the metadata in this way are considered breaking changes. This TAP

Here it sounds like you are defining "breaking changes". If so, don't use the term before this...

mnm678

comment created time in 15 days

Pull request review comment theupdateframework/taps

Added TAP for TUF Version Management

 metadata from previous TUF specification versions. These functions allow a
 client to maintain old versions of the specification while still supporting the
 most recent version.

+The top level TUF metadata (root, snapshot, timestamp, and top-level targets)
+used to install an update should all implement the same TUF specification
+version. A specification change may rely on more than one metadata file (for
+example a change in the signing process would affect all metadata types), so
+using the same specification version for top level metadata allows for these

How does this work with TAP 5 / pinning?

mnm678

comment created time in 15 days

Pull request review comment theupdateframework/taps

Added TAP for TUF Version Management

 metadata from previous TUF specification versions. These functions allow a
 client to maintain old versions of the specification while still supporting the
 most recent version.

+The top level TUF metadata (root, snapshot, timestamp, and top-level targets)
+used to install an update should all implement the same TUF specification
+version. A specification change may rely on more than one metadata file (for
+example a change in the signing process would affect all metadata types), so
+using the same specification version for top level metadata allows for these
+large changes to the specification. However, delegated targets may not be
+managed by the same parties as the top level metadata. For this reason, this TAP

It allows it, but how do you implement it? What difficulties arise?
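One minimal way the negotiation this TAP describes ("the most recent specification version supported by both the client and the repository") could be sketched (hypothetical helper; semver-style version strings assumed):

```python
def pick_spec_version(client_supported, repo_supported):
    """Pick the most recent TUF specification version supported by both
    the client and the repository. Versions are '<major>.<minor>.<patch>'
    strings; fails if no version is common to both sides."""
    common = set(client_supported) & set(repo_supported)
    if not common:
        raise ValueError("no common TUF specification version")
    return max(common, key=lambda v: tuple(int(x) for x in v.split(".")))
```

The hard part, as the comment notes, is everything around this: which side advertises its versions where, and what happens to delegated targets managed by other parties.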

mnm678

comment created time in 15 days

Pull request review comment theupdateframework/taps

Added TAP for TUF Version Management

  The TUF specification does not currently support breaking changes or changes

How are breaking changes different from those that are not backwards compatible?

mnm678

comment created time in 15 days

issue comment cncf/sig-security

[Assessment] Cloud Custodian

Okay, it looks like the next step is that the chairs need to review the conflicts. @ultrasaurus @dshaw @pragashj

kapilt

comment created time in 20 days

issue comment cncf/sig-security

[Assessment] Cloud Custodian

> I could lead this, and mentor @ericavonb if that works better.

Great! @justincormack Would you kindly post your conflict statement?

kapilt

comment created time in 20 days

issue comment cncf/sig-security

[Suggestion] Add recommendations for tooling in security assessments

I'm open to this but don't know the tools well enough to say more than "this sounds like a good idea".

@lumjjb Do you want to have a PR where you suggest Pandoc, while leaving it open for people to choose what they like?

lumjjb

comment created time in 21 days

push event secure-systems-lab/ssl-site

Justin Cappos

commit sha 459427a67134d485e7b26ae455478c6e5cb14469

Press update

view details

push time in 24 days

push event cncf/sig-security

Andres Vega

commit sha 0c69383563169aa87452533f6cb7aa60f281f0d9

Update self-assessment.md

Remove double brackets in software links.

view details

Justin Cappos

commit sha 8b4a1e5b727d17581c58f58f22ae6f0c6d523f0a

Merge pull request #334 from anvega/patch-6

Update self-assessment.md

view details

push time in a month

PR merged cncf/sig-security

Update self-assessment.md

Remove double brackets in software links.

+2 -2

0 comment

1 changed file

anvega

pr closed time in a month

push event cncf/sig-security

Andres Vega

commit sha c00719f1cd44e6e057e7e50a1801b790fdb9f198

Update self-assessment.md

Fix capitalization in the title.

view details

Justin Cappos

commit sha 35187f26e87156f0ce510d260e73659c9f2f80c4

Merge pull request #333 from anvega/patch-5

Update self-assessment.md

view details

push time in a month

PR merged cncf/sig-security

Update self-assessment.md

Fix capitalization in the title.

+1 -1

0 comment

1 changed file

anvega

pr closed time in a month

issue comment Lind-Project/lind_project

Bash

Excellent! Sounds like a big step.

On Wed, Jan 22, 2020 at 3:38 PM Nicholas Renner notifications@github.com wrote:

A-ha! I was able to reproduce a minimal version of the bug!

Previously, when trying to create a minimal version I had made a program which forks, and then mallocs in both parent and child. This did not trigger the bug.

Now, I do a malloc in parent pre-fork, and then fork and malloc in child. This is more like what is happening in bash, and triggers the bug at the same point.


rennergade

comment created time in a month

pull request comment cncf/sig-security

Add CODEOWNERS

um... I approved, but would like @dshaw and/or @pragashj to also approve before merging.

Would be nice to also have review of other folks mentioned @JustinCappos @hannibalhuang @ericavonb

I have the same questions you raised @ultrasaurus, but otherwise approve once they are resolved.

lumjjb

comment created time in a month

issue comment secure-systems-lab/ssl-site

Turn TUF/TL comparison into blog post

@Marina Moore mnm678@gmail.com Was this forwarded to me?

On Tue, Jan 21, 2020 at 4:31 PM Lois Anne DeLong notifications@github.com wrote:

I looked at it over the weekend and I believe Marina was going to pass it to Justin to review. Justin generally reviews these posts before we send them out.

Lois


brainwane

comment created time in a month

push event theupdateframework/specification

Lukas Puehringer

commit sha 2eccb595628471bcee820f61faa64c2b96a3df60

Remove 2nd targets rollback attack check

In the client application workflow, remove the rollback attack check for the top-level targets file, which is (1) redundant and (2) prevents recovery from a fast-forward attack.

(1) Rollback attacks, via serving older versions of targets or top-level targets than the previously trusted versions, are already prevented by step 3.3.3 of the client workflow, where version numbers of targets and delegated targets in the new snapshot metadata are asserted to be greater than those in the prior trusted snapshot metadata. This, in combination with the 4.1 check that asserts that the hashes and version of the actual targets metadata match the ones in the new trusted snapshot, makes another version number check, i.e. the one removed in this commit, obsolete.

(2) Fast-forward attack recovery, as described in 1.9, works by having the client remove the trusted timestamp and snapshot metadata after a non-root key rotation, so that the client can overcome the version comparison check, and update from a compromised high version to a recovered lower version. However, 1.9 does not mention removing trusted targets metadata after a key rotation. As a consequence, the additional version number check, removed in this commit, would prevent updating recovered targets metadata after a fast-forward attack.

view details

Justin Cappos

commit sha 0f56aee998db130e6e1c43c5c7141217436b6d33

Merge pull request #65 from lukpueh/rm-2nd-rollback-check

Remove problematic targets rollback attack check

view details

push time in a month

PR merged theupdateframework/specification

Remove problematic targets rollback attack check

In the client application workflow, remove rollback attack check for top-level targets file, which is (1) redundant and (2) prevents recovery from a fast-forward attack.

(1) Rollback attacks, via serving older versions of targets or top-level targets than the previously trusted versions, are already prevented by step 3.3.3 of the client workflow, where version numbers of targets and delegated targets in the new snapshot metadata are asserted to be greater than those in the prior trusted snapshot metadata. This, in combination with the 4.1 check that asserts that the hashes and version of the actual targets metadata match the ones in the new trusted snapshot, makes another version number check, i.e. the one removed in this PR, obsolete.

(2) Fast-forward attack recovery, as described in 1.9, works by having the client remove the trusted timestamp and snapshot metadata after a non-root key rotation, so that the client can overcome the version comparison check, and update from a compromised high version to a recovered lower version. However, 1.9 does not mention removing trusted targets metadata after a key rotation. As a consequence, the additional version number check, removed in this PR, would prevent updating recovered targets metadata after a fast-forward attack.

+8 -14

3 comments

1 changed file

lukpueh

pr closed time in a month

issue comment cncf/sig-security

landscape: map projects to categories

@lumjjb and I are still working on our version of this, but we're at a point where it makes sense to have CNCF help us mock up a small part of what the final product would look like.

We could either do this now for the small part we have done or could wait, depending on what others think makes sense...

ultrasaurus

comment created time in a month

issue comment cncf/sig-security

Assessments Listing

Looking forward to seeing this. I know several folks have been looking for such a resource...

On Tue, Dec 17, 2019 at 11:33 PM Sarah Allen notifications@github.com wrote:

heard via email from @amye https://github.com/amye -- she's tracking down list of audits and will add them here


TheFoxAtWork

comment created time in a month

issue comment cncf/sig-security

[Assessment] Cloud Custodian

Okay. @ericavonb , I'd be happy to have you lead this. I understand the concern about not having done this, but you can rely on @rficcaglia, @ultrasaurus, and me to help out if you have questions / problems. Are you comfortable taking the lead role with us supporting?

kapilt

comment created time in a month

issue comment cncf/sig-security

[Assessment] Cloud Custodian

That only really leaves @ashutosh-narkar, I think. Ash, are you willing to do this?

kapilt

comment created time in a month

push event secure-systems-lab/ssl-site

Justin Cappos

commit sha c2ba7d4de4199e5da1684400e6aa4e6e8a93839e

iPhone press

view details

push time in a month

issue comment cncf/sig-security

[Assessment] Cloud Custodian

We need a lead reviewer. @ericavonb Would you be willing to take on this role?

kapilt

comment created time in a month

issue comment cncf/sig-security

[Assessment] Cloud Custodian

Hard conflicts:

- Reviewer is a maintainer of the project - NO
- Reviewer is a direct report of/to a maintainer of the project - NO
- Reviewer is paid to work on the project - NO
- Reviewer has significant financial interest directly tied to success of the project - NO

Soft conflicts:

- Reviewer belongs to the same company/organization of the project, but does not work on the project - NO
- Reviewer uses the project in his/her work - NO
- Reviewer has contributed to the project - NO
- Reviewer has a personal stake in the project (personal relationships, etc.) - NO

kapilt

comment created time in a month

issue commentsecure-systems-lab/peps

Add a transition plan

Do you just want me to ask someone from our group to add such text or were you looking for more of a draft?

On Wed, Jan 8, 2020 at 3:29 PM Sumana Harihareswara < notifications@github.com> wrote:

@JustinCappos https://github.com/JustinCappos I think this (and #56 https://github.com/secure-systems-lab/peps/pull/56) deserve discussion in the Discourse thread https://discuss.python.org/t/pep-458-secure-pypi-downloads-with-package-signing/2648/ for potential incorporation into the PEP -- could I ask you to suggest it there?


trishankatdatadog

comment created time in a month

pull request commenttheupdateframework/tuf

Fix signature threshold

Note, this was first reported to us by Erik MacLean at Analog Devices, Inc.

More information about that disclosure will be forthcoming.

lukpueh

comment created time in a month

push eventtheupdateframework/tuf

Santiago Torres

commit sha bea6496dc2e7b80effa4393f360d3dd48f7d1955

release: 0.12.2 Signed-off-by: Santiago Torres <santiago@archlinux.org>

view details

Justin Cappos

commit sha 15414c6735516436839857732b0c260f3cc4d4f0

Merge pull request #975 from theupdateframework/bump0.12.2 release: 0.12.2

view details

push time in a month

PR merged theupdateframework/tuf

release: 0.12.2
+8 -2

0 comment

3 changed files

SantiagoTorres

pr closed time in a month

Pull request review commenttheupdateframework/taps

Added TAP for TUF Version Management

* TAP:
* Title: Managing TUF Versions
* Version: 1
* Last-Modified: 22-July-2019
* Author: Marina Moore, Justin Cappos
* Status: Draft
* Content-Type: text/markdown
* Created: 19-December-2018

# Abstract

The TUF specification does not currently support breaking changes or changes that are not backwards compatible. If a repository and a client are not using the same version of the TUF specification, metadata can not be safely and reliably verified. This TAP addresses this clash of versions by allowing TUF implementations to independently manage updates on clients and repositories. This in turn ensures that TUF will continue functioning after breaking changes are implemented.

To manage breaking changes without loss of functionality, this TAP requires two changes: one to the way the TUF specification manages versions, and the other to how TUF implementations perform updates. The former is accomplished by having this TAP require that the specification use Semantic Versioning to separate breaking from non-breaking changes. The latter is achieved by requiring both clients and repositories to maintain sets of TUF metadata that follow different versions of the TUF specification so that reliable updates can continue throughout the process of the specification upgrade. To do so, repositories will generate metadata for multiple TUF specification versions and maintain them in an accessible directory. In addition, clients will include support for metadata from previous TUF versions. To determine the version to use for an update, clients will select the most recent specification version supported by both the client and the repository.

# Motivation

Various TAPs, including TAPs 3 and 8, propose changes that are not backwards compatible. Because these changes are not compatible with previous TUF versions, current implementations that use the existing specification can not access the new features in these TAPs. By creating a way to deal with non-backwards compatible (breaking) changes, TUF will be able to handle a variety of use cases, including those that appear below.

## Use case 1: A repository updates to a new TUF spec version

As new features become available in the TUF specification, repositories may wish to upgrade to a new version. This could include adding TAPs with breaking changes. When the repository upgrades, clients that use an older version of the TUF spec will no longer be able to safely parse metadata from the repository. However, repositories and clients may be managed by different people with different interests who may not be able to coordinate the upgrade to a new TUF version. To resolve this problem, clients will first need a way to determine whether their version of the TUF spec is compatible with the version used by the repository. This is to ensure that clients always parse metadata according to the version that generated that metadata. Then, they will need some way to process updates after this upgrade occurs on the repository to ensure reliable access to updates, even if they are not able to upgrade immediately due to development time or other constraints.

## Use case 2: A client updates to a new TUF spec version

Just as a repository may be upgraded, a TUF client may wish to upgrade to a new TUF spec version to use new features. When the client implements an upgrade that includes a breaking change, it cannot be sure that all repositories have also upgraded to the new specification version. Therefore, after the client upgrades they must ensure that any metadata downloaded from a repository was generated using the new version. If the repository is using an older version, the client should have some way to allow the update to proceed.

## Use case 3: A delegated targets role uses a different TUF spec version than the repository

A delegated role may make and sign metadata using a different version of the TUF specification than the repository hosting the top level roles. Delegated roles and the top level repository may be managed by people in different organizations who are not able to coordinate upgrading to a new version of the TUF specification. In this case, a client should be able to parse delegations that generate metadata with a different TUF specification version than the repository.

## Use case 4: A client downloads metadata from multiple repositories

As described in TAP 4, TUF clients may download metadata from multiple repositories. These repositories do not coordinate with each other, and so may not upgrade to a new TUF specification version at the same time. A client should be able to use multiple repositories that do not use the same version of TUF. For example, a client may download metadata from one repository that uses TUF version 1.0.0 and another repository that uses version 2.0.0.

## Use case 5: Allowing Backwards Compatibility

Existing TUF clients will still expect to download and parse metadata without knowledge of the supported TUF version, even after this TAP is implemented on repositories. Before this TAP there was no method for determining compatibility between a client and a repository, so existing TUF repositories should continue to distribute metadata using their existing method to support existing clients. This means that this mechanism for allowing backwards compatible updates must itself be backwards compatible.

# Rationale

We propose this TAP because as TUF continues to evolve, the need for TUF clients and repositories to upgrade will grow. This sets up the potential for clients that are no longer able to perform updates because they cannot parse the metadata generated by repositories, or for clients to install the wrong images due to changes in TUF. Both of these issues prevent clients from installing the images intended by repository managers, and could lead to critical security or functionality problems. TUF clients and repositories need to be able to upgrade to new versions without preventing secure and reliable access to software updates.

In trying to create more flexible functionality between servers and clients when it comes to dealing with different versions of TUF, we identified two main issues that need to be addressed. First, clients need to be able to determine whether the TUF specification version they implement is compatible with the TUF version of the metadata they receive. This ensures that clients parse metadata according to the correct version to prevent errors from any changes to the metadata in the new version. Second, clients need a way to use this information to determine how to access metadata that is compatible with the version of the TUF specification they implement.

To address the first issue, this TAP proposes standardizing the specification version field to separate breaking changes from non-breaking changes and ensure that the field can be compared across clients and repositories. Separating out non-breaking changes allows these backwards compatible changes to happen at any time without affecting TUF's operation. Therefore, clients and repositories can still coordinate after a non-breaking change occurs. One common framework used to separate versions by type is Semantic Versioning. Semantic Versioning is a versioning scheme popular across open source projects that categorizes specification versions by the scope and criticality of their changes. Breaking changes in a specification would warrant a MAJOR version number increase, while non-breaking changes would warrant only a MINOR version number increase. In addition, Semantic Versioning has a standard way to format version numbers so that they can be parsed and compared across implementations.

To address the second issue of accessing a compatible version, there are three possible approaches. First, each repository could maintain sets of metadata that implement multiple TUF versions while the clients only implement one TUF specification version. In this case, TUF clients could not use metadata from multiple repositories unless they all generate metadata that supports the same TUF version (Use Case 4). In the second approach, each client could maintain multiple versions that implement different versions of the TUF specification while the repositories each generate metadata following a single TUF specification version. In this case, if a repository upgrades to support a new TUF version, clients will be unable to perform updates until support for the new version is added to the client (Use Case 1). The third option is to implement support for multiple TUF versions on both clients and repositories. This option allows clients and repositories to be upgraded to support new versions of the TUF specification independently and supports all use cases mentioned in this TAP.

This TAP adopts the third option, and describes procedures for both clients and repositories to implement multiple TUF specification versions at the same time. This requires both clients and repositories to maintain multiple versions of the TUF spec, as well as changes to the way repositories store metadata and clients parse this metadata.

For repositories, this means continuing support for old TUF versions for some period of time after upgrading. This grace period gives existing clients that implement old versions of the TUF specification time to implement support for a new specification version. Repositories achieve this using a directory structure with a directory for each supported TUF specification version. These directories contain metadata that supports the given TUF specification version. Using these directories, a client is able to choose the most recent metadata they support. More details about this directory structure are contained in the [specification](#how-a-repository-updates).

On the client side, this TAP also requires maintenance of multiple versions that support different TUF specification versions to allow for communication with various repositories. To do so, it is recommended that clients maintain functions that can be used to validate metadata from previous TUF specification versions. These functions allow a client to maintain old versions of the specification while still supporting the most recent version.
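The negotiation the TAP describes -- the client picking the most recent specification version that both it and the repository support, then reading metadata from that version's directory -- can be sketched roughly as follows. The helper names and the per-version directory layout are illustrative assumptions, not part of the TUF reference implementation:

```python
# Hypothetical sketch of the version-negotiation step described in the TAP.
# Assumes the repository lays out metadata in one directory per supported
# spec version (e.g. "1.0.0/root.json"), as the TAP's directory-structure
# discussion suggests.

def parse_semver(version):
    """Split a Semantic Versioning string into comparable integers."""
    major, minor, patch = (int(part) for part in version.split("."))
    return (major, minor, patch)

def select_spec_version(client_supported, repo_published):
    """Pick the most recent spec version supported by both sides.

    client_supported: versions this client can parse, e.g. ["1.0.0", "2.0.0"]
    repo_published:   versions the repository generates metadata for
    Returns the chosen version string, or None if there is no overlap
    (in which case the client cannot safely verify this repository).
    """
    common = set(client_supported) & set(repo_published)
    if not common:
        return None
    return max(common, key=parse_semver)

# A client supporting 1.0.0 and 2.0.0 talking to a repository that still
# publishes only 1.x metadata falls back to the shared older version:
chosen = select_spec_version(["1.0.0", "2.0.0"], ["1.0.0", "1.5.0"])
print(chosen)  # prints "1.0.0"
# Metadata would then be fetched from the matching per-version directory,
# e.g. f"{chosen}/root.json" under the repository's metadata URL.
```

Because Semantic Versioning reserves MAJOR bumps for breaking changes, a client that finds no common version knows it is dealing with metadata it cannot safely parse, which is exactly the failure the TAP is trying to make explicit.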

How does one handle different versions of metadata? Can you have targets files from different versions than the root, snapshot, etc.? What files may be different?

mnm678

comment created time in a month

Pull request review commenttheupdateframework/taps

Added TAP for TUF Version Management

We propose this TAP because as TUF continues to evolve, the need for TUF clients and repositories to upgrade will grow. This sets up the potential for clients
and repositories to upgrade their TUF metadata format will grow. This sets up the potential for clients
mnm678

comment created time in a month

Pull request review commenttheupdateframework/taps

Added TAP for TUF Version Management

## Use case 5: Allowing Backwards Compatibility

Existing TUF clients will still expect to download and parse metadata without

I'm confused about what you mean here. I don't understand the case you are trying to convey.

mnm678

comment created time in a month

Pull request review commenttheupdateframework/taps

Added TAP for TUF Version Management

As described in TAP 4, TUF clients may download metadata from multiple repositories. These repositories do not coordinate with each other, and so may
repositories. These repositories may not coordinate with each other, and so may
mnm678

comment created time in a month

Pull request review commenttheupdateframework/taps

Added TAP for TUF Version Management

Therefore, after the client upgrades they must ensure that any metadata downloaded from a repository was generated using the new version. If the repository is using an older version, the client

Why does the metadata need to be the latest version? Do newer versions always break old clients? Can't the client understand both versions?

mnm678

comment created time in a month

Pull request review commenttheupdateframework/taps

Added TAP for TUF Version Management

When the repository upgrades, clients that use an older version of the TUF spec will no longer be able to safely parse metadata from the repository.

What does it mean to safely parse?

mnm678

comment created time in a month

Pull request review commenttheupdateframework/taps

Added TAP for TUF Version Management

As new features become available in the TUF specification, repositories may wish to upgrade to a new version. This could include adding TAPs with breaking

Need to have defined "breaking changes" at this point.

mnm678

comment created time in a month

Pull request review commenttheupdateframework/taps

Added TAP for TUF Version Management

If a repository and a client are not using the same version of the TUF specification, metadata can not be safely and
the same version of the TUF specification, differences in metadata format may mean that metadata cannot be safely and
mnm678

comment created time in a month

Pull request review commenttheupdateframework/taps

Added TAP for TUF Version Management

To manage breaking changes without loss of functionality, this TAP requires two

without loss of functionality

What functionality and for whom? What is the goal?

You probably need something early on that shows an example of changes that are of each type (breaking, forwards compatible, patch notes).

mnm678

comment created time in a month

Pull request review commenttheupdateframework/taps

Added TAP for TUF Version Management

By creating a way to deal with non-backwards compatible (breaking) changes, TUF will be able to handle a variety of use

What do you mean by handle here? The goal is still a bit unclear.

mnm678

comment created time in a month

push eventsecure-systems-lab/ssl-site

Justin Cappos

commit sha 82c7b6715a1dc4268ba621c568b9c12a73d64415

medium title update

view details

push time in a month

Pull request review commentuptane/uptane-standard

Partial verification only requires the Director's Targets metadata.

To properly check Targets metadata, an ECU SHOULD:

 1. Download up to Z number of bytes, constructing the metadata filename as defined in {{metadata_filename_rules}}. The value for Z is set by the implementor. For example, Z may be tens of kilobytes.
 1. The version number of the new Targets metadata file MUST match the version number listed in the latest Snapshot metadata. If the version number does not match, discard it, abort the update cycle, and report the failure. (Checks for a mix-and-match attack.) Skip this step if checking Targets metadata on a partial verification ECU; partial verification ECUs will not have Snapshot metadata.
-1. Check that the Targets metadata has been signed by the threshold of keys specified in the relevant metadata file (Checks for an arbitrary software attack):
+1. Check that the Targets metadata has been signed by the threshold of keys specified in the relevant metadata file. (Checks for an arbitrary software attack.) Skip this step if checking Targets metadata on a partial verification ECU.

Yes, this matches my understanding. A PV secondary doesn't have a separate way to update its root of trust without updating the image. If you wanted to add that and do it securely, you'd just make it a FV secondary...

patrickvacek

comment created time in a month

issue commenttheupdateframework/tuf

Potential DoS for attacker that can create metadata files...

Because this seems like it will relate to crypto agility, I'd like @mnm678 to take a look.

JustinCappos

comment created time in a month

issue openedtheupdateframework/tuf

Potential DoS for attacker that can create metadata files...

We received the report below about an attacker who can attach many invalid signatures to a metadata file, delaying the point at which the client determines the signatures are not valid. This delay may last at least a few minutes, and possibly longer, especially if multiple files are affected.

Possible remediations include failing earlier (possibly immediately) if any signature is not valid.
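The fail-fast remediation suggested above can be sketched as follows. This is an assumed illustration, not the actual tuf API: `verify_signature` stands in for a real cryptographic check, and whether an invalid signature should be fatal or merely skipped is a policy question the issue leaves open — this sketch takes the strictest reading.

```python
# Illustrative sketch of the fail-fast remediation for the reported DoS.
# verify_signature() is an assumed helper, not the real tuf interface.

class InvalidMetadataError(Exception):
    """Raised when a metadata file carries an invalid signature."""

def verify_threshold_fail_fast(signatures, threshold, verify_signature):
    """Count valid signatures, aborting on the first invalid one.

    A naive loop would keep checking every one of possibly thousands of
    bogus signatures; raising immediately bounds the work an attacker can
    force on the client by stuffing a metadata file with invalid entries.
    """
    valid = 0
    for sig in signatures:
        if not verify_signature(sig):
            # Fail early: well-formed metadata should carry no invalid
            # signatures, so treat the first one as fatal.
            raise InvalidMetadataError("invalid signature encountered")
        valid += 1
        if valid >= threshold:
            return True  # threshold reached; no need to check the rest
    return False

# With 500 copies of one bad signature, only the first is ever checked:
bad_sigs = [{"sig": "bogus"}] * 500
try:
    verify_threshold_fail_fast(bad_sigs, 2, lambda s: s["sig"] != "bogus")
except InvalidMetadataError as e:
    print("aborted:", e)  # prints: aborted: invalid signature encountered
```

Stopping once the threshold is reached (rather than verifying every signature) also caps the cost on the happy path, which matters when the file-size limits permit thousands of signatures, as the report notes for targets metadata.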

Credit to Erik Maclean - Analog Devices, Inc. for reporting this issue.

(More Details below.)

Tracking ID: CVE-2020-6173

Summary:

Potential Client-side Denial of Service

Description:

While maximum file size is restricted for downloading, the client may attempt to validate a large number of signatures. We have been able to add over 500 copies of the same invalid signature into the root.json file, which results in the client attempting to validate each one, spending several minutes on validation. The file size limit of target.json is larger and may allow up to 5000 signatures, further increasing the amount of time spent in validation.

Security Impact: Denial of Service

Affected Version:

Identified at commit 9fde70fbb3ba6a3385b80046559058d939833c60, suspect all versions.

Credit:

Erik Maclean - Analog Devices, Inc.

created time in a month

push eventcncf/sig-security

Sarah Allen

commit sha 7b979b28de5c32df0687966721c36ae2c7621f23

intake process and prioritization

view details

Sarah Allen

commit sha 12474605d3f8a809810148ebf217b8775a0a5027

wrap at 80 col

view details

Sarah Allen

commit sha b828cc761d9ca53fa422c0914305dad2cef452d6

wrap README at 80 col

view details

Sarah Allen

commit sha 225603701453dec0d382a6d8503e4e54142dcb8e

address PR feedback

view details

Sarah Allen

commit sha 21c231033744ff69c563090b303c9fe5e7f3a42b

address some feedback, add reference to queue project and github logistics

view details

Sarah Allen

commit sha afdcb349362ddcebfa8917b3d987b13c6ea7cfbc

added note that in future security assessment will happen before audit

view details

Sarah Allen

commit sha 972071c4d06f7ac4d5a81b9e29049e35b60d0135

Merge branch 'master' into intake-process

view details

Sarah Allen

commit sha 52b6698c1a59c9f62cdb5f359eaad5308e799a8a

small tweaks to improve words/punctuation

view details

Sarah Allen

commit sha 0971cddac6c8fde38722d54afbd1a92fa9523571

fix broken link, typo

view details

Sarah Allen

commit sha 73bdaa81baf2b170af2c8137d76197cc0dda3ea1

Merge branch 'master' into intake-process

view details

Justin Cappos

commit sha 968279c30397a331a9c67f888136f58388eeca65

Update assessments/README.md

view details

Justin Cappos

commit sha 315e202ec6db17a9da5687bd65c42ced843f186f

Update assessments/README.md

view details

Sarah Allen

commit sha d4493c22f116559daabce796584c3cf5ed3d4a6d

priorities = guidance with TOC communication, addressing feedback from Justin Cappos

added a description of priorities as guidance, where we'll communicate what we're doing on a regular heartbeat, and facilitator and named chair coordinate

view details

Sarah Allen

commit sha 6759d59533bdb379c0f330fd259f5f6e729b3920

removing additional point, unrelated to intake-process

chatted with JustinCappos via Slack who agreed this can be addressed as separate PR

view details

Justin Cappos

commit sha d6303cb9dbeb58ea1bdaf4c9ae7abc94b29b7150

Merge pull request #296 from cncf/intake-process

intake process and prioritization

view details

push time in a month
