Trishank Karthik Kuppusamy (trishankatdatadog) · @DataDog · New York, NY · https://keybase.io/trishankdatadog

Staff Security Engineer at @DataDog. Helped to research and develop @theupdateframework and @uptane.

DataDog/integrations-core 439

Core integrations of the Datadog Agent

DataDog/yubikey 380

YubiKey at Datadog

cnabio/signy 18

Go implementation for CNAB content trust verification using TUF, Notary, and in-toto

engineerd/pysigny 1

[WIP] Python reference implementation for the CNAB security specification, with TUF, and in-toto

trishankatdatadog/hvac-openpgp 1

An extension to HVAC for a Transit-Secrets-Engine-like API to vault-gpg-plugin

DataDog/pip 0

The Python Package Installer (recommended by PyPA)

JustinCappos/toc 0

Technical Oversight Committee (TOC)

secure-systems-lab/peps 0

Python Enhancement Proposals

secure-systems-lab/signing-spec 0

A specification for signing methods and formats used by Secure Systems Lab projects.

trishankatdatadog/aws-vault 0

A vault for securely storing and accessing AWS credentials in development environments

created tag trishankatdatadog/crypto

tag v0.0.0-20200930202336-a58df677f9f7

[mirror] Go supplementary cryptography libraries

created time in 5 hours

push event trishankatdatadog/crypto

Trishank Karthik Kuppusamy

commit sha a58df677f9f71328522201c128558b5610e0e6b0

functions for signing subkey cross-certification Signed-off-by: Trishank Karthik Kuppusamy <trishank.kuppusamy@datadoghq.com>

push time in 5 hours

created tag trishankatdatadog/crypto

tag v0.0.0-20200930184313-2fa05e1d64ba

[mirror] Go supplementary cryptography libraries

created time in 6 hours

created tag trishankatdatadog/crypto

tag v0.0.0-20200930150000-2fa05e1d64ba

[mirror] Go supplementary cryptography libraries

created time in 6 hours

created tag trishankatdatadog/crypto

tag v0.0.0-20200930150000-2fa05e1d64ba60c7cef42d4c26f5e3aca202f1a5

[mirror] Go supplementary cryptography libraries

created time in 6 hours

Pull request review comment theupdateframework/specification

Checking version after verifying signatures

   VERSION_NUMBER is the version number of the targets metadata file listed in
   the snapshot metadata file.  In either case, the client MUST write the file
   to non-volatile storage as FILENAME.EXT.

-  * **4.1**. **Check against snapshot metadata.** The hashes and version
-  number of the new targets metadata file MUST match the hashes (if any) and
-  version number listed in the trusted snapshot metadata.  This is done, in
-  part, to prevent a mix-and-match attack by man-in-the-middle attackers.  If
-  the new targets metadata file does not match, discard it, abort the update
-  cycle, and report the failure.
+  * **4.1**. **Check against snapshot role's targets hash.** The hashes
+  of the new targets metadata file MUST match the hashes (if any) listed in the
+  trusted snapshot metadata.  This is done, in part, to prevent a mix-and-match
+  attack by man-in-the-middle attackers.  If the new targets metadata file does
+  not match, discard the new target metadata, abort the update cycle, and
+  report the failure.

   * **4.2**. **Check for an arbitrary software attack.** The new targets
   metadata file MUST have been signed by a threshold of keys specified in the
   trusted root metadata file.  If the new targets metadata file is not signed
   as required, discard it, abort the update cycle, and report the failure.

-  * **4.3**. **Check for a freeze attack.** The latest known time should be
+  * **4.3**. **Check against snapshot role's targets version.** The version

Yeah, I think we should just say why we're doing this, because we try to do it for the other checks.

erickt

comment created time in 9 hours
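The hunk above splits one check into a hash check (4.1), a signature-threshold check (4.2), and a version check (4.3). A minimal Python sketch of that ordering (all names are hypothetical, not the reference implementation):

```python
import hashlib
import json

def check_targets_metadata(new_targets_bytes, snapshot_meta, verify_threshold):
    """Sketch of the client-side targets checks debated above.

    snapshot_meta is assumed to carry the trusted snapshot role's optional
    'hashes' and required 'version' for targets.json; verify_threshold is a
    hypothetical callable that checks a threshold of root-trusted signatures.
    """
    # 4.1: check against the snapshot role's targets hash (if any).
    expected = snapshot_meta.get("hashes", {}).get("sha256")
    if expected is not None:
        actual = hashlib.sha256(new_targets_bytes).hexdigest()
        if actual != expected:
            raise ValueError("mix-and-match: targets hash mismatch")

    # 4.2: check for an arbitrary software attack (threshold of signatures).
    if not verify_threshold(new_targets_bytes):
        raise ValueError("arbitrary software attack: bad signatures")

    # 4.3: check against the snapshot role's targets version.
    version = json.loads(new_targets_bytes)["signed"]["version"]
    if version != snapshot_meta["version"]:
        raise ValueError("rollback/mix-and-match: version mismatch")
    return version
```

On failure the sketch raises, matching the spec's "discard it, abort the update cycle, and report the failure."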

issue comment theupdateframework/tuf

Make confined_target_dirs optional field in the MIRROR_SCHEMA

We should just remove it. Nobody uses it AFAIK.

MVrachev

comment created time in 9 hours

issue comment theupdateframework/specification

Rewriting the workflow to call out to sub-sections

Maybe we should just drop Markdown and use something more standards-specific. There are a few tools out there for this. We should shop around.

erickt

comment created time in 9 hours

pull request comment theupdateframework/specification

Remove slow retrieval attacks from protections

On second thought, we may want to reinstate this in the future, but yes: there is currently nothing in the specification or reference implementation that specifies precisely how to do it. I am starting a conversation on the CNCF Slack channel for more high-bandwidth discussion. Will post findings here later.

joshuagl

comment created time in 11 hours

fork trishankatdatadog/crypto

[mirror] Go supplementary cryptography libraries

https://godoc.org/golang.org/x/crypto

fork in a day

PR opened theupdateframework/specification

Number sections in ToC

Did not bump version because no effective change.

+8 -8

0 comment

1 changed file

pr created time in a day

create branch theupdateframework/specification

branch : trishankatdatadog-patch-1

created branch time in a day

pull request comment theupdateframework/specification

Persist metadata to local store after validation

Also, step 4.5 in Section 5 is already inconsistent, because it refers to the now outdated Sections 4.4.1 - 4.4.2.3...

erickt

comment created time in a day

pull request comment theupdateframework/specification

Persist metadata to local store after validation

Sorry, just saw this. I wish we had clarified exactly when to persist delegated targets metadata, although it may be obvious. We can always add this later!

erickt

comment created time in a day

Pull request review comment theupdateframework/specification

Add TAP 11 to the specification.

 repo](https://github.com/theupdateframework/specification/issues).

   partly because the specific threat posted to clients in many situations is
   largely determined by how the framework is being used.

+*  **2.3. Protocol, Operations, Usage, and Format (POUF) Documents**

On second thought, I think we should clarify that although we may concretely talk about files and use JSON as a pedagogical data format here, it is in no way an endorsement or a requirement.

mnm678

comment created time in a day

Pull request review comment theupdateframework/specification

Checking version after verifying signatures

   VERSION_NUMBER is the version number of the targets metadata file listed in
   the snapshot metadata file.  In either case, the client MUST write the file
   to non-volatile storage as FILENAME.EXT.

-  * **4.1**. **Check against snapshot metadata.** The hashes and version
-  number of the new targets metadata file MUST match the hashes (if any) and
-  version number listed in the trusted snapshot metadata.  This is done, in
-  part, to prevent a mix-and-match attack by man-in-the-middle attackers.  If
-  the new targets metadata file does not match, discard it, abort the update
-  cycle, and report the failure.
+  * **4.1**. **Check against snapshot role's targets hash.** The hashes
+  of the new targets metadata file MUST match the hashes (if any) listed in the
+  trusted snapshot metadata.  This is done, in part, to prevent a mix-and-match
+  attack by man-in-the-middle attackers.  If the new targets metadata file does
+  not match, discard the new target metadata, abort the update cycle, and
+  report the failure.

   * **4.2**. **Check for an arbitrary software attack.** The new targets
   metadata file MUST have been signed by a threshold of keys specified in the
   trusted root metadata file.  If the new targets metadata file is not signed
   as required, discard it, abort the update cycle, and report the failure.

-  * **4.3**. **Check for a freeze attack.** The latest known time should be
+  * **4.3**. **Check against snapshot role's targets version.** The version

This is to check for a mix-and-match or rollback attack.

erickt

comment created time in a day

Pull request review comment theupdateframework/specification

Clean up language

   of the form VERSION_NUMBER.FILENAME.EXT (e.g., 42.targets.json), where
   VERSION_NUMBER is the version number of the targets metadata file listed in
   the snapshot metadata file.

-  * **4.1**. **Check against snapshot metadata.** The hashes and version
-  number of the new targets metadata file MUST match the hashes (if any) and
+  * **5.4.1**. **Check against snapshot metadata.** The hashes and version
+  number of the new targets metadata file MUST match the hashes, if any, and
   version number listed in the trusted snapshot metadata.  This is done, in
   part, to prevent a mix-and-match attack by man-in-the-middle attackers.  If
   the new targets metadata file does not match, discard it, abort the update
   cycle, and report the failure.

-  * **4.2**. **Check for an arbitrary software attack.** The new targets
+  * **5.4.2**. **Check for an arbitrary software attack.** The new targets
   metadata file MUST have been signed by a threshold of keys specified in the
   trusted root metadata file.  If the new targets metadata file is not signed
   as required, discard it, abort the update cycle, and report the failure.

-  * **4.3**. **Check for a freeze attack.** The latest known time should be
+  * **5.4.3**. **Check for a freeze attack.** The latest known time MUST be
   lower than the expiration timestamp in the new targets metadata file.  If so,
   the new targets metadata file becomes the trusted targets metadata file.  If
   the new targets metadata file is expired, discard it, abort the update cycle,
   and report the potential freeze attack.

-  * **4.4**. **Persist targets metadata.** The client MUST write the file to
+  * **5.4.4**. **Persist targets metadata.** The client MUST write the file to
   non-volatile storage as FILENAME.EXT (e.g. targets.json).

-  * **4.5**. **Perform a preorder depth-first search for metadata about the
+  * **5.4.5**. **Perform a pre-order depth-first search for metadata about the
   desired target, beginning with the top-level targets role.**  Note: If
-  any metadata requested in steps 4.4.1 - 4.4.2.3 cannot be downloaded nor
+  any metadata requested in steps 5.4.5.1 - 5.4.5.2.3 cannot be downloaded nor

   any metadata requested in steps 5.4.5.1 - 5.4.5.2 cannot be downloaded nor
erickt

comment created time in a day
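Step 5.4.5's pre-order depth-first search over delegated targets roles can be sketched as follows (the get_targets/get_delegations hooks and data shapes are assumptions for illustration, not go-tuf's or python-tuf's API):

```python
def preorder_target_search(role, target_path, get_delegations, get_targets):
    """Pre-order depth-first search for metadata about a desired target,
    beginning with the given (top-level) targets role.

    get_targets(role) returns that role's target dict; get_delegations(role)
    returns the roles it delegates to, in listed order.  Both are
    hypothetical hooks standing in for metadata download and verification.
    """
    visited = set()
    stack = [role]
    while stack:
        current = stack.pop()
        if current in visited:      # guard against delegation cycles
            continue
        visited.add(current)
        # Pre-order: inspect the role itself before descending.
        targets = get_targets(current)
        if target_path in targets:
            return targets[target_path]
        # Push children reversed so the first-listed delegation is
        # searched first (stack-based DFS pops from the end).
        stack.extend(reversed(get_delegations(current)))
    return None  # target not described by any reachable role
```

A real client would also honor terminating delegations and path patterns; those are omitted to keep the traversal itself visible.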

Pull request review comment theupdateframework/specification

Clean up language

 repo](https://github.com/theupdateframework/specification/issues).

         "keyval" : {"public" : PUBLIC}
       }

-   where PUBLIC is in PEM format and a string.  All RSA keys must be at least
+   where PUBLIC is in PEM format and a string.  All RSA keys MUST be at least

I understand why, but do we actually enforce this? Do we reject RSA public keys that correspond to private keys < 2048 bits?

erickt

comment created time in a day
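If the answer to the question above is no, enforcement is essentially a one-line check on the modulus size. A Python sketch (not tuf's actual code; the entry point and names are hypothetical):

```python
def rsa_modulus_bits(n):
    """Bit length of an RSA modulus n; the 'key size' the spec's MUST refers to."""
    return n.bit_length()

def check_rsa_key_size(n, minimum=2048):
    """Reject public keys whose modulus is shorter than `minimum` bits.

    This is the enforcement the review asks about, sketched over a bare
    modulus integer; a real implementation would first parse the PEM.
    """
    bits = rsa_modulus_bits(n)
    if bits < minimum:
        raise ValueError(f"RSA key too small: {bits} < {minimum} bits")
    return bits
```

Run against each key in root metadata at load time, this would turn the spec's MUST into an actual rejection rather than documentation.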

pull request comment theupdateframework/go-tuf

Incorporate python-tuf v0.14 into the interoperability test suite

Hmm, test failed, why? 👀

erickt

comment created time in 2 days

issue comment theupdateframework/tuf

repository_lib.py lists signatures in an non-deterministic order

Yes, we might as well do this, but are we also fixing entropy during the tests? Because signatures themselves are not necessarily deterministic...

erickt

comment created time in 2 days
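One way to get the determinism asked about: sort the signature list by keyid and serialize with sorted keys, so only the signature bytes themselves (for non-deterministic schemes such as RSA-PSS or ECDSA) can vary between runs. A sketch, not repository_lib.py's actual fix:

```python
import json

def canonical_signature_order(signatures):
    """Sort a metadata 'signatures' list by keyid so repeated writes are
    byte-for-byte stable regardless of the order keys were used to sign."""
    return sorted(signatures, key=lambda sig: sig["keyid"])

def dump_signed(signed, signatures):
    # Deterministic serialization: sorted signatures plus sorted dict keys.
    return json.dumps(
        {"signatures": canonical_signature_order(signatures), "signed": signed},
        sort_keys=True,
    )
```

As the comment notes, tests that compare full files would still need fixed entropy (or a deterministic scheme like Ed25519) for the signature values themselves.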

pull request comment theupdateframework/tuf

Raise an error on loading/writing unsigned metadata

The client definitely has to be unwaveringly and uncompromisingly strict: no threshold of sigs, no go.

As for repository/developer tools, I'm okay with explicit warnings.

sechkova

comment created time in 2 days

Pull request review comment php-tuf/php-tuf

Implement updating the root

 class Updater {

+    const MAX_ROOT_DOWNLOADS = 10000000;

No problem 🙂 BTW, as an aside, you and your team might like to join us to discuss #tuf on the CNCF Slack! Let me know if you need an invite, just email trishank.kuppusamy [that-symbol] datadoghq.com

tedbow

comment created time in 2 days

delete branch DataDog/yubikey

delete branch : gpi/doc

delete time in 2 days

push event DataDog/yubikey

Gaëtan

commit sha f2a240e66840e6be3593b939768502e9e44872ce

Add documentation to configure a computer to use a configured Yubikey (#55) Add documentation to configure a computer to use a configured Yubikey Signed-off-by: Trishank Karthik Kuppusamy <trishank.kuppusamy@datadoghq.com> Co-authored-by: Trishank Karthik Kuppusamy <trishank.kuppusamy@datadoghq.com>

push time in 2 days

PR merged DataDog/yubikey

Add documentation to configure a computer to use a configured Yubikey

Add documentation to explain how to configure a computer to use an already configured Yubikey. Fixes #51

+35 -3

2 comments

2 changed files

daisukixci

pr closed time in 2 days

issue closed DataDog/yubikey

Q: How to setup on multiple machines?

This is a question and not an issue 😃

I was wondering how I can set this up for multiple machines? I have two personal machines, and looking at the code it will set up new GPG keys, reset the YubiKey, etc.

What if I want to set this up once on one machine, but then set up everything on a second machine without recreating keys / resetting the YubiKey?

I understand if this is not a supported behavior.

closed time in 2 days

donferi

push event DataDog/yubikey

Trishank Karthik Kuppusamy

commit sha 34426a57959d1e3e21e80d59dc6e22cdb5de03b0

minor edits for readability Signed-off-by: Trishank Karthik Kuppusamy <trishank.kuppusamy@datadoghq.com>

push time in 2 days

delete branch trishankatdatadog/vault-gpg-plugin

delete branch : trishankatdatadog/create-duplicate-key

delete time in 2 days

pull request comment LeSuisse/vault-gpg-plugin

Error on create duplicate key

When I designed this endpoint I voluntarily omitted this check because I wanted to be consistent with the built-in engines of Vault. Engines that use a named key do not protect against the overwriting of a specific key, at least not in an explicit manner. For example, the TOTP engine behaves like this one and just overwrites the existing key. The transit engine has a more subtle behavior: it does not overwrite the key, but it does throw an error. I'm however fine with the proposed change, since overwriting a key can have disastrous consequences, and I prefer an explicit error to something silently doing nothing.

Thanks for merging this!

From my tests, the Transit SE does not overwrite an existing key or throw an error, but it does return a warning. I agree that it's better to throw an explicit error than a "silent" warning that can be ignored.

trishankatdatadog

comment created time in 2 days
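The explicit-error behavior preferred above, overwrite nothing and fail loudly, can be sketched like this (a toy in-memory store for illustration, not the vault-gpg-plugin change itself):

```python
class KeyExistsError(Exception):
    """Raised when creating a named key that already exists."""

def create_key(store, name, material):
    """Refuse to overwrite an existing named key.

    This is the explicit-error behavior preferred in the discussion above,
    as opposed to Transit's ignorable warning or TOTP's silent overwrite.
    The dict-backed store and names here are hypothetical.
    """
    if name in store:
        raise KeyExistsError(f"key {name!r} already exists; delete it first")
    store[name] = material
    return store[name]
```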

issue comment DataDog/yubikey

Issue with PIN not being typed

Best to read his debug log or do a remote Zoom debugging session, I think...

donferi

comment created time in 4 days

pull request comment DataDog/yubikey

Add documentation to configure a computer to use a configured Yubikey

This is great, thanks!

Would you do me a favour? Please move the text to a subsection under the Optional document. There is no need to link it from the main README, but people who go to the Optional document can find it 🙂

daisukixci

comment created time in 4 days

push event theupdateframework/go-tuf

Erick Tryzelaar

commit sha b383bafd27472310a650f3733e686163a868b71a

Remove Signature method, TUF-0.9 compatible keyid This removes the TUF-0.9 "Method" field from Signature, and stops generating TUF-0.9 compatible keyids. Change-Id: I41ad55f8aa0cfcd3c6d28c86c6c00f60dbce0e8b

push time in 5 days

PR merged theupdateframework/go-tuf

Remove Signature "Method" field, TUF-0.9 compatible keyids

This removes the TUF-0.9 "Method" field from Signature, and stops generating TUF-0.9 compatible keyids.

+254 -1668

3 comments

73 changed files

erickt

pr closed time in 5 days

push event theupdateframework/go-tuf

Christian Rebischke

commit sha 9c85946f02063352668365ad763f892d46a461bb

Move CI to Github and add more Badges This commit fixes CI via moving to Github Actions. We do matrix checks from now on. This means go-tuf will be checked against MacOS and Linux with Go version 1.13.x. And because everybody likes badges, we are going to add a bunch of new shiny badges to the project.

Christian Rebischke

commit sha 2e67459e867a3c86ba36e595edd0f2a3e89d746a

add python support

Trishank Karthik Kuppusamy

commit sha fab0b5b7c127aed416fb1a97194143358271b530

Merge pull request #116 from shibumi/shibumi/fix-ci-and-add-badges Move CI to Github and add more Badges

push time in 5 days

PR merged theupdateframework/go-tuf

Move CI to Github and add more Badges

This commit fixes CI via moving to Github Actions. We do matrix checks from now on. This means go-tuf will be checked against MacOS and Linux with Go version 1.13.x. And because everybody likes badges, we are going to add a bunch of new shiny badges to the project.

While doing this, I came across a few issues:

  1. The tests fail on Go 1.14.x and Go 1.15.x on all platforms
  2. The tests fail on Windows
+37 -17

4 comments

3 changed files

shibumi

pr closed time in 5 days

push event theupdateframework/go-tuf

Justin Mattson

commit sha a71856067acca1791f5af504d2d18106d5f78d48

Switch from boltdb to leveldb The functionality is similar, but leveldb does not require mmap which Fuchsia doesn't currently support. Also remove dependency on the Docker term package because Fuchsia this relies on Termios which we don't want to port now, if ever. (This is mostly a re-apply of https://fuchsia.googlesource.com/amber/+/53b44641b4e62c183fb54a65253d885affb714da) Change-Id: I0a6e4bd433b2ba36626b72bbbcd6ead05f7c5343

Justin Mattson

commit sha 517a4977b8102c3afbacb1065c7b1189a23bc0d7

Try the empty password Change-Id: Ic4b6fb9237ccf30c2dd9161a137bdd3c8919e6be

Justin Mattson

commit sha aaa7c5d5aeb5081ee20b65623bba249ae702baef

Allow HTTP Client to be supplied This will allow for more customized behavior than is currently available. Change-Id: I45e1bfe66469148c5bd6d228af0a7002d8f048a4

Justin Mattson

commit sha da5e1f99ee36ac6c9b1fdbb71e6a569279302b21

Add API to get/set metadata versions This allows for greater control over versioning of the repo. Change-Id: I163230ab14816c31b2d86c8b2705f9bdbf124bec

James Tucker

commit sha 9de12d7af06f6cb229d32cddcd6abb96950aba0e

[local_store] make the leveldb store a package We don't want to compile in leveldb and snappy in some cases, so this is moved to it's own package. We can likely remove it later. Change-Id: I6415ec8667d6f704ae28fa3132d543885f179f2c

Erick Tryzelaar

commit sha fde627a966116f26a16382e9cbb0a3291dedc72e

Fix references to FileLocalStore to get tests to pass This fixes some references to FileLocalStore in order to get `go test ./...` to fully pass. Change-Id: I0a1558da580f86751cd6fe69ebc24b7baf7ffc6a

Erick Tryzelaar

commit sha 57f21d14dffce615accfc2fd7405ad2f55c4b756

update go-tuf to store metadata version numbers in snapshots and timestamps In the latest 1.0 draft version of TUF, the "meta" object in snapshot.json and timestamp.json contain a version number, in addition to the hash of the objects. Adding this is one step towards getting go-tuf compatible with other 1.0 draft compatible clients. Test: Manually ran go-tuf's unit tests and they passed Bug: PKG-450 #done Change-Id: I206928c9a6ac87b6b51a82b68083d7746d6854cd

Erick Tryzelaar

commit sha f1da75eb3ba3e46eca7e774406dfca87e0c67135

Use gofmt to simplify the code Change-Id: I228d7335a6914dc9e90f6fdcc52e286893bddd70

Erick Tryzelaar

commit sha 67d8544bb5254317c6a7153f1d5172fe200e3c72

Extend ErrWrongLength with actual lengths This extends util.ErrWrongLength to include the expected and actual lengths. This was done to enable tests to verify the right sizes were being returned. Change-Id: I896239c2b90fe2772d60406db6f848ee7bbda9df

Erick Tryzelaar

commit sha efcc02747443c51b3ca6bf002495a6c1f87d0092

Fix checking if metadata version number is correct In TUF 1.0, the snapshot and timestamp metadata only requires that it contains the version number of the target files. To be backwards compatible, I added support to generate these files that includes the version number, but the logic to confirm if version matched was incorrect. It assumed that the version number was in the snapshot/timestamp metadata, and could error out if the snapshot/timestamp was generated by an older go-tuf that didn't insert the version numbers. This patch changes the check to only verify versions if the version number is present in the snapshot/timestamp. It also includes a number of tests to make sure the logic is correct. Change-Id: I97e5f5cdf847e3326027d58c7a191c033e70050b

Erick Tryzelaar

commit sha 20676406d9a9fa19c5ba2b582c7918a5c9eb49a9

include role name in ErrUnknownRole Change-Id: Iff184e98ce6335ef41b0e9c08547a9cc842f0afb

Erick Tryzelaar

commit sha 8cf0fe8e1cb50853cb10707365ddafd26aed60ef

Only increment metadata version once TUF 1.0 requires that the root data must always increase, and cannot skip numbers. This change makes sure that we only increment numbers once until we commit. Change-Id: I3fa54ea0b958e41d91c0a14f8a94febe5548a78c

Erick Tryzelaar

commit sha c586cfd4d65acf1e1404067b9bdf80ab192af124

Use metadata-specific FileMeta In TUF 1.0, each of the metadata roles have different rules for what needs to be in the metadata: * timestamp - version, optional length and hashes * snapshot - version, optional length, and required hashes for delegated roles * targets - hashes, length To make this easier to track in go-tuf, this creates a unique file for each role in order to ease implementing these different cases, and to let the type system protect against checking using the wrong test to verify if some download is correct. Change-Id: I944467694e0dc13edee4b64c46dc72ac38295fae

Erick Tryzelaar

commit sha 46780438c8b28e40882214552fca96d5cb46b4e4

Remove unused meta argument from LocalStore.Commit Change-Id: I084ad1cd5b98ebf04bc3dd1bd0679b78c188d9e6

Erick Tryzelaar

commit sha 384b5f9121e122402ce4fc03f4f12665d36e4180

Repo LocalStore should not mix staged and committed metadata The Repo struct is responsible for updating TUF metadata on the local system. It implements a staging model, where multiple metadata changes can be staged before comitting to the actual repository. This is to support the ability to use keys on isolated machines to sign the metadata. Unfortunately the LocalStore implementation mixed together the GetMeta method to return both committed and staged metadata, which resulted in tests getting confused about treating staged metadata as being committed, without actually committing the metadata. This patch changes this behavior in order to have separate APIs to read staged metadata when it is actually needed. Change-Id: Ic443f5b90963158fa569d2a40479f59cd7eea11a

Erick Tryzelaar

commit sha 10a5f67cd35b8d6afb248d263da86647b6753b26

Move python testdata into python-tuf-v0.9.9 This moves the python-tuf test metadata into a new namespaced directory, in order to avoid collisions with python-tuf v0.11.1 test metadata, which will be coming in a separate patch. Change-Id: Ic71d558a698f8f122d42b6f05874a0c37e7cb94c

Erick Tryzelaar

commit sha cce819cf203268c6b9747fe48513b755276d9ee8

Generate python-tuf 0.11.1 test metadata This patch generates test metadata with python-tuf 0.11.1, which is compatible with TUF 1.0. This will be used to eventually verify that go-tuf is compatible with TUF 1.0. Change-Id: If27176afd7b070adcfe7c15961a45a339b20a493

Erick Tryzelaar

commit sha 0ebb45e7d97ef65ad7165e2ae5ce6c90576f33c2

Support multiple key ids for keys The TUF standard allows for keys to have multiple key ids, such as if we wanted to support key ids being generated with sha256 and sha512. For our case, we want to be able to support both the TUF 0.9-style key generation (which only includes the key type and actual value), and the 1.0-style (which also mixes in the signing scheme into the key id) at the same time. Change-Id: I923a92fff490cbdc0f64600c3871512b34f05f04

Erick Tryzelaar

commit sha 2fbbd60ee12ffeb9f2bceeefb5896f9f52eadaef

support both TUF 0.9 and 1.0 key ids This extends go-tuf to return both the TUF-0.9 style key ids (which contain the key type and the key value), and the TUF-1.0 key ids (which also contain the signing scheme). This results in doubling the key ids in metadata, and doubling the number of signatures signing the metadata. The signed metadata just reuses the signature for both key ids, because it hasn't changed. Change-Id: I8b64dc4c3d0222578340530de7721aa855311f42

Erick Tryzelaar

commit sha c11e1acccb3fe624c3fce8080d83be07b7c138c9

Remove support for leading path separators in targets TUF 1.0 is considering banning leading path separators, because of poor behavior constructing paths across targets and delegates on local filesystems for certain libraries, like python, where `os.path.join("/foo", "/bar")` just returns "/bar". This patch migrates go-tuf to using and generating metadata without that leading separator. However, it temporarily enables consuming metadata with separators to enable a rolling upgrade to 1.0. Change-Id: I88a3a79f4d6f84a1521c7789c208e0bf00b08366

push time in 5 days
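The last commit message above cites Python's path-joining behavior as the motivation for banning leading path separators. A quick demonstration:

```python
import os.path

# A leading separator on the second component discards the first entirely,
# so a delegated target path of "/bar" would escape its repository prefix.
joined = os.path.join("/foo", "/bar")
print(joined)  # → /bar

# Without the leading separator the components compose as intended
# ("foo/bar" on POSIX systems).
print(os.path.join("foo", "bar"))
```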

PR closed theupdateframework/go-tuf

Delegation

The delegation branch implements delegation. Detailed instructions are added in the README file. This branch is developed based on the fuchsia branch, which is more up to date. This is my first time opening a pull request, so I am not sure if I'm sending the right request. I think you can choose to merge it into fuchsia or master, or even create a new branch.

+1264 -55

0 comment

20 changed files

SimonMen65

pr closed time in 5 days

delete branch theupdateframework/go-tuf

delete branch : fuchsia

delete time in 5 days

PR merged theupdateframework/go-tuf

Fuchsia

Merge Fuchsia fork with master.

Cc @erickt @raggi @joshuaseaton

+17841 -1395

10 comments

727 changed files

trishankatdatadog

pr closed time in 5 days

Pull request review comment php-tuf/php-tuf

Implement updating the root

 class Updater {

+    const MAX_ROOT_DOWNLOADS = 10000000;

Might be a bit excessive 🙂

tedbow

comment created time in 5 days
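The constant under review caps how many successive root metadata files a client will fetch, guarding against a server that serves endless "newer" roots. A minimal Python sketch of such a capped root-update loop (the names, the far smaller cap, and the elided per-version verification are all assumptions, not php-tuf's code):

```python
MAX_ROOT_DOWNLOADS = 1024  # a far smaller cap than the 10000000 under review

def update_root(trusted_version, fetch_root):
    """Fetch successive root metadata versions until none newer exists,
    or the download cap is hit.

    fetch_root(version) is a hypothetical hook returning metadata bytes
    or None; verifying each intermediate root against the previously
    trusted root's keys is elided in this sketch.
    """
    version = trusted_version
    for _ in range(MAX_ROOT_DOWNLOADS):
        nxt = fetch_root(version + 1)
        if nxt is None:
            return version   # no newer root published; done
        version += 1         # a real client verifies nxt before trusting it
    raise RuntimeError("too many root downloads; possible endless-data attack")
```

The cap exists precisely so a malicious repository cannot make the client loop (or download) forever, which is why "a bit excessive" matters.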

issue comment theupdateframework/tuf

Allow one to turn off Unicode support for target files

Agreed. I'm not convinced TUF should solve typosquatting.

JustinCappos

comment created time in 5 days

issue closed theupdateframework/tuf

How to download all files in a folder without knowing files' names

I can see the tutorial says to use client.py --repo http://localhost:8001 filename to download filename. What if I want to download all the files in a folder? I tried client.py --repo http://localhost:8001 repo/* but it doesn't work. Please help. Thanks.

closed time in 5 days

thangld322

issue comment theupdateframework/tuf

Allow one to turn off Unicode support for target files

Do we even support Unicode at all?

JustinCappos

comment created time in 5 days

issue closed theupdateframework/tuf

Delta Support

Please add support for binary delta updates to reduce bandwidth requirements for server admins who can't afford large running expenses.

closed time in 5 days

HulaHoopWhonix

issue comment theupdateframework/tuf

Delta Support

Do we think this idea is worth exploring two years later @trishankatdatadog or we want to close that issue?

Probably not. No one's asked for it yet. When they do, we will revisit. Thanks!

HulaHoopWhonix

comment created time in 5 days

started pypa/warehouse

started time in 5 days

issue comment theupdateframework/tuf

libeatmydata for running tests and installing dependencies under Travis CI

What do you think @trishankatdatadog @joshuagl @lukpueh ?

As long as our tests can detect failures induced by libeatmydata, this should be a harmless improvement to make. But if our tests run quickly enough now, it's probably not worthwhile, and we can close this, and revisit in the future.

vladimir-v-diaz

comment created time in 6 days

Pull request review comment theupdateframework/go-tuf

Delegation

 func (db *DB) Verify(s *data.Signed, role string, minVersion int) error {
 	if err := json.Unmarshal(s.Signed, sm); err != nil {
 		return err
 	}
-	if strings.ToLower(sm.Type) != strings.ToLower(role) {
-		return ErrWrongMetaType
+	if role == "root" || role == "targets" || role == "snapshot" || role == "timestamp" {

Use a switch statement here 🙂

SimonMen65

comment created time in 6 days

push event trishankatdatadog/vault-gpg-plugin

Trishank Karthik Kuppusamy

commit sha d487cdfd68430c1c49819676cc6ced158960e9b0

add subkey/signature expiration time Signed-off-by: Trishank Karthik Kuppusamy <trishank.kuppusamy@datadoghq.com>

push time in 6 days

push event trishankatdatadog/vault-gpg-plugin

Trishank Karthik Kuppusamy

commit sha 406bc1f6295abb8c324ec433301aa1b3afa59501

minor linguistic edits Signed-off-by: Trishank Karthik Kuppusamy <trishank.kuppusamy@datadoghq.com>

Trishank Karthik Kuppusamy

commit sha 77fb4c9740573b272e3bc1ae0f099da46bb4c9e3

add subkey/signature expiration time Signed-off-by: Trishank Karthik Kuppusamy <trishank.kuppusamy@datadoghq.com>

push time in 6 days

push event trishankatdatadog/vault-gpg-plugin

Trishank Karthik Kuppusamy

commit sha 4939d7be8737569e27b82779257d33d8a5afa448

First cut of HTTP API for subkeys Signed-off-by: Trishank Karthik Kuppusamy <trishank.kuppusamy@datadoghq.com>

push time in 8 days

issue opened trishankatdatadog/vault-gpg-plugin

Add support for subkeys

Wishlist:

  • [ ] Create Subkey
  • [ ] Read Subkey
  • [ ] List Subkeys
  • [ ] Update Subkey Configuration
  • [ ] Revoke Subkey
  • [ ] Delete Subkey
  • [ ] Sign Data with Subkey

created time in 8 days

delete branch DataDog/yubikey

delete branch : gpi/fix_reset

delete time in 9 days

push event DataDog/yubikey

Gaëtan

commit sha e4202f4cfd98538eb8244f1427985c2fdf1a0d97

Improve output and let ykman manage the FIDO reset (#54)

view details

push time in 9 days

PR merged DataDog/yubikey

Improve output and let ykman manage the FIDO reset
  • Improve output
  • Let ykman manage the FIDO reset, because you need to unplug, replug, and then touch the key. Before:
Usage: ykman fido reset [OPTIONS]
Try "ykman fido reset -h" for help.

Error: Reset failed. Reset must be triggered within 5 seconds after the YubiKey is inserted.

After:

WARNING! This will delete all FIDO credentials, including FIDO U2F credentials, and restore factory settings. Proceed? [y/N]: y
Remove and re-insert your YubiKey to perform the reset...
Touch your YubiKey...
+10 -8

0 comment

1 changed file

daisukixci

pr closed time in 9 days

delete branch DataDog/yubikey

delete branch : gpi/fix_reset

delete time in 9 days

push event DataDog/yubikey

daisukixci

commit sha e7ef145425419405aed385b508521d85fdbc9b77

Fix #52

Trishank Karthik Kuppusamy

commit sha 9cf9efba28c1c89bb82efde440b799a81b0fedea

Merge pull request #53 from DataDog/gpi/fix_reset Fix #52

push time in 9 days

issue closed DataDog/yubikey

reset.sh not working

My YubiKey seems to suddenly have no GPG key on it, so I tried to run git.sh and got this:

Setting your git-config global user.name...
Setting your git-config global user.email...
Setting git to use this GPG key globally.
Also, turning on signing of all commits and tags by default.

Exporting your GPG public key to GitHub.
gpg: WARNING: nothing exported
It has been copied to your clipboard.
You may now add it to GitHub: https://github.com/settings/gpg/new
Opening GitHub...

and sure enough, nothing was exported!

So I tried to run reset.sh and ...

1) all
2) 10350924
3) cancel
#? 2
You chose 10350924
Are you sure you want to reset 10350924 ? yes/no

Reset 10350924
./reset.sh: line 27: serial: command not found
Usage: ykman [OPTIONS] COMMAND [ARGS]...
Try "ykman -h" for help.

Error: Invalid value for "-d" / "--device":  is not a valid integer
./reset.sh: line 28: serial: command not found
Usage: ykman [OPTIONS] COMMAND [ARGS]...
Try "ykman -h" for help.

Error: Invalid value for "-d" / "--device":  is not a valid integer
./reset.sh: line 29: serial: command not found
Usage: ykman [OPTIONS] COMMAND [ARGS]...
Try "ykman -h" for help.

Error: Invalid value for "-d" / "--device":  is not a valid integer
./reset.sh: line 30: serial: command not found
Usage: ykman [OPTIONS] COMMAND [ARGS]...
Try "ykman -h" for help.

Error: Invalid value for "-d" / "--device":  is not a valid integer
./reset.sh: line 31: serial: command not found
Usage: ykman [OPTIONS] COMMAND [ARGS]...
Try "ykman -h" for help.

Error: Invalid value for "-d" / "--device":  is not a valid integer
./reset.sh: line 32: serial: command not found
Usage: ykman [OPTIONS] COMMAND [ARGS]...
Try "ykman -h" for help.

Error: Invalid value for "-d" / "--device":  is not a valid integer

closed time in 9 days

andrewwatson

PR merged DataDog/yubikey

Fix #52

Fix syntax error that was trying to execute a serial string

+6 -6

0 comment

1 changed file

daisukixci

pr closed time in 9 days

PullRequestReviewEvent

issue comment DataDog/yubikey

reset.sh not working

card status looks promising

I'm not so sure: it doesn't look like anything's on there. This doesn't look like what you'd see on an empty card (e.g., the PIN retry counter should be something like 3 0 3).

andrewwatson

comment created time in 9 days

issue comment DataDog/yubikey

reset.sh not working

Try restarting scdaemon or something, or better yet, do a gpg --card-status on another computer (you won't see the full key info w/o first importing the pubkey tho)

On Mon, Sep 21, 2020 at 12:12 PM Andrew Watson notifications@github.com wrote:

Yeah, the key lights up when I insert it but I hope it hasn't failed! I'll have to check on ykman. I know I haven't updated it in a long time...

— You are receiving this because you commented. Reply to this email directly, view it on GitHub https://github.com/DataDog/yubikey/issues/52#issuecomment-696217484, or unsubscribe https://github.com/notifications/unsubscribe-auth/AH4ZEEIFLFOJCZSPUJHMX2LSG53O3ANCNFSM4RUSYL5A .

andrewwatson

comment created time in 9 days

issue comment DataDog/yubikey

reset.sh not working

Strange, the missing GPG key could be some temporary software issue or, worse, a permanent hardware failure. Also, have you checked whether your ykman version supports this flag?

andrewwatson

comment created time in 9 days

Pull request review comment theupdateframework/taps

Add TAP introducing snapshot Merkle trees

+* TAP:+* Title: Snapshot Merkle Trees+* Version: 0+* Last-Modified: 17/09/2020+* Author: Marina Moore, Justin Cappos+* Type: Standardization+* Status: Draft+* Content-Type: markdown+* Created: 14/09/2020+* +TUF-Version:+* +Post-History:++ # Abstract++ To optimize the snapshot metadata file size for large registries, registries+ can use a snapshot Merkle tree to conceptually store version information about+ all images in a single snapshot without needing to distribute this entire+ snapshot to all clients. First, the client retrieves only a timestamp file,+ which changes according to some period p (such as every day or week). Second,+ the snapshot file is itself kept as a Merkle tree, with the root stored in+ timestamp metadata. This snapshot file is broken into a file for each target+ that contains the Merkle tree leaf with information about that target and a+ path to the root of the Merkle tree. A new snapshot Merkle tree is generated+ every time a new timestamp is generated. To prove that there has not been a+ reversion of the snapshot Merkle tree when downloading an image, the client+ and third-party auditors download the prior snapshot Merkle trees and check+ that the version numbers did not decrease at any point. To make this scalable+ as the number of timestamps increases, the client will only download version+ information signed by the current timestamp file. Thus, rotating this key+ enables the registry to discard old snapshot Merkle tree data.++The feature described in this TAP does not need to be implemented by all TUF+implementations. It is an option for any adopter who is interested in the+benefits provided by this feature, but may not make sense for implementations+with fewer target files.++# Motivation++For very large repositories, the snapshot metadata file could get very large.+This snapshot metadata file must be downloaded on every update cycle, and so+could significantly impact the metadata overhead. 
For example, if a repository+has 50,000,000 targets, the snapshot metadata will be about 380,000,000 bytes+(https://docs.google.com/spreadsheets/d/18iwWnWvAAZ4In33EWJBgdAWVFE720B_z0eQlB4FpjNc/edit?ts=5ed7d6f4#gid=0).+For this reason, it is necessary to create a more scalable solution for snapshot+metadata that does not significantly impact the security properties of TUF.++We designed a new approach to snapshot that improves scalability while+achieving similar security properties to the existing snapshot metadata+++# Rationale++Snapshot metadata provides a consistent view of the repository in order to+protect against mix-and-match attacks and rollback attacks. In order to provide+these protections, snapshot metadata is responsible for keeping track of the+version number of each target file, ensuring that all targets downloaded are+from the same snapshot, and ensuring that no target file decreases its version+number (except in the case of fast forward attack recovery). Any new solution+we develop must provide these same protections.++A snapshot Merkle tree manages version information for each target by including+this information in each leaf node. By using a Merkle tree to store these nodes,+this proposal can cryptographically verify that different targets are from the+same snapshot by ensuring that the Merkle tree roots match. Due to the+properties of secure hash functions, any two leaves of a Merkle tree with the+same root are from the same tree.++In order to prevent rollback attacks between Merkle trees, this proposal+introduces third-party auditors. These auditors are responsible for downloading+all nodes of each Merkle tree to ensure that no version numbers have decreased+between generated trees. This achieves rollback protection without every client+having to store the version information for every target.++# Specification++This proposal replaces the single snapshot metadata file with a snapshot Merkle+metadata file for each target. 
The repository generates these snapshot Merkle+metadata files by building a Merkle tree using all target files and storing the+path to each target in the snapshot Merkle metadata. The root of this Merkle+tree is stored in timestamp metadata to allow for client verification. The+client uses the path stored in the snapshot Merkle metadata for a target, along+with the root of the Merkle tree, to ensure that metadata is from the given+Merkle tree. The details of these files and procedures are described in+this section.++![Diagram of snapshot Merkle tree](merkletap-1.jpg)++## Merkle tree generation++When the repository generates snapshot metadata, instead of putting the version+information for all targets into a single file, it instead uses the version+information to generate a Merkle tree.  Each target’s version information forms+a leaf of the tree, then these leaves are used to build a Merkle tree. The+internal nodes of a Merkle tree contain the hash of the leaf nodes. The exact+algorithm for generating this Merkle tree (ie the order of leaves in the hash,+how version information is encoded), is left to the implementer, but this+algorithm should be documented in a POUF so that implementations can be+compatible and correctly verify Merkle tree data. However, all implementations+should meet the following requirements:+* Leaf nodes must be unique. A unique identifier of the target, such as the+filepath or hash must be included in the leaf data to ensure that no two leaf+node hashes are the same.+* The tree must be a Merkle tree. Each internal node must contain a hash that+includes both leaf nodes.++Once the Merkle tree is generated, the repository must create a snapshot Merkle+metadata file for each target. This file must contain the leaf contents and+the path to the root of the Merkle tree. 
This path must contain the hashes of+sibling nodes needed to reconstruct the tree during verification (see diagram).+In addition the path should contain direction information so that the client+will know whether each node is a left or right sibling when reconstructing the+tree.++This information will be included in the following metadata format:+```+{ “leaf_contents”: {METAFILES},+  “Merkle_path”: {INDEX:HASH}+  “path_directions”:{INDEX:DIR}+}+```++Where `METAFILES` is the version information as defined for snapshot metadata,+`INDEX` provides the ordering of nodes, `HASH` is the hash of the sibling node,+and `DIR` indicates whether the given node is a left or right sibling.++In addition, the following optional field will be added to timestamp metadata.+If this field is included, the client should use snapshot Merkle metadata to+verify updates instead:++```+("merkle_root": ROOT_HASH)+```++Where `ROOT_HASH` is the hash of the Merkle tree root.++Note that snapshot Merkle metadata files do not need to be signed by a snapshot+key because the path information will be verified based on the Merkle root+provided in timestamp. Removing these signatures will provide additional space+savings for clients.++## Merkle tree verification++If a client sees the `merkle_root` field in timestamp metadata, they will use+the snapshot Merkle metadata to check version information. If this field is+present, the client will download the snapshot Merkle metadata file only for+the target the client is attempting to update. The client will verify the+snapshot Merkle metadata file by reconstructing the Merkle tree and comparing+the computed root hash to the hash provided in timestamp metadata. If the+hashes do not match, the snapshot Merkle metadata is invalid. 
Otherwise, the+client will use the version information in the verified snapshot Merkle+metadata to proceed with the update.++For additional rollback protection, the client may download previous versions+of the snapshot Merkle metadata for the given target file. After verifying+these files, the client should compare the version information in the previous+Merkle trees to the information in the current Merkle tree to ensure that the+version numbers have never decreased. In order to allow for fast forward attack+recovery (discussed further in Security Analysis), the client should only+download previous versions that were signed with the same timestamp key.++## Auditing Merkle trees++In order to ensure the validity of all target version information in the+Merkle tree, third-party auditors should validate the entire tree each time it+is updated. Auditors should download every snapshot Merkle file, verify the+paths, check the root hash against the hash provided in timestamp metadata,+and ensure that the version information has not decreased for each target.+Alternatively, the repository may provide auditors with information about the+contents and ordering of leaf nodes so that the auditors can more efficiently+verify the entire tree.++Auditors may provide an additional signature for timestamp metadata that+indicates that they have verified the contents of the Merkle tree whose root+is in that timestamp file. Using this signature, clients can check whether a+particular third party has approved the Merkle tree.++## Garbage collection+When a threshold of timestamp keys are revoked and replaced, the repository no+longer needs to store snapshot Merkle files signed by the previous timestamp+key. Replacing the timestamp key is an opportunity for fast forward attack+recovery, and so all version information from before the replacement is no+longer valid. 
At this point, the repository may garbage collect all snapshot+Merkle metadata files.++# Security Analysis++This proposal impacts the snapshot metadata, so this section will discuss the+attacks that are mitigated by snapshot metadata in TUF.++## Rollback attack++In the event that the timestamp key is compromised, an attacker may provide an+invalid Merkle tree that contains a previous version of a target. This attack+is prevented by both the client’s verification and by auditors. When the client+verifies previous versions of the snapshot Merkle metadata for a target, they+ensure that the version number of that target has not decreased. However, if+the attacker controls the timestamp key(s) and the repository, the previous+snapshot Merkle metadata downloaded by the client may also be invalid. To+protect against this case, third party auditors store the previous version of+all metadata, and will detect when the version number decreases in a new+Merkle tree. As long as the client checks for an auditor’s verification, the+client will not install the rolled-back version of the target.++## Fast forward attack++If an attacker is able to compromise the timestamp key, they may arbitrarily+increase the version number of a target in the snapshot Merkle metadata. If+they increase it to a sufficiently large number (say the maximum integer value),+the client will not accept any future version of the target as the version+number will be below the previous version. To recover from this attack,+auditors and clients should not check version information from before a+timestamp key replacement. This allows a timestamp key replacement to be used+as a reset after a fast forward attack. The existing system handles fast+forward attack recovery in a similar manner, by instructing clients to delete+stored version information after a timestamp key replacement.++## Mix and match attack

Yes, but I think it's important to clarify that although mix-and-match attacks by MitM attackers are not possible, they are possible for attackers who control the timestamp and snapshot keys.

mnm678

comment created time in 13 days

Pull request review comment theupdateframework/taps

Add TAP introducing snapshot Merkle trees

+# Security Analysis

I like the descriptions and analysis of these attacks, although I think they could be made clearer to non-experts by illustrating each with a short, concrete example. WDYT?

mnm678

comment created time in 13 days

Pull request review comment theupdateframework/taps

Add TAP introducing snapshot Merkle trees

+algorithm should be documented in a POUF so that implementations can be
algorithm should be documented in a [POUF](https://github.com/theupdateframework/taps/blob/master/tap11.md) so that implementations can be
mnm678

comment created time in 13 days

PullRequestReviewEvent
PullRequestReviewEvent
PullRequestReviewEvent

issue comment DataDog/yubikey

Q: How to setup on multiple machines?

@trishankatdatadog will add a script to do it or in documentation

Is that an order or a notice? 😅

donferi

comment created time in 13 days

issue comment secure-systems-lab/signing-spec

What is secure-systems-lab/signing-spec?

Will signing-spec recommend a particular on-disk format?

I don't think we should. I think we should avoid even recommending specifics such as files and metadata formats. Those can be discussed in addenda similar to POUFs.

lukpueh

comment created time in 13 days

Pull request review comment DataDog/datadog-agent

Add new config param to allow GroupExec perm for the SecretBackend command

 import (
 	"syscall"
 )
 
-func checkRights(path string) error {
+func checkRights(path string, allowGroupExec bool) error {
 	var stat syscall.Stat_t
 	if err := syscall.Stat(path, &stat); err != nil {
 		return fmt.Errorf("invalid executable '%s': can't stat it: %s", path, err)
 	}
 
-	// checking that group and others don't have any rights
-	if stat.Mode&(syscall.S_IRWXG|syscall.S_IRWXO) != 0 {
-		return fmt.Errorf("invalid executable '%s', 'groups' or 'others' have rights on it", path)
+	// get information about current user
+	usr, err := user.Current()
+	if err != nil {
+		return fmt.Errorf("can't query current user UID: %s", err)
 	}
 
-	// checking that the owner have exec rights
-	if stat.Mode&syscall.S_IXUSR == 0 {
-		return fmt.Errorf("invalid executable: '%s' is not executable", path)
+	if !allowGroupExec {
+		return checkUserPermission(&stat, usr, path)
 	}
 
-	// checking that we own the executable
-	usr, err := user.Current()
+	userGroups, err := usr.GroupIds()
 	if err != nil {
 		return fmt.Errorf("can't query current user UID: %s", err)
 	}
+	return checkGroupPermission(&stat, usr, userGroups, path)
+}
 
-	// checking we own the executable. This is useless since we won't be able
-	// to execute it if not, but it gives a better error message to the
-	// user.
+// checkUserPermission check that only the current User can exec and own the file path
+func checkUserPermission(stat *syscall.Stat_t, usr *user.User, path string) error {
 	if fmt.Sprintf("%d", stat.Uid) != usr.Uid {
-		return fmt.Errorf("invalid executable: '%s' isn't owned by the user running the agent: name '%s', UID %s. We can't execute it", path, usr.Username, usr.Uid)
+		return fmt.Errorf("invalid executable: '%s' isn't owned by this user: username '%s', UID %s. We can't execute it", path, usr.Username, usr.Uid)
+	}
+
+	// checking that the owner have exec rights
+	if stat.Mode&syscall.S_IXUSR == 0 {
+		return fmt.Errorf("invalid executable: '%s' is not executable", path)
+	}
+
+	// If *user* executable, user can RWX, and nothing else for anyone.
+	if stat.Mode&(syscall.S_IRWXG|syscall.S_IRWXO) != 0 {
+		return fmt.Errorf("invalid executable '%s', 'group' or 'others' have rights on it", path)
+	}
+
+	return nil
+}
+
+// checkUserPermission check that only the current User or one of his group can exec the path

Suggested correction to the doc comment:

// checkGroupPermission check that only the current User or one of his group can exec the path
clamoriniere

comment created time in 14 days

Pull request review comment: DataDog/datadog-agent

Add new config param to allow GroupExec perm for the SecretBackend command

 import (
 	"syscall"
 )
 
-func checkRights(path string) error {
+func checkRights(path string, allowGroupExec bool) error {
 	var stat syscall.Stat_t
 	if err := syscall.Stat(path, &stat); err != nil {
 		return fmt.Errorf("invalid executable '%s': can't stat it: %s", path, err)
 	}
 
-	// checking that group and others don't have any rights
-	if stat.Mode&(syscall.S_IRWXG|syscall.S_IRWXO) != 0 {
-		return fmt.Errorf("invalid executable '%s', 'groups' or 'others' have rights on it", path)
+	// get information about current user
+	usr, err := user.Current()
+	if err != nil {
+		return fmt.Errorf("can't query current user UID: %s", err)
 	}
 
-	// checking that the owner have exec rights
-	if stat.Mode&syscall.S_IXUSR == 0 {
-		return fmt.Errorf("invalid executable: '%s' is not executable", path)
+	if !allowGroupExec {
+		return checkUserPermission(&stat, usr, path)
 	}
 
-	// checking that we own the executable
-	usr, err := user.Current()
+	userGroups, err := usr.GroupIds()
 	if err != nil {
 		return fmt.Errorf("can't query current user UID: %s", err)
 	}
+	return checkGroupPermission(&stat, usr, userGroups, path)
+}
 
-	// checking we own the executable. This is useless since we won't be able
-	// to execute it if not, but it gives a better error message to the
-	// user.
+// checkUserPermission check that only the current User can exec and own the file path
+func checkUserPermission(stat *syscall.Stat_t, usr *user.User, path string) error {
 	if fmt.Sprintf("%d", stat.Uid) != usr.Uid {
-		return fmt.Errorf("invalid executable: '%s' isn't owned by the user running the agent: name '%s', UID %s. We can't execute it", path, usr.Username, usr.Uid)
+		return fmt.Errorf("invalid executable: '%s' isn't owned by this user: username '%s', UID %s. We can't execute it", path, usr.Username, usr.Uid)
+	}
+
+	// checking that the owner have exec rights
+	if stat.Mode&syscall.S_IXUSR == 0 {
+		return fmt.Errorf("invalid executable: '%s' is not executable", path)
+	}
+
+	// If *user* executable, user can RWX, and nothing else for anyone.
+	if stat.Mode&(syscall.S_IRWXG|syscall.S_IRWXO) != 0 {
+		return fmt.Errorf("invalid executable '%s', 'group' or 'others' have rights on it", path)
+	}
+
+	return nil
+}
+
+// checkUserPermission check that only the current User or one of his group can exec the path
+func checkGroupPermission(stat *syscall.Stat_t, usr *user.User, userGroups []string, path string) error {
+	var isUserFile bool
+	if fmt.Sprintf("%d", stat.Uid) == usr.Uid {
+		isUserFile = true
+	}
+	// If the file is not own by the user, lets check for on of his groups
+	if !isUserFile {

If isUserFile, you might also wish to check that the user can exec the file (i.e. that stat.Mode&syscall.S_IXUSR == 0 is not an issue, as in checkUserPermission)

clamoriniere

comment created time in 13 days


Pull request review comment: DataDog/datadog-agent

Add new config param to allow GroupExec perm for the SecretBackend command

 func setCorrectRight(path string) {
 }
 
 func TestWrongPath(t *testing.T) {
-	require.NotNil(t, checkRights("does not exists"))
+	require.NotNil(t, checkRights("does not exists", false))
 }
 
 func TestGroupOtherRights(t *testing.T) {
 	tmpfile, err := ioutil.TempFile("", "agent-collector-test")
 	require.Nil(t, err)
 	defer os.Remove(tmpfile.Name())
 
+	allowGroupExec := false
+
 	// file exists
-	require.NotNil(t, checkRights("/does not exists"))
+	require.NotNil(t, checkRights("/does not exists", allowGroupExec))
 
 	require.Nil(t, os.Chmod(tmpfile.Name(), 0700))
-	require.Nil(t, checkRights(tmpfile.Name()))
+	require.Nil(t, checkRights(tmpfile.Name(), allowGroupExec))
 
 	// we should at least be able to execute it
 	require.Nil(t, os.Chmod(tmpfile.Name(), 0100))
-	require.Nil(t, checkRights(tmpfile.Name()))
+	require.Nil(t, checkRights(tmpfile.Name(), allowGroupExec))
 
-	// owner have exec right
+	// owner have R&W but not X permission
 	require.Nil(t, os.Chmod(tmpfile.Name(), 0600))
-	require.NotNil(t, checkRights(tmpfile.Name()))
+	require.NotNil(t, checkRights(tmpfile.Name(), allowGroupExec))
 
 	// group should have no right
 	require.Nil(t, os.Chmod(tmpfile.Name(), 0710))
-	require.NotNil(t, checkRights(tmpfile.Name()))
+	require.NotNil(t, checkRights(tmpfile.Name(), allowGroupExec))
 
 	// other should have no right
 	require.Nil(t, os.Chmod(tmpfile.Name(), 0701))
-	require.NotNil(t, checkRights(tmpfile.Name()))
+	require.NotNil(t, checkRights(tmpfile.Name(), allowGroupExec))
+
+	allowGroupExec = true
+
+	// even if allowGroupExec=true, group may have no permission
+	require.Nil(t, os.Chmod(tmpfile.Name(), 0700))
+	require.Nil(t, checkRights(tmpfile.Name(), allowGroupExec))
+
+	// group can have read and exec permission
+	require.Nil(t, os.Chmod(tmpfile.Name(), 0750))
+	require.Nil(t, checkRights(tmpfile.Name(), allowGroupExec))
+
+	// group should not have write right
+	require.Nil(t, os.Chmod(tmpfile.Name(), 0770))
+	require.NotNil(t, checkRights(tmpfile.Name(), allowGroupExec))
+
+	// other should have no right

Consider adding a separate 0702 test here (others can write)

clamoriniere

comment created time in 13 days

Pull request review comment: DataDog/datadog-agent

Add new config param to allow GroupExec perm for the SecretBackend command

 func setCorrectRight(path string) {
 }
 
 func TestWrongPath(t *testing.T) {
-	require.NotNil(t, checkRights("does not exists"))
+	require.NotNil(t, checkRights("does not exists", false))
 }
 
 func TestGroupOtherRights(t *testing.T) {
 	tmpfile, err := ioutil.TempFile("", "agent-collector-test")
 	require.Nil(t, err)
 	defer os.Remove(tmpfile.Name())
 
+	allowGroupExec := false
+
 	// file exists
-	require.NotNil(t, checkRights("/does not exists"))
+	require.NotNil(t, checkRights("/does not exists", allowGroupExec))
 
 	require.Nil(t, os.Chmod(tmpfile.Name(), 0700))
-	require.Nil(t, checkRights(tmpfile.Name()))
+	require.Nil(t, checkRights(tmpfile.Name(), allowGroupExec))
 
 	// we should at least be able to execute it
 	require.Nil(t, os.Chmod(tmpfile.Name(), 0100))
-	require.Nil(t, checkRights(tmpfile.Name()))
+	require.Nil(t, checkRights(tmpfile.Name(), allowGroupExec))
 
-	// owner have exec right
+	// owner have R&W but not X permission
 	require.Nil(t, os.Chmod(tmpfile.Name(), 0600))
-	require.NotNil(t, checkRights(tmpfile.Name()))
+	require.NotNil(t, checkRights(tmpfile.Name(), allowGroupExec))
 
 	// group should have no right
 	require.Nil(t, os.Chmod(tmpfile.Name(), 0710))

Consider adding a separate 0720 test here (group can write)

clamoriniere

comment created time in 13 days

Pull request review comment: DataDog/datadog-agent

Add new config param to allow GroupExec perm for the SecretBackend command

 var (
 
 // checkRights check that the given filename has access controls set only for
 // Administrator, Local System and the datadog user.
-func checkRights(filename string) error {
+func checkRights(filename string, _ bool) error {

Maybe instead of ignoring the bool here, return an error if it is true?

clamoriniere

comment created time in 14 days

Pull request review comment: DataDog/datadog-agent

Add new config param to allow GroupExec perm for the SecretBackend command

 func setCorrectRight(path string) {
 }
 
 func TestWrongPath(t *testing.T) {
-	require.NotNil(t, checkRights("does not exists"))
+	require.NotNil(t, checkRights("does not exists", false))
 }
 
 func TestGroupOtherRights(t *testing.T) {
 	tmpfile, err := ioutil.TempFile("", "agent-collector-test")
 	require.Nil(t, err)
 	defer os.Remove(tmpfile.Name())
 
+	allowGroupExec := false
+
 	// file exists
-	require.NotNil(t, checkRights("/does not exists"))
+	require.NotNil(t, checkRights("/does not exists", allowGroupExec))
 
 	require.Nil(t, os.Chmod(tmpfile.Name(), 0700))
-	require.Nil(t, checkRights(tmpfile.Name()))
+	require.Nil(t, checkRights(tmpfile.Name(), allowGroupExec))
 
 	// we should at least be able to execute it
 	require.Nil(t, os.Chmod(tmpfile.Name(), 0100))
-	require.Nil(t, checkRights(tmpfile.Name()))
+	require.Nil(t, checkRights(tmpfile.Name(), allowGroupExec))
 
-	// owner have exec right
+	// owner have R&W but not X permission
 	require.Nil(t, os.Chmod(tmpfile.Name(), 0600))
-	require.NotNil(t, checkRights(tmpfile.Name()))
+	require.NotNil(t, checkRights(tmpfile.Name(), allowGroupExec))
 
 	// group should have no right
 	require.Nil(t, os.Chmod(tmpfile.Name(), 0710))
-	require.NotNil(t, checkRights(tmpfile.Name()))
+	require.NotNil(t, checkRights(tmpfile.Name(), allowGroupExec))
 
 	// other should have no right
 	require.Nil(t, os.Chmod(tmpfile.Name(), 0701))

Consider adding a separate 0702 test here (others can write)

clamoriniere

comment created time in 13 days
