cloudera/livy 897

Livy is an open source REST interface for interacting with Apache Spark from anywhere

cadencemarseille/rust-pcre 18

Rust wrapper for libpcre

erickt/bonjour-forwarder 11

An example on how to have bonjour talk to zeromq

brson/llvm 7

Temporary fork of LLVM for Rust

erickt/celestia 4

Mirror of celestia

erickt/cargo-central 3

rust-lang.org maintained cargo repository

erickt/cchashmap 3

Rust implementation of the Cache Conscious HashMap

erickt/couchrest 2

A RESTful CouchDB client based on Heroku's RestClient and Couch.js

erickt/advisory-db 1

Security advisory database for Rust crates published through crates.io

brson/clang 0

Mirror of official clang git repository located at http://llvm.org/git/clang. Updated hourly.

push event heartsucker/rust-tuf

Kevin Wells

commit sha ecd7431a0bf28e607ba71d65f30d46404f3d1f0f

Support unknown fields in signed metadata

Per the TUF spec, all metadata formats "include the ability to add more attribute-value fields for backwards-compatible format changes." So, if the signed portion of some metadata contains unexpected fields, those fields must be preserved until after the signature of the metadata has been verified. This change:

  • replaces the parsed metadata in the SignedMetadata struct with a D::RawData, which will preserve unknown fields.
  • changes the verify method on SignedMetadata to produce the fully parsed metadata without any attached signatures, as those are no longer able to verify the parsed metadata.

Fixed: #223

Kevin Wells

commit sha 81b100f0defb7714e81290cbfea4ab9853b45600

Test that signatures respect unknown fields

Kevin Wells

commit sha c1f01d192da695fec59131447fb4718bdf39f929

Remove MetadataMetadata type

Erick Tryzelaar

commit sha 7c6f1d1957452edcaf62fd433feb6f796ec818cd

Merge pull request #282 from wellsie1116/signature-preserves-unknown-fields

Signature preserves unknown fields

push time in 9 days

PR merged heartsucker/rust-tuf

Signature preserves unknown fields

Per the TUF spec, all metadata formats "include the ability to add more attribute-value fields for backwards-compatible format changes." So, if the signed portion of some metadata contains unexpected fields, those fields must be preserved until after the signature of the metadata has been verified.

These changes:

  • replace the parsed metadata in the SignedMetadata struct with a D::RawData, which will preserve unknown fields.
  • change the verify method on SignedMetadata to produce the fully parsed metadata without any attached signatures, as those are no longer able to verify the parsed metadata.
  • verify that signatures of metadata containing unknown fields verify correctly.

Fixed: #223

+284 -139

0 comments

5 changed files

wellsie1116

pr closed time in 9 days

issue closed heartsucker/rust-tuf

rust-tuf should preserve unknown fields in metadata

While the TUF spec explicitly calls out some areas where metadata can be customized (like the target custom field), it does not forbid metadata from containing other non-standard fields. For example, python-tuf adds a keyid_hash_algorithms field to the root key structure, which is incorporated into that key's key id. rust-tuf currently deserializes metadata into a Rust struct, then re-serializes it into cjson in order to validate the metadata. Unfortunately, this means we throw away unknown fields and will therefore fail to validate the signature.

To fix this, I think we have two options:

  1. Rather than directly deserializing some signed metadata like the root metadata into:
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
pub struct SignedMetadata<D, M> {
    signatures: Vec<Signature>,
    #[serde(rename = "signed")]
    metadata: M,
    #[serde(skip_serializing, skip_deserializing)]
    _interchange: PhantomData<D>,
}

Instead, we should deserialize into a type that stores the signed metadata in a serde_json::Value, which can represent any JSON value. So something like this:

#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
pub struct SignedMetadata<D, M> {
    signatures: Vec<Signature>,
    signed: serde_json::Value,
    #[serde(skip_serializing, skip_deserializing)]
    metadata: Option<M>,
    #[serde(skip_serializing, skip_deserializing)]
    _interchange: PhantomData<D>,
}

This structure is then what is serialized to disk, and the signed portion is used to validate the signature. Once validated, the signed portion can be deserialized into the in-memory metadata field for further use.

  2. We could annotate all structs with #[serde(flatten)] to capture unknown values into a side structure. This should preserve those values so that validation passes, but it has the side effect of adding a lot of fields, and it would be easy to forget to capture or propagate these fields everywhere, so I think option 1 is the more robust implementation.
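The failure mode motivating both options can be shown without serde at all: when a signature covers the exact serialized bytes, parsing into a struct that only knows some fields and then re-serializing produces different bytes. A minimal std-only sketch (the key=value format and the checksum-style `sign` are illustrative stand-ins, not cjson or real signatures):

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::BTreeMap;
use std::hash::{Hash, Hasher};

// Stand-in "signature": a deterministic hash over the exact serialized bytes.
fn sign(bytes: &str) -> u64 {
    let mut h = DefaultHasher::new();
    bytes.hash(&mut h);
    h.finish()
}

// A parser that only keeps the fields it knows about ("version"),
// silently dropping everything else.
fn parse_known(raw: &str) -> BTreeMap<String, String> {
    raw.split(';')
        .filter_map(|kv| kv.split_once('='))
        .filter(|(k, _)| *k == "version")
        .map(|(k, v)| (k.to_string(), v.to_string()))
        .collect()
}

fn serialize(fields: &BTreeMap<String, String>) -> String {
    fields
        .iter()
        .map(|(k, v)| format!("{}={}", k, v))
        .collect::<Vec<_>>()
        .join(";")
}
```

Option 1 amounts to keeping the raw bytes around until after the signature check; option 2 amounts to making the parser keep every field it encounters.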

closed time in 9 days

erickt

Pull request review comment heartsucker/rust-tuf

Signature preserves unknown fields

 where
                 break;
             }
         }
-
-        if signatures_needed == 0 {
-            Ok(())
-        } else {
-            Err(Error::VerificationFailure(format!(
+        if signatures_needed > 0 {
+            return Err(Error::VerificationFailure(format!(
                 "Signature threshold not met: {}/{}",
                 threshold - signatures_needed,
                 threshold
-            )))
+            )));
         }
+
+        // "assume" the metadata is valid because we just verified that it is.
+        self.assume_valid()
     }
 }
 
-impl<D, M> AsRef<M> for SignedMetadata<D, M> {
-    fn as_ref(&self) -> &M {
-        &self.metadata
+impl<D> SignedMetadata<D, RootMetadata>
+where
+    D: DataInterchange,
+{
+    /// Parse common metadata fields from this metadata without verifying signatures.
+    ///
+    /// This operation is generally unsafe to do with metadata obtained from an untrusted source,
+    /// but rolling forward to the most recent root.json requires using the version number of the
+    /// latest root.json.
+    pub fn untrusted_info(&self) -> Result<MetadataMetadata<RootMetadata>> {

I don't think this needs to be public; you didn't use it in any of the tests.

wellsie1116

comment created time in 9 days

Pull request review comment heartsucker/rust-tuf

Signature preserves unknown fields

 where
                 break;
             }
         }
-
-        if signatures_needed == 0 {
-            Ok(())
-        } else {
-            Err(Error::VerificationFailure(format!(
+        if signatures_needed > 0 {
+            return Err(Error::VerificationFailure(format!(
                 "Signature threshold not met: {}/{}",
                 threshold - signatures_needed,
                 threshold
-            )))
+            )));
         }
+
+        // "assume" the metadata is valid because we just verified that it is.
+        self.assume_valid()
     }
 }
 
-impl<D, M> AsRef<M> for SignedMetadata<D, M> {
-    fn as_ref(&self) -> &M {
-        &self.metadata
+impl<D> SignedMetadata<D, RootMetadata>
+where
+    D: DataInterchange,
+{
+    /// Parse common metadata fields from this metadata without verifying signatures.
+    ///
+    /// This operation is generally unsafe to do with metadata obtained from an untrusted source,
+    /// but rolling forward to the most recent root.json requires using the version number of the
+    /// latest root.json.
+    pub fn untrusted_info(&self) -> Result<MetadataMetadata<RootMetadata>> {
+        D::deserialize(&self.metadata)
     }
 }
 
-impl<D, M> Metadata for SignedMetadata<D, M>
+/// Metadata common to all signed metadata files.
+#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
+pub struct MetadataMetadata<M> {

This doesn't need to be public either.

I don't think we need this either. We could instead make parse_version_untrusted pub(crate) and use it to extract the version number.

wellsie1116

comment created time in 9 days

push event heartsucker/rust-tuf

Kevin Wells

commit sha c696b6a9520b69fdea70ba8e111153dc42686f3f

Remove unused signatures from tuf::Tuf

Erick Tryzelaar

commit sha da8e00230da06b4a077ff57a9f524a2fb96b2d7d

Merge pull request #281 from wellsie1116/unused-signatures

Remove unused signatures from tuf::Tuf

push time in 9 days

PR merged heartsucker/rust-tuf

Remove unused signatures from tuf::Tuf

Before SignedMetadata can store the D::RawData of the inner metadata (in order to verify signatures over metadata that contains unknown fields), tuf::Tuf needs to store the metadata in a form it will be able to use. Once it verifies the signatures of the metadata, it can fully parse it and store just the parsed inner metadata.

+40 -73

0 comments

3 changed files

wellsie1116

pr closed time in 9 days

push event heartsucker/rust-tuf

Kevin Wells

commit sha b971a4ccf1851a60ec696365356c45805c1dd3cb

Split read/write half of Repository trait

A TUF client has no need to store metadata in a remote repository, and some repository implementations cannot implement the store_* methods, so this change splits the Repository trait, defining it as a supertrait of RepositoryProvider (the read half) and RepositoryStorage (the write half). Other than requiring clients to import additional traits, this change should not change any existing functionality.

Kevin Wells

commit sha c58e38972a0d3d9eee24543e55987c99903ad549

Unimplement RepositoryStorage for HttpRepository

Kevin Wells

commit sha d37c57bb1ff7f3600ac1cc80f59e4e94340385f1

Split SafeReader into separate types

Since the Repository traits can be implemented outside this crate, it is not safe to assume that those implementations will verify the maximum length or hash of metadata/targets before succeeding a fetch request. In preparation to move the length check and hash verification into the tuf client, this change splits the SafeReader into 2 types:

  • EnforceMinimumBitrate enforces a minimum transfer rate, currently utilized by only the http repository implementation
  • SafeReader, which retains the logic to enforce a maximum file size and hash value.

This change also defines an extension trait on AsyncRead to easily wrap an AsyncRead in these types. A future change will move hash and length enforcement out of the Repository implementations.

Kevin Wells

commit sha 2fd6f11b08a477c1aeb27fd9975f23f94c11036c

Metadata trait methods preserve original bytes

The current definition of the fetch_metadata and store_metadata repository trait methods requires the repository trait implementations to parse the metadata, and the SignedMetadata structs and DataInterchange traits cannot guarantee that parsing and re-serializing metadata will produce byte-for-byte identical metadata. This is an issue for TUF because, during the metadata update workflow, clients fetch metadata from a remote repository and store it in the local repository for later use, yet for metadata referenced by snapshot, the hash of the data must match in order to use the local version. This change:

  • modifies the fetch_metadata and store_metadata trait methods to interact with AsyncRead instead of SignedMetadata so that the original unparsed bytes can be stored as-is in the local repository as long as the metadata is determined to be valid.
  • defines the RawSignedMetadata type, which is a simple wrapper around raw bytes and contains type information identifying its inner Metadata type and serialization format.
  • introduces a new type that wraps an instance of the Repository traits and provides an API surface that can store the raw metadata and fetch both raw and parsed metadata, simplifying interaction with the now less ergonomic trait methods.
  • moves the metadata max length checks and hash checks out of the repository trait implementations and into the RepositoryClient, ensuring invalid metadata provided by external RepositoryProviders will not be trusted (for metadata that is hash-checked).
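The RawSignedMetadata idea described above (raw bytes plus type-level knowledge of what they should parse into) can be sketched with PhantomData; the names and API below are simplified stand-ins, not the crate's actual types:

```rust
use std::marker::PhantomData;

// Illustrative sketch: a wrapper that holds the unparsed bytes exactly as
// fetched, while recording at the type level which interchange format (D)
// and metadata type (M) the bytes are expected to parse into.
struct RawSigned<D, M> {
    bytes: Vec<u8>,
    _marker: PhantomData<(D, M)>,
}

impl<D, M> RawSigned<D, M> {
    fn new(bytes: Vec<u8>) -> Self {
        RawSigned { bytes, _marker: PhantomData }
    }

    // The original bytes stay available for hashing and for storing as-is in
    // a local repository, so hash checks pass when the metadata is re-read.
    fn as_bytes(&self) -> &[u8] {
        &self.bytes
    }
}

// Marker types standing in for the interchange format and metadata type.
struct Json;
struct RootMetadata;
```

PhantomData costs nothing at runtime; it only pins the type parameters so that, say, root metadata bytes cannot be accidentally parsed as snapshot metadata.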

Kevin Wells

commit sha a8b3608f02e090ef3dcfd9f45cac535c6c68a601

Allow Client builders to accept Repository values

This change modifies the local/remote parameters for Client's constructors to be `impl Into<Repository<R, D>>`, which both preserves the existing behavior where implementations of RepositoryProvider and/or RepositoryStorage are accepted and allows callers to provide a Repository directly. As long as one of the parameters is a Repository instance, there won't be a need to turbofish or annotate the Client type to include the DataInterchange parameter.
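The `impl Into<...>` pattern the commit describes can be illustrated with a toy wrapper (the types below are invented for the example, not rust-tuf's API): thanks to the blanket `impl<T> From<T> for T`, such a parameter accepts both the wrapper itself and anything with a `From` conversion into it:

```rust
// Toy stand-in for the Repository wrapper type.
struct Wrapper<R> {
    inner: R,
}

impl Wrapper<u32> {
    fn value(&self) -> u32 {
        self.inner
    }
}

// Allow a bare value to be promoted into the wrapper.
impl From<u32> for Wrapper<u32> {
    fn from(inner: u32) -> Self {
        Wrapper { inner }
    }
}

// Accepts either a Wrapper directly (via the blanket identity From)
// or a bare u32 (via the From impl above).
fn client(repo: impl Into<Wrapper<u32>>) -> u32 {
    repo.into().value()
}
```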

Kevin Wells

commit sha ecb305e46ab3f60a605a667c2e88cf3a95533a51

Add some tests for the Repository type

Kevin Wells

commit sha 51b100d12d61d2577ba5563ce7aa697d8375fc25

Merge branch 'develop' of github.com:heartsucker/rust-tuf into repo-split

Kevin Wells

commit sha 21d5466e5cf5a277f1a5e3184323bf0a29c8c3db

Re-add D type param to Repository traits

Kevin Wells

commit sha 34d812943af00c1e78150ddffdd939be0720f1fb

Remove Repository from public interface

Kevin Wells

commit sha 6a70696085b8343aacf316848f1d47e2363e43b8

with_trusted_root_keys uses metadata root version

Kevin Wells

commit sha c5d53b80cda48cea5855aa3cb3c8972133a52834

Add bug number

Kevin Wells

commit sha 3aa93c9be47476c6512709c990c77b4209c5c80c

Merge branch 'develop' into repo-split

Kevin Wells

commit sha 7a087c6f41b815beda962922ab440d8c09314143

Remove extension from EphemeralRepository storage

Now that it is generic on D again, it doesn't need to support storing more than one format of metadata.

Kevin Wells

commit sha ab5b5ce12328e517ae2f54ad408e651bb574f8d7

Cleanup trait constraints

Got a bit overzealous with Sync.

Kevin Wells

commit sha 91946cad62dc8be42b60c9d40a1bef9aaf512971

Remove Repository From conversion

Kevin Wells

commit sha f7f706d86a83185e1f61c33e5e9d30c11f5d2cf3

Merge branch 'develop' into repo-split

Kevin Wells

commit sha c56a89c54f61108835b766b74a718d5b5c9efdb7

Merge branch 'develop' into repo-split

Erick Tryzelaar

commit sha daa946152278a2655e871acb4861eb21277f369d

Merge pull request #274 from wellsie1116/repo-split

Split repository traits, Preserve metadata hashes

push time in 10 days

PR merged heartsucker/rust-tuf

Split repository traits, Preserve metadata hashes

This series of changes:

  • Splits the Repository trait into a read half (RepositoryProvider) and write half (RepositoryStorage), resolving #119
  • Moves hash and max length verification out of the repository trait implementations and into a Repository struct that wraps a repository implementation, allowing this crate to enforce those constraints even with externally implemented RepositoryProviders.
  • Preserves the original bytes of metadata long enough to store it locally, allowing hash checks to pass when re-reading locally stored metadata.

+934 -415

1 comment

10 changed files

wellsie1116

pr closed time in 10 days

Pull request review comment heartsucker/rust-tuf

Split repository traits, Preserve metadata hashes

 where
     }
 }
 
-impl<T, D> Repository<D> for &T
+impl<R, D> Repository<R, D>
 where
-    T: Repository<D>,
+    R: RepositoryProvider,
     D: DataInterchange + Sync,
 {
-    fn store_metadata<'a, M>(
+    /// Fetch and parse metadata identified by `meta_path`, `version`, and
+    /// [`D::extension()`][extension].
+    ///
+    /// If `max_length` is provided, this method will return an error if the metadata exceeds
+    /// `max_length` bytes. If `hash_data` is provided, this method will return an error if the
+    /// hashed bytes of the metadata do not match `hash_data`.
+    ///
+    /// [extension]: crate::interchange::DataInterchange::extension
+    pub async fn fetch_metadata<'a, M>(
         &'a self,
         meta_path: &'a MetadataPath,
         version: &'a MetadataVersion,
-        metadata: &'a SignedMetadata<D, M>,
-    ) -> BoxFuture<'a, Result<()>>
+        max_length: Option<usize>,
+        hash_data: Option<(&'static HashAlgorithm, HashValue)>,
+    ) -> Result<(RawSignedMetadata<D, M>, SignedMetadata<D, M>)>
     where
-        M: Metadata + Sync + 'static,
+        M: Metadata,
     {
-        (**self).store_metadata(meta_path, version, metadata)
+        let raw_signed_meta = self
+            .fetch_raw_metadata(meta_path, version, max_length, hash_data)
+            .await?;
+        let signed_meta = raw_signed_meta.parse()?;

I'm a little concerned that if we ever have two fetch_metadata calls in the same scope, it would be easy to hash-check one piece of metadata but use the other when updating the tuf database. This probably isn't a large problem in practice, since we still go through signature verification and the rest of the validation, but it'd be nice to avoid.

We can defer addressing this for a future patch though.

wellsie1116

comment created time in 10 days

push event heartsucker/rust-tuf

Kevin Wells

commit sha fcf70196db1ec502609163963f33878408677805

Verify top-level delegated targets signatures

The previous implementation did not consider delegated roles from the top-level "targets" role when looking for keys to verify the delegated targets with, which would result in delegated targets being accepted by Tuf without signature verification.

Kevin Wells

commit sha 960c3cd6ec126537ff9dc1f9aae3149168673162

Add invalid delegations tests

Kevin Wells

commit sha 14a72a7b2fdb6fbdfe7407fa12bedc23090c2347

Provide update_delegations parent role

So it can verify that the provided delegations are valid from the point of view of the parent. This avoids the need to check more than 1 signature in update_delegations.

Erick Tryzelaar

commit sha 09c24d43cd1b76af23fad44233a85de7bedea4ee

Merge pull request #275 from wellsie1116/delegations-fix

Verify top-level delegated targets signatures

push time in 16 days

PR merged heartsucker/rust-tuf

Verify top-level delegated targets signatures

The previous implementation did not consider delegated roles from the top-level "targets" role when looking for keys to verify the delegated targets with, which would result in delegated targets being accepted by Tuf without signature verification.

+414 -42

1 comment

3 changed files

wellsie1116

pr closed time in 16 days

Pull request review comment heartsucker/rust-tuf

Verify top-level delegated targets signatures

 where
                             current_depth + 1,
                             target,
                             snapshot,
-                            Some(meta.as_ref()),
+                            Some((meta.as_ref(), delegation.role().clone())),

Yeah I think this is the right role.

wellsie1116

comment created time in 16 days

push event heartsucker/rust-tuf

Kevin Wells

commit sha 483bc904b7c6448b45c0de2ba8f2fcf44690d3a9

Add tests for EnforceMinimumBitrate

Erick Tryzelaar

commit sha 57dc57d7a1e15c1983c6043da58207e1f8f293ed

Merge pull request #277 from wellsie1116/split-safe-reader

Add tests for EnforceMinimumBitrate

push time in 18 days

Pull request review comment heartsucker/rust-tuf

Verify top-level delegated targets signatures

 impl<D: DataInterchange> Tuf<D> {
         Ok(true)
     }
 
+    /// Walk the base targets delegations and any known nested delegations for the authorized
+    /// signing keys and metadata for the delegation given by `role`.
+    fn find_delegation(&self, role: &MetadataPath) -> Vec<(Vec<&PublicKey>, &Delegation)> {

Good old tree searching. Unfortunately, delegations could potentially contain a loop, so can you add an "I visited this node already" set to avoid infinite loops?

wellsie1116

comment created time in 18 days
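The visited-set guard the review asks for can be sketched as a stand-alone walk over a toy delegation graph (illustrative only, not the crate's find_delegation):

```rust
use std::collections::{HashMap, HashSet};

// Walk a delegation graph (role -> delegated roles) looking for `goal`,
// skipping any role we have already visited so that delegation cycles
// cannot cause an infinite loop.
fn reaches(graph: &HashMap<&str, Vec<&str>>, start: &str, goal: &str) -> bool {
    let mut visited = HashSet::new();
    let mut stack = vec![start];
    while let Some(role) = stack.pop() {
        // `insert` returns false if the role was already in the set.
        if !visited.insert(role) {
            continue;
        }
        if role == goal {
            return true;
        }
        if let Some(children) = graph.get(role) {
            stack.extend(children.iter().copied());
        }
    }
    false
}
```

Without the `visited` check, a delegation that points back at an ancestor would keep the stack non-empty forever.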

Pull request review comment heartsucker/rust-tuf

Verify top-level delegated targets signatures

 impl<D: DataInterchange> Tuf<D> {
                 return Ok(false);
             }
 
-            for delegated_targets in self.delegations.values() {
-                let parent = match delegated_targets.as_ref().delegations() {
-                    Some(d) => d,
-                    None => &targets_delegations,
-                };
+            // FIXME Since the delegating targets defines the valid keys a delegated targets can be
+            // signed by, it is possible for delegated targets to be valid when accessed via one
+            // target path and invalid when accessed via another. Instead of verifying all known
+            // references to this delegation, this verification step should be choosing just the
+            // one for the target path the outer tuf::Client is trying to resolve, but that
+            // currently depends on state in tuf::Client to do so.

Can you file a ticket for this fixme?

Ideally Tuf wouldn't have to trust outside state from tuf::Client. For the short term though, what do you think about update_delegation also getting passed in the parent Role name? Then we could directly look that up in our map and use that to verify this delegation. That should allow us to support the whole "this delegation is valid when found through this delegation chain, but not valid when found through that delegation chain".

wellsie1116

comment created time in 18 days

Pull request review comment heartsucker/rust-tuf

Verify top-level delegated targets signatures

 impl<D: DataInterchange> Tuf<D> {
                 return Ok(false);
             }
 
-            for delegated_targets in self.delegations.values() {
-                let parent = match delegated_targets.as_ref().delegations() {
-                    Some(d) => d,
-                    None => &targets_delegations,
-                };
+            // FIXME Since the delegating targets defines the valid keys a delegated targets can be

Can you file a ticket for this fixme?

wellsie1116

comment created time in 18 days

Pull request review comment heartsucker/rust-tuf

Split SafeReader into separate types

-use chrono::offset::Utc;

nope, it's used elsewhere.

wellsie1116

comment created time in 18 days

push event heartsucker/rust-tuf

Kevin Wells

commit sha 414342a7059b8e08438ef255c4a501fbe777358c

Split SafeReader into separate types

Since the Repository traits can be implemented outside this crate, it is not safe to assume that those implementations will verify the maximum length or hash of metadata/targets before succeeding a fetch request. In preparation to move the length check and hash verification into the tuf client, this change splits the SafeReader into 2 types:

  • EnforceMinimumBitrate enforces a minimum transfer rate, currently utilized by only the http repository implementation
  • SafeReader, which retains the logic to enforce a maximum file size and hash value.

This change also defines an extension trait on AsyncRead to easily wrap an AsyncRead in these types. A future change will move hash and length enforcement out of the Repository implementations.

Erick Tryzelaar

commit sha 7dd5753e3b6d6bceaaef3e29aac3b7927f416fac

Merge pull request #276 from wellsie1116/split-safe-reader

Split SafeReader into separate types

push time in 18 days

PR merged heartsucker/rust-tuf

Split SafeReader into separate types

Since the Repository traits can be implemented outside this crate, it is not safe to assume that those implementations will verify the maximum length or hash of metadata/targets before succeeding a fetch request.

In preparation to move the length check and hash verification into the tuf client, this change splits the SafeReader into 2 types:

  • EnforceMinimumBitrate enforces a minimum transfer rate, currently utilized by only the http repository implementation
  • SafeReader, which retains the logic to enforce a maximum file size and hash value. This change also defines an extension trait on AsyncRead to easily wrap an AsyncRead in these types.

A future change will move hash and length enforcement out of the Repository implementations.
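The maximum-length half that SafeReader retains can be sketched over std's blocking Read (the real type wraps AsyncRead and also verifies hashes; this simplified stand-in only enforces a byte limit):

```rust
use std::io::{self, Read};

// Simplified, synchronous stand-in for the length-enforcement half of
// SafeReader: fail as soon as more than `remaining` bytes come through.
struct MaxLengthReader<R> {
    inner: R,
    remaining: u64,
}

impl<R> MaxLengthReader<R> {
    fn new(inner: R, max_length: u64) -> Self {
        MaxLengthReader { inner, remaining: max_length }
    }
}

impl<R: Read> Read for MaxLengthReader<R> {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        let n = self.inner.read(buf)?;
        if n as u64 > self.remaining {
            return Err(io::Error::new(
                io::ErrorKind::InvalidData,
                "maximum length exceeded",
            ));
        }
        self.remaining -= n as u64;
        Ok(n)
    }
}
```

Because the check runs on every `read` call, an oversized stream fails partway through instead of being buffered in full first.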

+140 -101

0 comments

5 changed files

wellsie1116

pr closed time in 18 days

Pull request review comment heartsucker/rust-tuf

Split SafeReader into separate types

-use chrono::offset::Utc;
-use chrono::DateTime;
 use futures_io::AsyncRead;
 use futures_util::ready;
 use ring::digest;
 use std::io::{self, ErrorKind};
 use std::marker::Unpin;
 use std::pin::Pin;
 use std::task::{Context, Poll};
+use std::time::{Duration, Instant};
 
 use crate::crypto::{HashAlgorithm, HashValue};
 use crate::Result;
 
+pub(crate) trait SafeAsyncRead: Sized {

It's probably fine that this is pub(crate) for now, but I can imagine custom Repository implementations might want to use this functionality.

wellsie1116

comment created time in 18 days

Pull request review comment heartsucker/rust-tuf

Split SafeReader into separate types

-use chrono::offset::Utc;

Can we remove the chrono dependency?

wellsie1116

comment created time in 18 days

Pull request review comment heartsucker/rust-tuf

Split repository traits, Preserve metadata hashes

 where
     }
 }
 
-impl<C, D> Repository<D> for HttpRepository<C, D>
+impl<C> RepositoryProvider for HttpRepository<C>
 where
     C: Connect + Sync + 'static,
-    D: DataInterchange + Send + Sync,
 {
-    /// This always returns `Err` as storing over HTTP is not yet supported.
-    fn store_metadata<'a, M>(
-        &'a self,
-        _: &'a MetadataPath,
-        _: &'a MetadataVersion,
-        _: &'a SignedMetadata<D, M>,
-    ) -> BoxFuture<'a, Result<()>>
-    where
-        M: Metadata + 'static,
-    {
-        async {
-            Err(Error::Opaque(
-                "Http repo store metadata not implemented".to_string(),
-            ))
-        }
-        .boxed()
-    }
-
-    fn fetch_metadata<'a, M>(
+    fn fetch_metadata<'a, D>(
         &'a self,
         meta_path: &'a MetadataPath,
         version: &'a MetadataVersion,
-        max_length: Option<usize>,
-        hash_data: Option<(&'static HashAlgorithm, HashValue)>,
-    ) -> BoxFuture<'a, Result<SignedMetadata<D, M>>>
+        _max_length: Option<usize>,
+        _hash_data: Option<(&'static HashAlgorithm, HashValue)>,
+    ) -> BoxFuture<'a, Result<Box<dyn AsyncRead + Send + Unpin>>>
     where
-        M: Metadata + 'static,
+        D: DataInterchange + Sync,
     {
         async move {
-            Self::check::<M>(meta_path)?;
-
             let components = meta_path.components::<D>(&version);
             let resp = self.get(&self.metadata_prefix, &components).await?;
 
-            let stream = resp
+            // TODO check content length if known and fail early if the payload is too large.
+
+            let reader = resp
                 .into_body()
                 .compat()
-                .map_err(|err| io::Error::new(io::ErrorKind::Other, err));
-
-            let mut reader = SafeReader::new(
-                stream.into_async_read(),
-                max_length.unwrap_or(::std::usize::MAX) as u64,
-                self.min_bytes_per_second,
-                hash_data,
-            )?;
+                .map_err(|err| io::Error::new(io::ErrorKind::Other, err))
+                .into_async_read()
+                .enforce_minimum_bitrate(self.min_bytes_per_second);
 
-            let mut buf = Vec::new();
-            reader.read_to_end(&mut buf).await?;
-
-            Ok(D::from_slice(&buf)?)
+            let reader: Box<dyn AsyncRead + Send + Unpin> = Box::new(reader);
+            Ok(reader)
         }
         .boxed()
     }
 
-    /// This always returns `Err` as storing over HTTP is not yet supported.
-    fn store_target<'a, R>(&'a self, _: R, _: &'a TargetPath) -> BoxFuture<'a, Result<()>>
-    where
-        R: AsyncRead + 'a,
-    {
-        async { Err(Error::Opaque("Http repo store not implemented".to_string())) }.boxed()
-    }
-
     fn fetch_target<'a>(
         &'a self,
         target_path: &'a TargetPath,
-        target_description: &'a TargetDescription,
+        _target_description: &'a TargetDescription,
     ) -> BoxFuture<'a, Result<Box<dyn AsyncRead + Send + Unpin>>> {
         async move {
-            let (alg, value) = crypto::hash_preference(target_description.hashes())?;
             let components = target_path.components();
             let resp = self.get(&self.targets_prefix, &components).await?;
 
-            let stream = resp
+            // TODO check content length if known and fail early if the payload is too large.

Can you file a ticket for this, and link to it here?

wellsie1116

comment created time in 18 days

Pull request review comment heartsucker/rust-tuf

Split repository traits, Preserve metadata hashes

 where
     }
 }
 
-impl<T, D> Repository<D> for &T
+impl<R, D> Repository<R, D>
 where
-    T: Repository<D>,
+    R: RepositoryProvider,
     D: DataInterchange + Sync,
 {
-    fn store_metadata<'a, M>(
+    /// Fetch and parse metadata identified by `meta_path`, `version`, and
+    /// [`D::extension()`][extension].
+    ///
+    /// If `max_length` is provided, this method will return an error if the metadata exceeds
+    /// `max_length` bytes. If `hash_data` is provided, this method will return an error if the
+    /// hashed bytes of the metadata do not match `hash_data`.
+    ///
+    /// [extension]: crate::interchange::DataInterchange::extension
+    pub async fn fetch_metadata<'a, M>(
         &'a self,
         meta_path: &'a MetadataPath,
         version: &'a MetadataVersion,
-        metadata: &'a SignedMetadata<D, M>,
-    ) -> BoxFuture<'a, Result<()>>
+        max_length: Option<usize>,
+        hash_data: Option<(&'static HashAlgorithm, HashValue)>,
+    ) -> Result<(RawSignedMetadata<D, M>, SignedMetadata<D, M>)>
     where
-        M: Metadata + Sync + 'static,
+        M: Metadata,
     {
-        (**self).store_metadata(meta_path, version, metadata)
+        let raw_signed_meta = self
+            .fetch_raw_metadata(meta_path, version, max_length, hash_data)
+            .await?;
+        let signed_meta = raw_signed_meta.parse()?;

Might it be better to move this to the caller, to make it easier to see where the parsing happens?

wellsie1116

comment created time in 19 days

Pull request review comment heartsucker/rust-tuf

Split repository traits, Preserve metadata hashes

 pub use self::http::{HttpRepository, HttpRepositoryBuilder};
 mod ephemeral;
 pub use self::ephemeral::EphemeralRepository;
 
-/// Top-level trait that represents a TUF repository and contains all the ways it can be interacted
-/// with.
-pub trait Repository<D>
-where
-    D: DataInterchange + Sync,
-{
-    /// Store signed metadata.
+/// A readable TUF repository.
+pub trait RepositoryProvider {

Why did you move the DataInterchange parameter onto the methods? This has the side effect of us not being able to create a Box<dyn RepositoryProvider>.

I suppose this would allow us to interact with a repository that supports multiple interchanges, but I'm not sure that's worth trying to support just yet, since we (and the tuf spec) haven't actually figured out how alternative interchanges would work.

wellsie1116

comment created time in 18 days

Pull request review comment heartsucker/rust-tuf

Split repository traits, Preserve metadata hashes

 pub use self::http::{HttpRepository, HttpRepositoryBuilder};
 mod ephemeral;
 pub use self::ephemeral::EphemeralRepository;
 
-/// Top-level trait that represents a TUF repository and contains all the ways it can be interacted
-/// with.
-pub trait Repository<D>
-where
-    D: DataInterchange + Sync,
-{
-    /// Store signed metadata.
+/// A readable TUF repository.
+pub trait RepositoryProvider {
+    /// Fetch signed metadata identified by `meta_path`, `version`, and
+    /// [`D::extension()`][extension].
+    ///
+    /// Implementations may ignore `max_length` and `hash_data` as [`Repository`] will verify these
+    /// constraints itself. However, it may be more efficient for an implementation to detect
+    /// invalid metadata and fail the fetch operation before streaming all of the bytes of the
+    /// metadata.
     ///
-    /// Note: This **MUST** canonicalize the bytes before storing them as a read will expect the
-    /// hashes of the metadata to match.
-    fn store_metadata<'a, M>(
+    /// [extension]: crate::interchange::DataInterchange::extension
+    fn fetch_metadata<'a, D>(
         &'a self,
         meta_path: &'a MetadataPath,
         version: &'a MetadataVersion,
-        metadata: &'a SignedMetadata<D, M>,
-    ) -> BoxFuture<'a, Result<()>>
+        max_length: Option<usize>,
+        hash_data: Option<(&'static HashAlgorithm, HashValue)>,
+    ) -> BoxFuture<'a, Result<Box<dyn AsyncRead + Send + Unpin>>>
     where
-        M: Metadata + Sync + 'static;
+        D: DataInterchange + Sync;
+
+    /// Fetch the given target.
+    ///
+    /// Implementations may ignore the `length` and `hashes` fields in `target_description` as
+    /// [`Repository`] will verify these constraints itself. However, it may be more efficient for
+    /// an implementation to detect invalid targets and fail the fetch operation before streaming
+    /// all of the bytes.
+    fn fetch_target<'a>(
+        &'a self,
+        target_path: &'a TargetPath,
+        target_description: &'a TargetDescription,
+    ) -> BoxFuture<'a, Result<Box<dyn AsyncRead + Send + Unpin>>>;
+}
 
-    /// Fetch signed metadata.
-    fn fetch_metadata<'a, M>(
+/// A writable TUF repository. Most implementors of this trait should also implement
+/// `RepositoryProvider`.
+pub trait RepositoryStorage {
+    /// Store the provided `metadata` in a location identified by `meta_path`, `version`, and
+    /// [`D::extension()`][extension], overwriting any existing metadata at that location.
+    ///
+    /// [extension]: crate::interchange::DataInterchange::extension
+    fn store_metadata<'a, R, D>(
         &'a self,
         meta_path: &'a MetadataPath,
         version: &'a MetadataVersion,
-        max_length: Option<usize>,
-        hash_data: Option<(&'static HashAlgorithm, HashValue)>,
-    ) -> BoxFuture<'a, Result<SignedMetadata<D, M>>>
+        metadata: R,
+    ) -> BoxFuture<'a, Result<()>>
     where
-        M: Metadata + 'static;
+        R: AsyncRead + Send + Unpin + 'a,
+        D: DataInterchange + Sync;
 
-    /// Store the given target.
+    /// Store the provided `target` in a location identified by `target_path`, overwriting any
+    /// existing target at that location.
     fn store_target<'a, R>(
         &'a self,
-        read: R,
+        target: R,
         target_path: &'a TargetPath,
     ) -> BoxFuture<'a, Result<()>>
     where
         R: AsyncRead + Send + Unpin + 'a;
+}
+
+impl<T> RepositoryProvider for &T
+where
+    T: RepositoryProvider,
+{
+    fn fetch_metadata<'a, D>(
+        &'a self,
+        meta_path: &'a MetadataPath,
+        version: &'a MetadataVersion,
+        max_length: Option<usize>,
+        hash_data: Option<(&'static HashAlgorithm, HashValue)>,
+    ) -> BoxFuture<'a, Result<Box<dyn AsyncRead + Send + Unpin>>>
+    where
+        D: DataInterchange + Sync,
+    {
+        (**self).fetch_metadata::<D>(meta_path, version, max_length, hash_data)
+    }
 
-    /// Fetch the given target.
     fn fetch_target<'a>(
         &'a self,
         target_path: &'a TargetPath,
         target_description: &'a TargetDescription,
-    ) -> BoxFuture<'a, Result<Box<dyn AsyncRead + Send + Unpin>>>;
+    ) -> BoxFuture<'a, Result<Box<dyn AsyncRead + Send + Unpin>>> {
+        (**self).fetch_target(target_path, target_description)
+    }
+}
+
+impl<T> RepositoryStorage for &T
+where
+    T: RepositoryStorage,
+{
+    fn store_metadata<'a, R, D>(
+        &'a self,
+        meta_path: &'a MetadataPath,
+        version: &'a MetadataVersion,
+        metadata: R,
+    ) -> BoxFuture<'a, Result<()>>
+    where
+        R: AsyncRead + Send + Unpin + 'a,
+        D: DataInterchange + Sync,
+    {
+        (**self).store_metadata::<_, D>(meta_path, version, metadata)
+    }
 
-    /// Perform a sanity check that `M`, `Role`, and `MetadataPath` all desrcribe the same entity.
+    fn store_target<'a, R>(
+        &'a self,
+        target: R,
+        target_path: &'a TargetPath,
+    ) -> BoxFuture<'a, Result<()>>
+    where
+        R: AsyncRead + Send + Unpin + 'a,
+    {
+        (**self).store_target(target, target_path)
+    }
+}
+
+/// A wrapper around an implementation of [`RepositoryProvider`] and/or [`RepositoryStorage`] tied
+/// to a specific [`DataInterchange`](crate::interchange::DataInterchange) that will enforce
+/// provided length limits and hash checks.
+#[derive(Debug, Clone)]
+pub struct Repository<R, D> {

I think we could make this pub(crate). It's mainly about making sure the client doesn't accidentally access the repositories directly. I'm not sure if external users would need this.

wellsie1116

comment created time in 18 days

Pull request review comment heartsucker/rust-tuf

Split repository traits, Preserve metadata hashes

 where
 }
 
 async fn init_server<'a, T>(
-    remote: &'a mut EphemeralRepository<Json>,
+    remote: &'a mut Repository<EphemeralRepository, Json>,

What do you think about just passing in EphemeralRepository and not wrapping it in Repository? The only thing that buys us is that we call Repository::check to see if the metadata path matches the role, but I'm not sure that check actually buys us much. If a server messed up and wrote the root metadata as timestamp.json, the client would fetch that file, but it would then fail to deserialize into TimestampMetadata. So really, this check is more for making sure we don't generate a malformed repository, and given how low-level the current implementation is, the whole thing probably needs to be rearchitected (see #232).

wellsie1116

comment created time in 18 days

Pull request review comment heartsucker/rust-tuf

Split repository traits, Preserve metadata hashes

-use chrono::offset::Utc;
-use chrono::DateTime;
 use futures_io::AsyncRead;
 use futures_util::ready;
 use ring::digest;
 use std::io::{self, ErrorKind};
 use std::marker::Unpin;
 use std::pin::Pin;
 use std::task::{Context, Poll};
+use std::time::{Duration, Instant};
 
 use crate::crypto::{HashAlgorithm, HashValue};
 use crate::Result;
 
+pub(crate) trait SafeAsyncRead: Sized {
+    /// Creates an `AsyncRead` adapter which will fail transfers slower than
+    /// `min_bytes_per_second`.
+    fn enforce_minimum_bitrate(self, min_bytes_per_second: u32) -> EnforceMinimumBitrate<Self>;
+
+    /// Creates an `AsyncRead` adapter that ensures the consumer can't read more than `max_length`
+    /// bytes. Also, when the underlying `AsyncRead` is fully consumed, the hash of the data is
+    /// optionally calculated and checked against `hash_data`. Consumers should purge and untrust
+    /// all read bytes if the returned `AsyncRead` ever returns an `Err`.
+    ///
+    /// It is **critical** that none of the bytes from this struct are used until it has been fully
+    /// consumed as the data is untrusted.
+    fn check_length_and_hash(
+        self,
+        max_length: u64,
+        hash_data: Option<(&HashAlgorithm, HashValue)>,
+    ) -> Result<SafeReader<Self>>;
+}
+
+impl<R> SafeAsyncRead for R
+where
+    R: AsyncRead + Unpin,
+{
+    fn enforce_minimum_bitrate(self, min_bytes_per_second: u32) -> EnforceMinimumBitrate<Self> {
+        EnforceMinimumBitrate::new(self, min_bytes_per_second)
+    }
+
+    fn check_length_and_hash(
+        self,
+        max_length: u64,
+        hash_data: Option<(&HashAlgorithm, HashValue)>,
+    ) -> Result<SafeReader<Self>> {
+        SafeReader::new(self, max_length, hash_data)
+    }
+}
+
+/// Wraps an `AsyncRead` to detect and fail transfers slower than a minimum bitrate.
+pub(crate) struct EnforceMinimumBitrate<R> {

This needs a test.
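A test could exercise the bitrate math directly. The sketch below is not rust-tuf code; it restates the check `EnforceMinimumBitrate` performs in a synchronous, easily testable form. The type name, field names, and the one-second grace period are assumptions for illustration.

```rust
use std::time::{Duration, Instant};

// Illustrative restatement of a minimum-bitrate check: after a grace
// period, the average transfer rate must stay at or above the minimum.
// `GRACE` and all names here are assumptions, not rust-tuf's actual values.
const GRACE: Duration = Duration::from_secs(1);

struct BitrateCheck {
    start: Instant,
    bytes_read: u64,
    min_bytes_per_second: u32,
}

impl BitrateCheck {
    fn new(min_bytes_per_second: u32) -> Self {
        BitrateCheck {
            start: Instant::now(),
            bytes_read: 0,
            min_bytes_per_second,
        }
    }

    // Record `n` newly read bytes observed at time `now`; Err means "too slow".
    fn record(&mut self, n: u64, now: Instant) -> Result<(), &'static str> {
        self.bytes_read += n;
        let elapsed = now.duration_since(self.start);
        if elapsed > GRACE
            && (self.bytes_read as f64)
                < elapsed.as_secs_f64() * f64::from(self.min_bytes_per_second)
        {
            return Err("transfer slower than minimum bitrate");
        }
        Ok(())
    }
}

fn main() {
    let mut fast = BitrateCheck::new(100);
    let t0 = fast.start;
    // 1000 bytes in 2 seconds = 500 B/s, well above the 100 B/s floor.
    assert!(fast.record(1000, t0 + Duration::from_secs(2)).is_ok());

    let mut slow = BitrateCheck::new(100);
    let t0 = slow.start;
    // 10 bytes in 2 seconds = 5 B/s, which should fail.
    assert!(slow.record(10, t0 + Duration::from_secs(2)).is_err());
}
```

Passing synthetic `Instant` values into `record` is what makes this testable without real clocks or sleeps; a test for the actual `AsyncRead` adapter would need the same kind of injected time source.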

wellsie1116

comment created time in 18 days

Pull request review comment heartsucker/rust-tuf

Split repository traits, Preserve metadata hashes

 pub use self::http::{HttpRepository, HttpRepositoryBuilder};
 mod ephemeral;
 pub use self::ephemeral::EphemeralRepository;
 
-/// Top-level trait that represents a TUF repository and contains all the ways it can be interacted
-/// with.
-pub trait Repository<D>
-where
-    D: DataInterchange + Sync,
-{
-    /// Store signed metadata.
+/// A readable TUF repository.
+pub trait RepositoryProvider {
+    /// Fetch signed metadata identified by `meta_path`, `version`, and
+    /// [`D::extension()`][extension].
+    ///
+    /// Implementations may ignore `max_length` and `hash_data` as [`Repository`] will verify these
+    /// constraints itself. However, it may be more efficient for an implementation to detect
+    /// invalid metadata and fail the fetch operation before streaming all of the bytes of the
+    /// metadata.
     ///
-    /// Note: This **MUST** canonicalize the bytes before storing them as a read will expect the
-    /// hashes of the metadata to match.
-    fn store_metadata<'a, M>(
+    /// [extension]: crate::interchange::DataInterchange::extension
+    fn fetch_metadata<'a, D>(
         &'a self,
         meta_path: &'a MetadataPath,
         version: &'a MetadataVersion,
-        metadata: &'a SignedMetadata<D, M>,
-    ) -> BoxFuture<'a, Result<()>>
+        max_length: Option<usize>,
+        hash_data: Option<(&'static HashAlgorithm, HashValue)>,
+    ) -> BoxFuture<'a, Result<Box<dyn AsyncRead + Send + Unpin>>>
     where
-        M: Metadata + Sync + 'static;
+        D: DataInterchange + Sync;
+
+    /// Fetch the given target.
+    ///
+    /// Implementations may ignore the `length` and `hashes` fields in `target_description` as
+    /// [`Repository`] will verify these constraints itself. However, it may be more efficient for
+    /// an implementation to detect invalid targets and fail the fetch operation before streaming
+    /// all of the bytes.
+    fn fetch_target<'a>(
+        &'a self,
+        target_path: &'a TargetPath,
+        target_description: &'a TargetDescription,
+    ) -> BoxFuture<'a, Result<Box<dyn AsyncRead + Send + Unpin>>>;
+}
 
-    /// Fetch signed metadata.
-    fn fetch_metadata<'a, M>(
+/// A writable TUF repository. Most implementors of this trait should also implement
+/// `RepositoryProvider`.
+pub trait RepositoryStorage {

As above, I think this should take D as a generic parameter.

wellsie1116

comment created time in 18 days

Pull request review comment heartsucker/rust-tuf

Split repository traits, Preserve metadata hashes

 where
         )
         .await?;
 
+        // FIXME why does with_trusted_root_keyids store using the metadata version but
+        // with_trusted_root_keys use the version from the caller?
+        let root_version = MetadataVersion::Number(trusted_root.version());

I think I just forgot to copy this case into with_trusted_root_keys. We don't yet have any checks that the metadata returned the version we requested (see #253), so the safest thing is to use the metadata version, since the whole system could get rather confused if we stored it with the wrong version.

wellsie1116

comment created time in 19 days

Pull request review comment heartsucker/rust-tuf

Split repository traits, Preserve metadata hashes

 where
     /// # })
     /// # }
     /// ```
-    pub async fn with_trusted_local(config: Config<T>, local: L, remote: R) -> Result<Self> {
+    pub async fn with_trusted_local(
+        config: Config<T>,
+        local: impl Into<Repository<L, D>>,
+        remote: impl Into<Repository<R, D>>,
+    ) -> Result<Self> {
         let root_path = MetadataPath::from_role(&Role::Root);
+        let (local, remote) = (local.into(), remote.into());

Small nit, I recommend doing the .into() first, since logically the root_path and root_version are more closely related than this (local, remote).

wellsie1116

comment created time in 19 days

Pull request review comment heartsucker/rust-tuf

Split repository traits, Preserve metadata hashes

 where
     /// # })
     /// # }
     /// ```
-    pub async fn with_trusted_local(config: Config<T>, local: L, remote: R) -> Result<Self> {
+    pub async fn with_trusted_local(
+        config: Config<T>,
+        local: impl Into<Repository<L, D>>,

Since this is part of the public interface, can you switch to using where clauses? Using impl Into prevents users from turbofish-ing this type, and since this is a fairly generic type, it's possible we might need a turbofish for some complex situations.
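To illustrate the difference (this is a generic Rust example, not rust-tuf's API): a named type parameter can be specified with the turbofish, while `impl Trait` in argument position can only ever be inferred.

```rust
// With a named type parameter, callers can pin the type down explicitly.
fn with_generic<S: Into<String>>(s: S) -> String {
    s.into()
}

// With `impl Trait` in argument position, the parameter's type cannot be
// named by the caller; `with_impl::<...>(...)` is a compile error (E0632).
fn with_impl(s: impl Into<String>) -> String {
    s.into()
}

fn main() {
    // Turbofish works here:
    let a = with_generic::<&str>("hello");
    // ...but there is no way to write the equivalent for `with_impl`.
    let b = with_impl("hello");
    assert_eq!(a, b);
}
```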

wellsie1116

comment created time in 19 days

issue closed heartsucker/rust-tuf

Switch `PublicKey::from_ed25519` from defaulting with keyid_hash_algorithms

Since python-tuf is thinking about getting rid of keyid_hash_algorithms, we should at least change our default to not assuming the metadata contains this field.

closed time in 20 days

erickt

issue comment heartsucker/rust-tuf

Switch `PublicKey::from_ed25519` from defaulting with keyid_hash_algorithms

Fixed in https://github.com/heartsucker/rust-tuf/pull/266.

erickt

comment created time in 20 days

push event heartsucker/rust-tuf

Kevin Wells

commit sha 0d380e3c29554740649292b897fd1d5c0f9fab95

Move Repository implementations This change moves the various repository implementations into their own private modules and re-exports their types. No functional changes expected.

view details

Kevin Wells

commit sha a7c83f0e0aa30f0c1d057233f6250bfe5af9b5d3

Dedup logic to create a digest::Context

view details

Erick Tryzelaar

commit sha a3c00831f6e1aee68d5aefe535204ba14c2b5abb

Merge pull request #273 from wellsie1116/move-repo-types Move Repository implementations to their own modules

view details

push time in 20 days

PR merged heartsucker/rust-tuf

Reviewers
Move Repository implementations to their own modules

repository.rs was getting a bit large, and putting the implementations in their own modules should help simplify #149 if/when that needs to happen.

Also dedup some logic when creating digest::Context.

No functional changes expected.

+1040 -983

0 comment

6 changed files

wellsie1116

pr closed time in 20 days

pull request comment heartsucker/rust-tuf

update to hyper 0.13 and http 0.2

Looks good to me, but we need to hold off on this until we're ready to update Fuchsia.

tamird

comment created time in 24 days

push event heartsucker/rust-tuf

Kevin Wells

commit sha d3fbfec7740b26166316d1a11d5c2a1456b20728

Fixup io::Error not_found conversion

When a Client is configured with a local FileSystemRepository, it is expected that it does not contain metadata until it is fetched from a remote repository. The client initializer will first check the local repository for root.json metadata before checking with the remote repository, but it will only try the remote repository if the local fetch_metadata fails with Error::NotFound, and the current implementation of FileSystemRepository produces an Error::Opaque when the requested metadata does not exist. This change maps io errors with kind() == io::ErrorKind::NotFound to Error::NotFound, and adds a test for a blank FileSystemRepository to show it produces the expected error.

view details

Erick Tryzelaar

commit sha b82e1adfd7be960a1dfb942e964cea8f59445ccf

Merge pull request #269 from wellsie1116/develop Fixup io::Error not_found conversion

view details

push time in 25 days

PR merged heartsucker/rust-tuf

Fixup io::Error not_found conversion

When a Client is configured with a local FileSystemRepository, it is expected that it does not contain metadata until it is fetched from a remote repository. The client initializer will first check the local repository for root.json metadata before checking with the remote repository, but it will only try the remote repository if the local fetch_metadata fails with Error::NotFound, and the current implementation of FileSystemRepository produces an Error::Opaque when the requested metadata does not exist.

This change maps io errors with kind() == io::ErrorKind::NotFound to Error::NotFound, and adds a test for a blank FileSystemRepository to show it produces the expected error.
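The mapping itself is a small change on the error conversion path. A hedged sketch, using a stand-in `Error` enum rather than rust-tuf's actual one:

```rust
use std::io;

// Stand-in for rust-tuf's error type; only the two variants relevant to
// the fix are shown here.
#[derive(Debug, PartialEq)]
enum Error {
    NotFound,
    Opaque(String),
}

impl From<io::Error> for Error {
    fn from(err: io::Error) -> Self {
        // The fix: a missing file is a normal condition for a fresh local
        // repository, so surface it as NotFound instead of an opaque error.
        if err.kind() == io::ErrorKind::NotFound {
            Error::NotFound
        } else {
            Error::Opaque(err.to_string())
        }
    }
}

fn main() {
    let missing = io::Error::new(io::ErrorKind::NotFound, "no such file");
    assert_eq!(Error::from(missing), Error::NotFound);

    let denied = io::Error::new(io::ErrorKind::PermissionDenied, "nope");
    assert!(matches!(Error::from(denied), Error::Opaque(_)));
}
```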

+29 -1

0 comment

2 changed files

wellsie1116

pr closed time in 25 days

delete branch erickt/rust-tuf

delete branch : deprecated

delete time in a month

pull request comment heartsucker/rust-tuf

remove deprecated functions

We still have the functionality of with_root_pinned, the function just got renamed to Client::with_trusted_root_keyids to distinguish it from creating clients with PublicKeys or with an initial trusted SignedMetadata<RootMetadata>.

erickt

comment created time in a month

delete branch erickt/rust-tuf

delete branch : fmt

delete time in a month

PR opened heartsucker/rust-tuf

remove deprecated functions
+0 -35

0 comment

1 changed file

pr created time in a month

create branch erickt/rust-tuf

branch : fmt

created branch time in a month

create branch erickt/rust-tuf

branch : deprecated

created branch time in a month

push event heartsucker/rust-tuf

Erick Tryzelaar

commit sha 60f2b3a2007218a146eb9fbc8629777a3100ee06

Add tests for {Public,Private}Key::from_ed25519* Change-Id: I46d64bd2cc81f708bf538a00b8b84b02801e3828

view details

Erick Tryzelaar

commit sha 99f2c1119d2b7a5c5e725da5802a33456e417300

{Public,Private}Key::from_ed25519 should not set keyid_hash_algorithms python-tuf is [considering] getting rid of keyid_hash_algorithms, so we shouldn't default to generating keys with them specified. [considering]: https://github.com/theupdateframework/tuf/issues/848 Change-Id: I2c3af5d5eb7b0cc30793b54e45155320164cf706

view details

Erick Tryzelaar

commit sha c5b40e6a4ce96bee597f9f3a9a9ab5318d227055

Merge pull request #266 from erickt/default {Public,Private}::from_ed25519 should not default to adding keyid_hash_algorithms

view details

push time in a month

PR merged heartsucker/rust-tuf

{Public,Private}::from_ed25519 should not default to adding keyid_hash_algorithms

python-tuf is considering getting rid of keyid_hash_algorithms, so we shouldn't default to generating keys with them specified.

+69 -11

0 comment

1 changed file

erickt

pr closed time in a month

PR opened heartsucker/rust-tuf

{Public,Private}::from_ed25519 should not default to adding keyid_hash_algorithms

python-tuf is considering getting rid of keyid_hash_algorithms, so we shouldn't default to generating keys with them specified.

+69 -11

0 comment

1 changed file

pr created time in a month

create branch erickt/rust-tuf

branch : default

created branch time in a month

issue opened heartsucker/rust-tuf

Switch `PublicKey::from_ed25519` from defaulting with keyid_hash_algorithms

Since python-tuf is thinking about getting rid of keyid_hash_algorithms, we should at least change our default to not assuming the metadata contains this field.

created time in a month

pull request comment BurntSushi/bstr

Add unicode license for src/unicode/data

Thanks @BurntSushi !

erickt

comment created time in a month

push event erickt/bstr

Erick Tryzelaar

commit sha 017cb1a4b55e19989b1957a62cb79270d96a1b6e

Add unicode license for src/unicode/data As a followup to #30, and analogous to this [patch] in rust-lang/regex, this adds the unicode license for the test unicode data in `src/unicode/data`. [patch]: https://github.com/rust-lang/regex/pull/535,

view details

push time in a month

PR opened BurntSushi/bstr

Add unicode license for src/unicode/data

As a followup to #30, and analogous to this patch in rust-lang/regex, this adds the unicode license for the test unicode data in src/unicode/data.

If accepted, could a release be made once this has landed to make sure the crate artifact includes this license?

Thanks again @BurntSushi!

+45 -0

0 comment

1 changed file

pr created time in a month

create branch erickt/bstr

branch : unicode

created branch time in a month

PR closed BurntSushi/bstr

Reviewers
Exclude src/unicode/data from package

According to this comment, the unicode data in src/unicode/data is only used by tests, therefore are unnecessary in the published package. This removes 30KB from the download size.

+1 -1

3 comments

1 changed file

erickt

pr closed time in a month

pull request comment BurntSushi/bstr

Exclude src/unicode/data from package

Ah no worries. I'm happy to close this then.

erickt

comment created time in a month

pull request comment BurntSushi/bstr

Exclude src/unicode/data from package

Not sure why the test failed, but since https://github.com/BurntSushi/bstr/pull/32 passed, I'm guessing the tests might pass if they are retried.

erickt

comment created time in a month

PR opened BurntSushi/bstr

Exclude .github from the packages
+1 -1

0 comment

1 changed file

pr created time in a month

PR opened BurntSushi/bstr

Exclude src/unicode/data from package

According to this comment, the unicode data in src/unicode/data is only used by tests, therefore are unnecessary in the published package. This removes 30KB from the download size.
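For context, shrinking the published package this way is done with Cargo's `exclude` manifest key. A sketch of the change (the surrounding fields are illustrative, not copied from bstr's Cargo.toml):

```toml
[package]
name = "bstr"
# Paths listed here are omitted from the .crate archive that
# `cargo publish` uploads.
exclude = ["src/unicode/data/*"]
```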

+1 -1

0 comment

1 changed file

pr created time in a month

create branch erickt/bstr

branch : exclude

created branch time in a month

create branch erickt/bstr

branch : exclude-test-data

created branch time in a month

issue opened ctz/hyper-rustls

Cut a 0.19.1 release?

Just to make sure my request in https://github.com/ctz/hyper-rustls/pull/100#issuecomment-574774033 doesn't get missed, would it be possible to have a 0.19.1 release made? We need this in order to update hyper-rustls in Fuchsia. Thanks so much!

created time in a month

fork erickt/bstr

A string type for Rust that is not required to be valid UTF-8.

fork in a month

issue opened BurntSushi/bstr

Adding license info

Hello @BurntSushi! Would it be possible to add some more license info?

  • In src/unicode, it appears that you've extracted some data from unicode.
  • In src/utf8.rs, you mention that code is based off of https://bjoern.hoehrmann.de/utf-8/decoder/dfa/, but don't reference what license that code falls under.

Could you add these licenses to the crate, and reference this in the Cargo.toml / readme? Thanks so much!

created time in a month

pull request comment ctz/hyper-rustls

don't depend on hyper/tcp feature by default

Awesome, thanks!

erickt

comment created time in a month

pull request comment ctz/hyper-rustls

don't depend on hyper/tcp feature by default

Thanks @lucab! Would it be possible to also get a 0.19.1 release cut for this change?

erickt

comment created time in a month

delete branch erickt/hyper-rustls

delete branch : no-required-tokio-rt

delete time in a month

issue comment heartsucker/rust-tuf

Dependency issue, untrusted = "^0.5"

Hello! We've fixed this in git. We're in the middle of a pretty radical refactor, where we've migrated to futures and made rust-tuf more compliant with the TUF-1.0 spec.

@heartsucker: I suppose we could cut a 0.3.0-alpha. I'm not sure if we want to release 0.3.0 yet, but things seem comparatively stable. We could cut a 0.3.0-alpha4 if you aren't worried about breaking compatibility with 0.3.0-alpha3.

Charles-Schleich

comment created time in a month

pull request comment heartsucker/rust-tuf

Fix typos

Thanks again @cavokz!

cavokz

comment created time in a month

push event heartsucker/rust-tuf

Domenico Andreoli

commit sha f641a4330e21b0ce848e7f54e0fcf2314ca2fa28

Fix typo in comment of TargetsMetadataBuilder::delegations()

view details

Domenico Andreoli

commit sha 59147536d7ab408b361f313545cebf63acdb2305

Fix typo in nested_delegation() test

view details

Erick Tryzelaar

commit sha 2fdd3343f5a9a26dce3e3800ed50d38397f70e4f

Merge pull request #264 from cavokz/fix-typos Fix typos

view details

push time in a month

PR merged heartsucker/rust-tuf

Fix typos
+2 -2

0 comment

2 changed files

cavokz

pr closed time in a month

pull request comment ctz/hyper-rustls

don't depend on hyper/tcp feature by default

@ctz gentle ping :)

erickt

comment created time in a month

pull request comment ctz/hyper-rustls

don't depend on hyper/tcp feature by default

@CryZe No worries! This isn't blocking us yet, I'm just doing prep work to update to hyper 0.13. Thanks for doing the upgrade!

erickt

comment created time in 2 months

pull request comment ctz/hyper-rustls

don't depend on hyper/tcp feature by default

I just checked, and it actually looks like we don't even need this feature for cargo test --no-default-features, so I removed the optional tcp feature.

erickt

comment created time in 2 months

push event erickt/hyper-rustls

Erick Tryzelaar

commit sha 0f432b31cae5023e1440e73da19394408065d9c9

don't depend on hyper/tcp feature by default On fuchsia, we don't support the mio and tokio runtime yet, and so in order to use hyper-rustls, we need to compile it with `default-features = false`. Unfortunately, in https://github.com/ctz/hyper-rustls/pull/96, @CryZe added a hard requirement for the hyper "tcp" feature, which pulls in the tokio runtime. To restore our ability to use hyper-rustls, this patch removes the "hyper/tcp" hard feature requirement.

view details

push time in 2 months

PR opened ctz/hyper-rustls

don't depend on hyper/tcp feature by default

On fuchsia, we don't support the mio and tokio runtime yet, and so in order to use hyper-rustls, we need to compile it with default-features = false. Unfortunately, in https://github.com/ctz/hyper-rustls/pull/96, @CryZe added a hard requirement for the hyper "tcp" feature, which pulls in the tokio runtime.

To restore our ability to use hyper-rustls, this patch migrates the "hyper/tcp" hard feature requirement to an optional "tcp" feature. Note that we don't need to change the default case, because the "hyper/runtime" feature already depends on "hyper/tcp".
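In Cargo terms, the change amounts to something like the following feature declarations (sketched from the PR description, not copied from hyper-rustls's manifest):

```toml
[features]
# "hyper/runtime" already implies "hyper/tcp", so the default build is unchanged.
default = ["hyper/runtime"]
# Opt-in for consumers that disable default features but still want TCP:
tcp = ["hyper/tcp"]
```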

+2 -1

0 comment

1 changed file

pr created time in 2 months

create branch erickt/hyper-rustls

branch : no-required-tokio-rt

created branch time in 2 months

issue closed ctz/hyper-rustls

hyper-rustls 0.19 is not compiling against tokio 0.2.7

It looks like the recently released tokio 0.2.7 has broken hyper-rustls:

   Compiling tokio v0.2.7
   Compiling pin-project v0.4.6
   Compiling futures-util v0.3.1
error[E0432]: unresolved import `tokio_macros::main_basic`
   --> /Users/etryzelaar/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.7/src/lib.rs:276:21
    |
276 |             pub use tokio_macros::main_basic as main;
    |                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ no `main_basic` in the root

error[E0432]: unresolved import `tokio_macros::test_basic`
   --> /Users/etryzelaar/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.7/src/lib.rs:277:21
    |
277 |             pub use tokio_macros::test_basic as test;
    |                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ no `test_basic` in the root

error: aborting due to 2 previous errors

For more information about this error, try `rustc --explain E0432`.

As best as I can tell, this should be fixed once https://github.com/tokio-rs/tokio/pull/2069 lands though.

closed time in 2 months

erickt

issue comment ctz/hyper-rustls

hyper-rustls 0.19 is not compiling against tokio 0.2.7

Thanks @carllerche! I confirmed that 0.2.8 fixes this.

erickt

comment created time in 2 months

create branch erickt/hyper-rustls

branch : fix-tokio-macros

created branch time in 2 months

issue opened ctz/hyper-rustls

hyper-rustls 0.19 is not compiling against tokio 0.2.7

It looks like the recently released tokio 0.2.7 has broken hyper-rustls:

   Compiling tokio v0.2.7
   Compiling pin-project v0.4.6
   Compiling futures-util v0.3.1
error[E0432]: unresolved import `tokio_macros::main_basic`
   --> /Users/etryzelaar/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.7/src/lib.rs:276:21
    |
276 |             pub use tokio_macros::main_basic as main;
    |                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ no `main_basic` in the root

error[E0432]: unresolved import `tokio_macros::test_basic`
   --> /Users/etryzelaar/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.7/src/lib.rs:277:21
    |
277 |             pub use tokio_macros::test_basic as test;
    |                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ no `test_basic` in the root

error: aborting due to 2 previous errors

For more information about this error, try `rustc --explain E0432`.

As best as I can tell, this should be fixed once https://github.com/tokio-rs/tokio/pull/2069 lands though.

created time in 2 months

Pull request review comment heartsucker/rust-tuf

Drop MetadataVersion::Hash, resolve #254

 where
                 Err(e) => return (delegation.terminating(), Err(e)),
             };
 
-            let version = if self.tuf.root().consistent_snapshot() {
-                MetadataVersion::Hash(value.clone())

I think we want to keep this conditional, just returning MetadataVersion::Number(role_meta.version()) when consistent snapshot is enabled.
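For background, this mirrors the TUF specification's metadata naming scheme, sketched here rather than copied from rust-tuf: with consistent snapshots enabled, roles are fetched at a version-prefixed path, so the version must come from already-verified metadata rather than from a hash.

```rust
// Sketch of TUF's metadata file naming. With consistent snapshots, metadata
// is stored as "<version>.<role>.json"; otherwise the file is just
// "<role>.json". The function and enum here are illustrative, not rust-tuf's.
enum MetadataVersion {
    None,
    Number(u32),
}

fn metadata_filename(role: &str, version: &MetadataVersion) -> String {
    match version {
        MetadataVersion::None => format!("{}.json", role),
        MetadataVersion::Number(n) => format!("{}.{}.json", n, role),
    }
}

fn main() {
    assert_eq!(metadata_filename("targets", &MetadataVersion::None), "targets.json");
    // Consistent snapshot: version prefix taken from verified snapshot metadata.
    assert_eq!(metadata_filename("targets", &MetadataVersion::Number(5)), "5.targets.json");
}
```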

cavokz

comment created time in 2 months

push event heartsucker/rust-tuf

Daniel Johnson

commit sha 2bf539801cbbaa351287971ed93ad3bf67d93ac3

Disable cargo-audit on pull requests This check is failing on https://github.com/heartsucker/rust-tuf/pull/259 , and I don't think that's the right place to be surfacing this warning. It also looks like it automatically cancels all the other workflows when this happens, which is unhelpful. It's mildly useful info to know that spin's maintainer is gone, but completely un-actionable while writing or reviewing pull requests on rust-tuf.

view details

Erick Tryzelaar

commit sha 24651b0321fac2f0de74f9ba1836fa96792da674

Merge pull request #260 from heartsucker/ComputerDruid-patch-rust-workflow Disable cargo-audit on pull requests

view details

push time in 2 months

PR merged heartsucker/rust-tuf

Reviewers
Disable cargo-audit on pull requests

This check is failing on https://github.com/heartsucker/rust-tuf/pull/259 , and I don't think that's the right place to be surfacing this warning. It also looks like it automatically cancels all the other workflows when this happens, which is unhelpful.

It's mildly useful info to know that spin's maintainer is gone, but completely un-actionable while writing or reviewing pull requests on rust-tuf.

+0 -5

0 comment

1 changed file

ComputerDruid

pr closed time in 2 months

pull request comment heartsucker/rust-tuf

Update docs for keys generation.

Thanks again!

cavokz

comment created time in 2 months

push event heartsucker/rust-tuf

Domenico Andreoli

commit sha 1ab57910e60afd2f5f96953deb080899f884f973

Update docs for keys generation. 'cargo install tuf' does not work (probably outdated). Running 'cargo run tuf' instead triggers tests/metadata/generate.rs which panics for not being able to load keys. Again, misleading at best.

view details

Erick Tryzelaar

commit sha 1a6f29f387fe78489dd890d013d9b195dddedf26

Merge pull request #255 from cavokz/update-docs-for-keys-generation Update docs for keys generation.

view details

push time in 2 months

PR merged heartsucker/rust-tuf

Update docs for keys generation.

'cargo install tuf' does not work (probably outdated).

Running 'cargo run tuf' instead triggers tests/metadata/generate.rs which panics for not being able to load keys. Again, misleading at best.

+0 -3

0 comment

1 changed file

cavokz

pr closed time in 2 months

delete branch erickt/rust-tuf

delete branch : fmt

delete time in 2 months

push event heartsucker/rust-tuf

Erick Tryzelaar

commit sha 0e1b35f8e77bb11497f6a8c36d470ea6fb0c6a59

Reformat with rustfmt 1.4.8-stable (afb1ee1 2019-09-08) Change-Id: Ied3f8fb2bc8406c028dd3d496828ac4e85bfb2da

view details

Erick Tryzelaar

commit sha 4665df1ecdc58d93b4ff933ab55d12a162dde60f

Merge pull request #258 from erickt/fmt Reformat with rustfmt 1.4.8-stable (afb1ee1 2019-09-08)

view details

push time in 2 months

delete branch erickt/rust-tuf

delete branch : pub

delete time in 2 months

push event heartsucker/rust-tuf

Erick Tryzelaar

commit sha b6f999c26563b73e0ddb969f7b6044d45b15ce63

Make PublicKey::from_ed25519_with_keyid_hash_algorithms public Change-Id: Id81eba97cb31f87d5f3be739afe1105a83cfc2ca

view details

Erick Tryzelaar

commit sha dab6f37a60a2e39191e19883993dc679943b2bdf

Merge pull request #257 from erickt/pub Make PublicKey::from_ed25519_with_keyid_hash_algorithms public

view details

push time in 2 months

delete branch erickt/rust-tuf

delete branch : hex

delete time in 2 months

push event heartsucker/rust-tuf

Erick Tryzelaar

commit sha 8a9f405210eed413f69e6aa5ea292c90585ca262

hex encode public key in Debug impl TUF-1.0 encodes public keys in hex, so outputting keys in hex makes it easier to debug issues. Change-Id: I29c195e9dfa53f08d3863e60f71f9e1fb9c29ae6

view details

Erick Tryzelaar

commit sha fe637f912bb2de7bad677d419e515f8d57e68fc5

Merge pull request #256 from erickt/hex hex encode public key in Debug impl

view details

push time in 2 months

PR merged heartsucker/rust-tuf

hex encode public key in Debug impl

TUF-1.0 encodes public keys in hex, so outputting keys in hex makes it easier to debug issues.

+1 -1

0 comment

1 changed file

erickt

pr closed time in 2 months

PR opened heartsucker/rust-tuf

Reformat with rustfmt 1.4.8-stable (afb1ee1 2019-09-08)
+5 -13

0 comment

2 changed files

pr created time in 2 months
