
djc/askama 1007

Type-safe, compiled Jinja-like templates for Rust

djc/couchdb-python 187

Python library for working with CouchDB

djc/awmy 26

arewemeetingyet.com: help communicate meeting times to timezone-challenged participants

djc/appdirs-rs 7

Rust crate for determining platform-specific application directories

djc/abna 6

Python library to automatically retrieve mutations from ABN Amro

djc/cs143-python 3

Python parser for CS143 AST serialization

djc/corda-rpc 2

Rust libraries for doing Corda RPC

djc/clang-format-find 1

Find the clang-format configuration that best fits your codebase

pull request comment djc/askama

Add format filter that swaps the first two arguments.

Should have led with that! I hadn't realized the composition advantage. How do you feel about restricting it to a single argument (so no more arguments after the format string)?

couchand

comment created time in 10 hours

PR closed djc/askama

Add format filter that swaps the first two arguments.

Swapping the first two arguments allows a more natural filter usage: {{ val | fmt("{:?}") }}.

Happy to bikeshed the name.
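For context, a rough sketch of what such a filter could look like (hypothetical code, not the actual askama implementation): askama filters receive the piped value as their first argument, so swapping the arguments puts the format string second and lets the value read naturally on the left of the pipe.

```rust
use std::fmt::Debug;

// Hypothetical sketch of the proposed filter: the piped value comes
// first, the format string second, so `{{ val | fmt("{:?}") }}` works.
// Rust's format! macro requires a literal format string, so this
// sketch only dispatches on the one spec used in the PR description.
pub fn fmt<T: Debug>(value: T, spec: &str) -> String {
    match spec {
        "{:?}" => format!("{:?}", value),
        other => panic!("unsupported format spec in this sketch: {}", other),
    }
}

fn main() {
    // Roughly equivalent to rendering `{{ vec![1, 2] | fmt("{:?}") }}`.
    println!("{}", fmt(vec![1, 2], "{:?}"));
}
```

The composition advantage mentioned in the thread is that the value stays on the left, so further filters can be chained after fmt.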

+50 -1

1 comment

4 changed files

couchand

pr closed time in 10 hours

pull request comment djc/askama

Add format filter that swaps the first two arguments.

I don't see this adding much value -- it seems more likely to be confusing. Sorry!

couchand

comment created time in 10 hours

PR opened servo/rust-url

Set up GitHub Actions CI
+35 -26

0 comments

3 changed files

pr created time in 10 hours

create branch djc/rust-url

branch : gh-actions

created branch time in 10 hours

PR opened servo/rust-url

idna maintenance and performance improvements

I started looking at the performance profile of to_unicode() and here I am, 12 hours later:

  • Updated IDNA tests to latest version
  • Updated IDNA mapping table to Unicode 13.0
  • Improved performance of single Config::to_unicode() calls by about 20%
  • Added a Codec API that amortizes some of the allocation costs over multiple to_unicode()/to_ascii() calls

I've taken care not to change any of the existing APIs, and the changes come out to fewer lines of code.

Additionally, I saw the old thread about asking for maintenance help. At this point I feel like I'm in a pretty good position to help out with idna maintenance (and potentially the other crates as well). Let me know if that'd be useful/how that could take shape.
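The amortization idea can be sketched as follows (illustrative only; this `Codec` is a stand-in for the PR's API, not the actual rust-url code): keep an owned output buffer in a struct and clear it between calls, so repeated conversions reuse the buffer's capacity instead of allocating a fresh `String` each time.

```rust
// Illustrative sketch of amortized allocation across calls; the real
// idna Codec performs actual IDNA processing, this one just copies input.
pub struct Codec {
    buf: String,
}

impl Codec {
    pub fn new() -> Self {
        Codec { buf: String::new() }
    }

    // clear() drops the contents but keeps the allocation, so calls
    // after the first mostly avoid heap traffic.
    pub fn to_unicode(&mut self, input: &str) -> &str {
        self.buf.clear();
        self.buf.push_str(input); // placeholder for real decoding
        &self.buf
    }
}

fn main() {
    let mut codec = Codec::new();
    for domain in ["example.com", "xn--nxasmq6b.example"] {
        println!("{}", codec.to_unicode(domain));
    }
}
```

This is the same pattern that makes a stateful codec cheaper than repeated one-shot `Config::to_unicode()` calls.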

+20596 -21080

0 comments

12 changed files

pr created time in 10 hours

push event djc/rust-url

Dirkjan Ochtman

commit sha e39a1cceb1f0a1336fbbe2a0610d3d743f677b7b

Update mapping table Python script for Python 3

view details

Dirkjan Ochtman

commit sha 77c31faab1af8cda680bf1ab89c45deb16a6f389

Update IDNA mapping table to Unicode 13.0

view details

Dirkjan Ochtman

commit sha a66c662901eef120f8e522c8ed0e30368e4d3116

Add simple to_unicode benchmark

view details

Dirkjan Ochtman

commit sha bd29d2915ecfe762442dcfb6c8e5ab0cd5f1dda1

Refactor mapping as an iterator API

view details

Dirkjan Ochtman

commit sha 7dd60a34c1e3b75cdba8f9f70ab1cd904ede559b

Separate bidi validation pass to simplify code

view details

Dirkjan Ochtman

commit sha 4d38a820af52535972c2a6824bace233ecee4494

Inline simple function

view details

Dirkjan Ochtman

commit sha 959c82231b3de4a61187d24fa6f120591eb0f2a0

Refactor processing() loop to limit indentation level

view details

Dirkjan Ochtman

commit sha e14b372c1df8b99956cc4cfeb1ceb13829fd2aca

Gather punycode insertions separately to avoid memcpys

view details

Dirkjan Ochtman

commit sha 48db55f2c60fb0e59f0d80cb9d11b993a62c0e1f

Separate decoding of encoded characters and merging

view details

Dirkjan Ochtman

commit sha fbfd88fe70179b44dc8c65fa274830ddff4e3425

Use iterator interface to yield characters from punycode decoder

view details

Dirkjan Ochtman

commit sha 5a321c5b2542a17e6349e9b43a3e51cd7af3238e

Add Codec API with amortized allocation

view details

push time in 10 hours

push event djc/rust-url

Dirkjan Ochtman

commit sha 12dbf8815e0f4f8d3253030b6ef6092dedc74fd9

idna: update IDNA tests to latest version

view details

Dirkjan Ochtman

commit sha 76d353bdaa6e45fd263d0320dadaf1a6251ba54a

Refactor mapping as an iterator API

view details

Dirkjan Ochtman

commit sha ad69d90efd3a0f221d820d7bf6291672d0cb518b

Separate bidi validation pass to simplify code

view details

Dirkjan Ochtman

commit sha 59db09d76f90d6f696ae2dfe6cf00234dca2d7a0

Inline simple function

view details

Dirkjan Ochtman

commit sha a34bc90e22bc509469371f2020623d15f6723305

Refactor processing() loop to limit indentation level

view details

Dirkjan Ochtman

commit sha 89c0ce587966c5d81b140600b2b15b01a7d438b5

Gather punycode insertions separately to avoid memcpys

view details

Dirkjan Ochtman

commit sha e040158938557795525d6b5891e213c4a44d77a2

Separate decoding of encoded characters and merging

view details

Dirkjan Ochtman

commit sha d7f36b3f602a379f502cfe2e03623763b9d04e7d

Use iterator interface to yield characters from punycode decoder

view details

Dirkjan Ochtman

commit sha 25a585bdc2db9fcc8619ff3a0334bb921b184e78

Add Codec API with amortized allocation

view details

Dirkjan Ochtman

commit sha 031b06a88582636bc2982dd96ef36246c8bd10a7

Update mapping table Python script for Python 3

view details

Dirkjan Ochtman

commit sha 364b5a904d0e57703bdd72560e669d713eb040d7

Update IDNA mapping table to Unicode 13.0

view details

push time in 11 hours

pull request comment bodil/smartstring

Implement std::fmt::Display

Thanks!

sfleener

comment created time in a day

create branch djc/rust-url

branch : no-alloc-idna

created branch time in a day

fork djc/rust-url

URL parser for Rust

https://docs.rs/url/

fork in a day

pull request comment servo/rust-url

Make crate about 50% lighter

I think it would maybe make more sense to flatten the repository, moving the url crate into a url directory in the repository root.

Keats

comment created time in 2 days

issue closed tokio-rs/bytes

loom dependency pulled in by default

Since #392 (by @taiki-e), it looks like loom went from a dev-dependency to a default dependency. It's guarded by cfg(loom), but I'm not sure if that's working as intended. My application now seems to be pulling in loom as a dependency of bytes even though I didn't specify any loom cfg. As far as I understand, loom should only be used for testing?

closed time in 2 days

djc

issue comment tokio-rs/bytes

loom dependency pulled in by default

Ah, sorry, I got confused by cargo update calling this out, but it doesn't look like this gets built without --cfg loom enabled. It just clutters up Cargo.lock files, which had me confused for a moment.

djc

comment created time in 2 days

push event djc/quinn

dependabot-preview[bot]

commit sha 541448f79b2cc473a318bc4b45e6025d422f5628

Update ct-logs requirement from 0.6 to 0.7

Updates the requirements on [ct-logs](https://github.com/ctz/ct-logs) to permit the latest version.

- [Release notes](https://github.com/ctz/ct-logs/releases)
- [Commits](https://github.com/ctz/ct-logs/compare/v/0.6.0...v/0.7.0)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>

view details

Dirkjan Ochtman

commit sha 5484d37c7675eecbe48e499f51fd64fff129bccc

Migrate to directories-next

directories is no longer being maintained and the final released version causes a dependency conflict (cfg-if, versus tracing).

view details

Charles Hall

commit sha 73fb0273571ee383b950e0d577098222364e1170

expose StreamId information from Send/RecvStream

If nothing else, this should make it easier to track goings-on in streams in client code since they'll be able to easily distinguish between streams via StreamId::index().

view details

Charles Hall

commit sha 66f3c04814653b11c508ffde476834d5fad74b8a

add a way to get a static id for a quic connection

Since remote IP addresses can change, it's useful to have an independent ID for a connection that is always the same. quinn_proto::ConnectionHandle provides a mechanism for this, but quinn::Connection has an Arc internally whose internal pointer can be used as a unique identifier since acquiring the ConnectionHandle would require locking a mutex.

view details

Benjamin Saunders

commit sha 5d95eef10a2e09b5d7e5c00010b0b9ec3ecd549f

Split up poll_transmit

Pull common packet construction logic out into a separate pair of functions for ease of reuse.

view details

Benjamin Saunders

commit sha bedcb428633538fad102f8c45aec70a4b816fa9a

Detect spurious migrations

Defends against off-path forwarding attacks by sending PATH_CHALLENGE on the previous path when migrating.

view details

Benjamin Saunders

commit sha df7645047af693822dffe5a9c4ac26cb082813c7

Test rejection of client connections missing a required certificate

view details

Benjamin Saunders

commit sha 7e023e3cd8a822163ceb8e70868b786bffbed886

Document state predicates

view details

Benjamin Saunders

commit sha a1e8c56cb0381097a5f0f7498ee4c0db187bfc35

Update initial salt, retry integrity key/nonce, and versions

view details

Benjamin Saunders

commit sha f2c4d35535ec8810e7a544afe1f398940a4b3d06

Clarify read_crypto logic

view details

Benjamin Saunders

commit sha 57160897fed519bc3abea2de3f158e07af9f2483

Update names to match current pseudocode

view details

Benjamin Saunders

commit sha 8ca2f04bf98d13df96e5156b3ed6371937ad3d4c

Track in-flight bytes per packet space

view details

Benjamin Saunders

commit sha 68922c07ea355027e4a8979f34c0f2820f93ab04

Draft 29 PTO logic

view details

Benjamin Saunders

commit sha 58ac32d055e4ae3e415fecbef0cd3608e1c19afe

Rename SERVER_BUSY to CONNECTION_REFUSED

view details

Benjamin Saunders

commit sha 48597c8f76814f3aa7b32e2ff854c2b50f0410d6

Use TRANSPORT_PARAMETER_ERRORs for CID authentication

view details

Alexander Jackson

commit sha e342cc974eb47bb3c2342621ebfa8706dc1c5c40

Fix some broken documentation links in quinn-proto

Fix some broken documentation links in quinn-proto and correctly direct them to the enum variant they are referencing. These were raised by `cargo doc`.

view details

Alexander Jackson

commit sha 5e62c3ffc2abaf032fda301e905e7c1154003c68

Fix some broken documentation in quinn-h3

Fix some broken documentation links in quinn-h3 and clear up some ambiguity. `SendRequest` does not yield a `SendData`, and only the `send_response()` method on `server::Sender` will return one.

view details

Benjamin Saunders

commit sha 70cf90389eaa94f17bdf45fbf344edcc3140559b

Respect initial RTT in RttEstimator

view details

Benjamin Saunders

commit sha b612577c7cf8f91325573dd1f2cdb54e62223c16

Update RTT estimation to match recovery draft

view details

Benjamin Saunders

commit sha da0943610527269124a3ede63562b0888b541578

Update PTO computation to match current recovery draft

view details

push time in 2 days

fork djc/bytes-1

Utilities for working with bytes

fork in 2 days

issue opened tokio-rs/bytes

loom dependency pulled in by default

Since #392 (by @taiki-e), it looks like loom went from a dev-dependency to a default dependency. It's guarded by cfg(loom), but I'm not sure if that's working as intended. My application now seems to be pulling in loom as a dependency of bytes even though I didn't specify any loom cfg. As far as I understand, loom should only be used for testing?

created time in 2 days

pull request comment bodil/smartstring

Implement std::fmt::Display

Was also looking for this. Are you planning on pushing this out into a release soon? :)

sfleener

comment created time in 2 days

pull request comment djc/quinn

Implement pacing

Good catch! Should be easy to fix, I think?

DemiMarie-parity

comment created time in 2 days

Pull request review comment djc/quinn

Implement pacing

+//! Pacing of packet transmissions.
+
+use std::{
+    convert::TryInto,
+    time::{Duration, Instant},
+};
+
+/// A simple token-bucket pacer. The bucket starts full and has an adjustable capacity. Once the
+/// bucket is empty, further transmission is blocked. The bucket refills at a rate determined by
+/// the congestion window and estimated round-trip time.
+pub struct Pacer {
+    capacity: u64,
+    tokens: u64,
+    last_instant: Instant,
+}
+
+impl Pacer {
+    /// Obtains a new [`Pacer`].
+    pub fn new(capacity: u64) -> Self {
+        Self {
+            capacity,
+            tokens: capacity,
+            last_instant: Instant::now(),
+        }
+    }
+
+    /// Return how long we need to wait before sending a packet.
+    ///
+    /// If we can send a packet right away, this returns `None`. Otherwise, returns `Some(d)`,
+    /// where `d` is the time before this function should be called again.
+    ///
+    /// The 5/4 ratio used here comes from the suggestion that N = 1.25 in the draft IETF RFC for
+    /// QUIC.
+    pub fn delay(
+        &mut self,
+        smoothed_rtt: Duration,
+        mtu: u16,
+        window: u64,
+        now: Instant,
+    ) -> Option<Instant> {
+        debug_assert_ne!(
+            window, 0,
+            "zero-sized congestion control window is nonsense"
+        );
+        if let Some(time_elapsed) = now.checked_duration_since(self.last_instant) {
+            self.last_instant = now;
+            let new_tokens = window.saturating_mul(5) / 4;
+            if smoothed_rtt.as_nanos() == 0 {
+                return None;
+            }
+            let elapsed = (time_elapsed.as_nanos() / smoothed_rtt.as_nanos())
+                .try_into()
+                .unwrap_or(u64::max_value());
+
+            let new_tokens = new_tokens.saturating_mul(elapsed);
+            self.tokens = self.tokens.saturating_add(new_tokens).min(self.capacity);
+        }
+
+        // we disable pacing for extremely large windows
+        if self.tokens >= mtu.into() || window > u32::max_value().into() {
+            return None;
+        }
+
+        let window = window as u32;
+        let rtt = smoothed_rtt
+            .checked_mul((mtu - self.tokens as u16).into())
+            .unwrap_or_else(|| Duration::new(u64::max_value(), 999_999_999));
+        let new_duration = ((rtt / window) / 5) * 4;
+        let new_instant = self.last_instant + new_duration;
+        Some(new_instant)

The new_ prefixes here feel a bit redundant since there aren't other variables in the method scope that they're easily confused with. Also, a number of definitions here only have a single usage site, so it feels to me like inlining them into larger expressions would make it easier to follow the code.

DemiMarie-parity

comment created time in 3 days

Pull request review comment djc/quinn

Implement pacing

 where
             sending_ecn: self.path.sending_ecn || !maybe_rebinding,
             challenge: Some(self.rng.gen()),
             challenge_pending: true,
+            pacing: pacing::Pacer::new(0),

Why does this make sense? Why not use the current congestion window from the congestion controller we're setting for the path?

DemiMarie-parity

comment created time in 3 days

Pull request review comment djc/quinn

Implement pacing

+//! Pacing of packet transmissions.
+
+use std::{
+    convert::TryInto,
+    time::{Duration, Instant},
+};
+
+/// A simple token-bucket pacer. The bucket starts full and has an adjustable capacity. Once the
+/// bucket is empty, further transmission is blocked. The bucket refills at a rate determined by
+/// the congestion window and estimated round-trip time.
+pub struct Pacer {
+    capacity: u64,
+    tokens: u64,
+    last_instant: Instant,
+}
+
+impl Pacer {
+    /// Obtains a new [`Pacer`].
+    pub fn new(capacity: u64) -> Self {
+        Self {
+            capacity,
+            tokens: capacity,
+            last_instant: Instant::now(),

Maybe just call this last or prev? The _instant suffix feels redundant given the type.

DemiMarie-parity

comment created time in 3 days

Pull request review comment djc/quinn

Implement pacing

+//! Pacing of packet transmissions.
+
+use std::{
+    convert::TryInto,
+    time::{Duration, Instant},
+};
+
+/// A simple token-bucket pacer. The bucket starts full and has an adjustable capacity. Once the
+/// bucket is empty, further transmission is blocked. The bucket refills at a rate determined by
+/// the congestion window and estimated round-trip time.
+pub struct Pacer {
+    capacity: u64,
+    tokens: u64,
+    last_instant: Instant,
+}
+
+impl Pacer {
+    /// Obtains a new [`Pacer`].
+    pub fn new(capacity: u64) -> Self {
+        Self {
+            capacity,
+            tokens: capacity,
+            last_instant: Instant::now(),
+        }
+    }
+
+    /// Return how long we need to wait before sending a packet.
+    ///
+    /// If we can send a packet right away, this returns `None`. Otherwise, returns `Some(d)`,
+    /// where `d` is the time before this function should be called again.
+    ///
+    /// The 5/4 ratio used here comes from the suggestion that N = 1.25 in the draft IETF RFC for
+    /// QUIC.
+    pub fn delay(
+        &mut self,
+        smoothed_rtt: Duration,
+        mtu: u16,
+        window: u64,
+        now: Instant,
+    ) -> Option<Instant> {
+        debug_assert_ne!(
+            window, 0,
+            "zero-sized congestion control window is nonsense"
+        );
+        if let Some(time_elapsed) = now.checked_duration_since(self.last_instant) {
+            self.last_instant = now;
+            let new_tokens = window.saturating_mul(5) / 4;

Please move the definition of new_tokens down to where it's actually used (it isn't used here when smoothed_rtt is 0), and consider collapsing it with the redefinition below.

DemiMarie-parity

comment created time in 3 days

push event djc/quinn

Dirkjan Ochtman

commit sha fff0fc7b4462ec89fa89d25f857f718e4e44c921

Move PathData and related code into separate module

view details

push time in 3 days

issue comment djc/quinn

PTO handling

Update (after #823):

fn set_key_discard_timer() {
    let duration = self.pto() * 3;
}

fn detect_lost_packets() {
    let rtt = self.path.rtt.conservative();
    let loss_delay = cmp::max(rtt.mul_f32(self.config.time_threshold), TIMER_GRANULARITY);

    let congestion_period = self.pto() * self.config.persistent_congestion_threshold;
}

fn reset_idle_timeout() {
    let dt = cmp::max(_, 3 * self.pto());
}

fn migrate() {
    let duration = 3 * cmp::max(self.pto(), 2 * self.config.initial_rtt);
}

fn set_close_timer() {
    let duration = 3 * self.pto();
}

fn pto_time_and_space() {
    let backoff = 2u32.pow(self.pto_count.min(MAX_BACKOFF_EXPONENT));
    let mut duration = self.path.rtt.pto_base() * backoff;

    if space == SpaceId::Data {
        duration += self.max_ack_delay() * backoff;
    }
}

fn pto() {
    self.path.rtt.pto_base() + self.max_ack_delay()
}

So my remaining questions I think boil down to:

  • Are there good reasons to bound the PTO by 2 * initial_rtt for migrate(), but not set_key_discard_timer(), reset_idle_timeout() or set_close_timer()?
  • pto_time_and_space() takes care to only add the max_ack_delay() to the PTO for SpaceId::Data, but set_key_discard_timer(), detect_lost_packets(), reset_idle_timeout(), migrate() and set_close_timer() use pto() directly, which always adds the max_ack_delay(). Presumably at least some of these can be called in earlier spaces. Should we add a SpaceId argument to pto()?

djc

comment created time in 3 days

Pull request review comment djc/quinn

Move PathData/RttEstimator to a paths module

 impl RttEstimator {
             .map_or(self.latest, |x| cmp::max(x, self.latest))
     }
 
+    pub fn max(&self) -> Duration {
+        self.get().max(self.latest)
+    }
+

Done.

(Actually, I guess the strongest argument for this, IMO, is that providing abstractions local to the data makes it easier to prevent divergent use of the data, of the sort you fixed in #822.)

djc

comment created time in 3 days

push event djc/quinn

Dirkjan Ochtman

commit sha 5a4d18f837373d81fda4db71ae709c538acca773

Upgrade to rustls-0.18

view details

Benjamin Saunders

commit sha 3339bc8f270df95efedbd7256a9901b57ef04ecf

Correct default initial RTT

view details

Alexander Jackson

commit sha 80653da2ca1e085439cf6497ef72934ae77ae024

Add documentation links to improve navigation

Add documentation links and references to some of the main parts of the API so that relevant structs, enums and methods can be navigated to more quickly. This reduces the need for the search bar and thus makes it easier to find related content wherever it is mentioned.

view details

Demi M. Obenour

commit sha 6018f200555c38d178c08b890342fe7d8d4334d7

Don’t try to run code coverage on PRs

It won’t work, as those builds don’t have access to the CodeCov API token.

view details

Dirkjan Ochtman

commit sha 8377fb1856de696598f573bea69960726b43d17e

Always try ECN on a new path

view details

Dirkjan Ochtman

commit sha aa852e22cc41b64f56d1545ae81a4c5fd46a85c4

More abstract method for RTT calculations

view details

Dirkjan Ochtman

commit sha 54a255807eebb4371b6cde8cbffa7a647a7071bb

Add more abstractions for PTO calculations

view details

Dirkjan Ochtman

commit sha 589eaf8b79040dc0581ef3af6126385f2b87a54e

Add PathData::new() associated function

view details

Dirkjan Ochtman

commit sha 78fee35b3c9c5bb6a74889d84ab631d23b5367ad

Abstract PathData creation for migrations

view details

Dirkjan Ochtman

commit sha 7a00a6867e9199be57822a68dfda4b70c8be21cd

Move PathData and related code into separate module

view details

push time in 3 days

issue comment rust-lang/release-team

Rust support across macOS and Windows versions

Maybe there's also a role here for platform vendors to support the Rust project in maintaining support for older releases? @rylev does MS have thoughts about how they might want to handle this going forward?

XAMPPRocky

comment created time in 3 days

PR opened jbg/tokio-postgres-rustls

Upgrade rustls to 0.18
+3 -3

0 comments

1 changed file

pr created time in 3 days

create branch djc/tokio-postgres-rustls

branch : upgrades

created branch time in 3 days

fork djc/tokio-postgres-rustls

Rustls integration for tokio-postgres

fork in 3 days

issue opened serayuzgur/crates

Show different status indicator for semver-compatible updates

First let me say that I just recently discovered your extension on Twitter and I love it. That said, I think it could be better still, so here's one little suggestion.

Is your feature request related to a problem? Please describe.
When reviewing dependency versions I'm usually hunting for semver-incompatible updates, since I use cargo update to bump all the semver-compatible ones. I noticed that your extension already shows 👍 for versions that are set to "1" for example, but it could/should also do this for crates that are set to "1.0.1", since these are semver-compatible and cargo will pull in the updates automatically. Setting it to 1.0.1 might be common if you've never tested your crate with 1.0.0, or if you know you need a bugfix from 1.0.1. (Just in case you're not aware, in Cargo.toml "1.0.0" is equivalent to "^1.0.0" rather than "=1.0.0" -- you probably always want to show semver-compatible updates in the latter case.)

Describe the solution you'd like
Right now your extension shows either 👍 or "Latest: 1.2.3" by default. However, it seems to show "Latest: 1.2.3" even if I have, say, "1.1.2" configured. It would be nice if it could distinguish between semver-incompatible updates (as in "2.0") and semver-compatible updates ("1.2.3"). The simplest way to do this would be to show 👍 for semver-compatible updates. A slightly more advanced way would be to distinguish the two cases visually (for example, "👍 1.2.3" for semver-compatible updates and "Latest: 1.2.3" for incompatible ones).
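A minimal sketch of the check being requested (my own illustration; the extension presumably has its own version handling): under Cargo's default caret semantics, a requirement like "1.0.1" accepts any 1.x.y >= 1.0.1, so an update is semver-compatible when the leftmost non-zero component is unchanged.

```rust
// Hypothetical, stdlib-only approximation of Cargo's caret rule:
// two versions are compatible if their leftmost non-zero components
// match (major, or minor when major is 0). Real code would use the
// semver crate and handle pre-releases, build metadata, etc.
fn parts(v: &str) -> Vec<u64> {
    v.split('.').map(|p| p.parse().unwrap_or(0)).collect()
}

fn is_semver_compatible(requirement: &str, latest: &str) -> bool {
    let (a, b) = (parts(requirement), parts(latest));
    if a.first() != b.first() {
        return false;
    }
    // For 0.x versions, the minor component must also match.
    if a.first() == Some(&0) && a.get(1) != b.get(1) {
        return false;
    }
    true
}

fn main() {
    println!("{}", is_semver_compatible("1.0.1", "1.2.3")); // compatible update
    println!("{}", is_semver_compatible("1.1.2", "2.0.0")); // major bump, incompatible
}
```

Note that this deliberately ignores the `=1.0.0` and `~1.0.0` requirement forms; it only models the default caret behavior discussed above.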

created time in 3 days

issue comment djc/askama

No README file on crates.io

That seems like a Cargo issue then? Maybe file an issue there.

tim77

comment created time in 3 days

pull request comment djc/quinn

Don’t try to run code coverage on PRs

Thanks!

DemiMarie-parity

comment created time in 3 days

push event djc/quinn

Demi M. Obenour

commit sha 6018f200555c38d178c08b890342fe7d8d4334d7

Don’t try to run code coverage on PRs

It won’t work, as those builds don’t have access to the CodeCov API token.

view details

push time in 3 days

PR merged djc/quinn

Don’t try to run code coverage on PRs

It won’t work, as those builds don’t have access to the CodeCov API token.

+24 -18

0 comments

2 changed files

DemiMarie-parity

pr closed time in 3 days

push event djc/quinn

Alexander Jackson

commit sha 80653da2ca1e085439cf6497ef72934ae77ae024

Add documentation links to improve navigation

Add documentation links and references to some of the main parts of the API so that relevant structs, enums and methods can be navigated to more quickly. This reduces the need for the search bar and thus makes it easier to find related content wherever it is mentioned.

view details

push time in 3 days

PR merged djc/quinn

Add documentation links to improve navigation

Add documentation links and references to some of the main parts of the API so that relevant structs, enums and methods can be navigated to more quickly. This reduces the need for the search bar and thus makes it easier to find related content wherever it is mentioned.

+122 -43

1 comment

4 changed files

alexander-jackson

pr closed time in 3 days

pull request comment djc/quinn

Add documentation links to improve navigation

Sounds great, thanks!

alexander-jackson

comment created time in 3 days

push event djc/tokio-imap

dependabot-preview[bot]

commit sha 6aa15e962e01ed3010e817ac884e049d1f1ea405

Update tokio-rustls requirement from 0.13.0 to 0.14.0

Updates the requirements on [tokio-rustls](https://github.com/tokio-rs/tls) to permit the latest version.

- [Release notes](https://github.com/tokio-rs/tls/releases)
- [Commits](https://github.com/tokio-rs/tls/commits)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>

view details

push time in 3 days

PR merged djc/tokio-imap

Update tokio-rustls requirement from 0.13.0 to 0.14.0 (dependencies)

Updates the requirements on tokio-rustls to permit the latest version. (See the full diff in the compare view: https://github.com/tokio-rs/tls/commits.)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.



+1 -1

1 comment

1 changed file

dependabot-preview[bot]

pr closed time in 3 days

push event djc/quinn

Benjamin Saunders

commit sha 3339bc8f270df95efedbd7256a9901b57ef04ecf

Correct default initial RTT

view details

push time in 3 days

PR merged djc/quinn

Correct default initial RTT

See https://github.com/quicwg/base-drafts/pull/3823.

+1 -1

1 comment

1 changed file

Ralith

pr closed time in 3 days

push event djc/quinn

Dirkjan Ochtman

commit sha 5a4d18f837373d81fda4db71ae709c538acca773

Upgrade to rustls-0.18

view details

push time in 3 days

PR merged djc/quinn

Upgrade to rustls-0.18

Still needs a rustls-native-certs update.

+8 -11

2 comments

6 changed files

djc

pr closed time in 3 days

pull request comment djc/quinn

Move PathData/RttEstimator to a paths module

(Also, #822 was actually necessary to make the abstractions line up here, so no stepping on toes at all...)

djc

comment created time in 3 days

Pull request review comment djc/quinn

Move PathData/RttEstimator to a paths module

 impl RttEstimator {
             .map_or(self.latest, |x| cmp::max(x, self.latest))
     }
 
+    pub fn max(&self) -> Duration {
+        self.get().max(self.latest)
+    }
+

I think you previously argued in favor of accessors for reasons of keeping fields private? I think the same applies here, plus there's actually calculation going on here, so we're not just abstracting over the field. I'm open to a better name; all I could come up with was something like max_smoothed_or_latest() or some such, which seemed wordy and still not that clear.

djc

comment created time in 3 days

Pull request review comment djc/quinn

Move PathData/RttEstimator to a paths module

 impl RttEstimator {
         }
     }
 
-    /// Get current smoothed RTT estimate
-    fn get(&self) -> Duration {
-        self.smoothed.unwrap_or(self.latest)
+    pub fn get(&self) -> Duration {
+        self.smoothed
+            .map_or(self.latest, |x| cmp::max(x, self.latest))

Ah, that was an unintentional conflict resolution error. Fixed now.

djc

comment created time in 3 days

push event djc/quinn

Dirkjan Ochtman

commit sha 03b919e30ec5a7ecdb31724743addd827116b71c

More abstract method for RTT calculations

view details

Dirkjan Ochtman

commit sha 532e34a3322514b6c714f198fc4cd75e249188fc

Add more abstractions for PTO calculations

view details

Dirkjan Ochtman

commit sha f47e308a04e735a843677978e71c9fca95bd6ade

Add PathData::new() associated function

view details

Dirkjan Ochtman

commit sha ca8b8d157c6aeb03c42469e7af1fbe40b6e403c6

Abstract PathData creation for migrations

view details

Dirkjan Ochtman

commit sha a427c5c30b0b5179a0eeb05670fceaca89b31c15

Move PathData and related code into separate module

view details

push time in 3 days

push event djc/quinn

Dirkjan Ochtman

commit sha fdc2bce0005e6134698baf49a9596ea22b387dad

Upgrade to rustls-0.18

view details

push time in 3 days

PR opened djc/quinn

Move PathData/RttEstimator to a paths module

The refactoring parts of #804, split out so we can get this out of the way.

+125 -98

0 comments

2 changed files

pr created time in 3 days

create branch djc/quinn

branch : paths

created branch time in 3 days

Pull request review comment djc/quinn

H3: accept goaway from client

 impl ConnectionInner {
          self.reset_waker(cx);
 
-        Ok(self.inner.is_closing() && self.inner.requests_in_flight() == 0)
+        if self.inner.is_closing() && self.inner.requests_in_flight() == 0 {

Missed this the first time around, but maybe consider hoisting the Ok() out and moving the DriveState::Running into an else branch?
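A sketch of that shape on a stand-in function (the signature and the `()` error type are invented for illustration; the real method lives on quinn's ConnectionInner):

```rust
/// Stand-in for the driver's result type; DriveState comes from the PR.
#[derive(Debug, PartialEq)]
enum DriveState {
    Running,
    Closed,
}

/// The suggested shape: Ok() hoisted out, running case in the else branch.
fn poll_drive(is_closing: bool, requests_in_flight: usize) -> Result<DriveState, ()> {
    Ok(if is_closing && requests_in_flight == 0 {
        DriveState::Closed
    } else {
        DriveState::Running
    })
}
```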

stammw

comment created time in 3 days

push event djc/quinn

Benjamin Saunders

commit sha 70cf90389eaa94f17bdf45fbf344edcc3140559b

Respect initial RTT in RttEstimator

view details

Benjamin Saunders

commit sha b612577c7cf8f91325573dd1f2cdb54e62223c16

Update RTT estimation to match recovery draft

view details

Benjamin Saunders

commit sha da0943610527269124a3ede63562b0888b541578

Update PTO computation to match current recovery draft

view details

Benjamin Saunders

commit sha 81e6c26c687e2526f2c1738919e14651f1d46e3a

Deduplicate RTT fallback logic

view details

push time in 3 days

PR merged djc/quinn

RTT/PTO debugging

We had a number of subtle issues and/or cases of out of date logic.

+26 -26

1 comment

1 changed file

Ralith

pr closed time in 3 days

pull request comment ctz/rustls-native-certs

Bump rustls to 0.18

Nice!

Keruspe

comment created time in 4 days

pull request comment djc/quinn

Upgrade to rustls-0.18

Good call, done.

djc

comment created time in 4 days

push event djc/quinn

Dirkjan Ochtman

commit sha 9ba43e1eb478c891768d597d0457c54df5eb0049

Upgrade to rustls-0.18

view details

push time in 4 days

pull request comment djc/topfew-rs

Split fields without regular expressions

@HeroicKatora thanks, I'll see if that helps at the macro level.

djc

comment created time in 4 days

PR opened djc/quinn

Upgrade to rustls-0.18

Still needs a rustls-native-certs update.

+5 -5

0 comment

5 changed files

pr created time in 4 days

create branch djc/quinn

branch : rustls-0.18

created branch time in 4 days

issue comment djc/quinn

QPACK: gather all encoder tracking data into a type

Would you like me to take care of this one? Sounds like the kind of thing I'd enjoy.

stammw

comment created time in 4 days

pull request comment djc/quinn

Initial support for PLPMTUD

I think it'd be good to land the refactoring parts of this, but for the new abstractions I'd prefer discussion of the questions raised in #815 before we land them.

djc

comment created time in 4 days

Pull request review comment djc/quinn

H3: accept goaway from client

 impl ConnectionRef {
     }
 }
+type ConnectionEnd = bool;

I don't see this adding much value, since it doesn't provide type safety. Maybe just add some comments to the drive() function instead, or make it an actual enum? (I've done the latter in work code and I did feel it made the code clearer -- and it should be basically free from a performance perspective.)
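A rough sketch of the enum alternative (variant names invented for illustration) -- unlike a bool alias, it forces each use site to spell out what the value means:

```rust
/// Illustrative replacement for `type ConnectionEnd = bool;`.
#[derive(Debug, PartialEq, Clone, Copy)]
enum ConnectionEnd {
    Ended,
    StillRunning,
}

fn connection_end(is_closing: bool, requests_in_flight: usize) -> ConnectionEnd {
    if is_closing && requests_in_flight == 0 {
        ConnectionEnd::Ended
    } else {
        ConnectionEnd::StillRunning
    }
}
```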

stammw

comment created time in 4 days

pull request comment ctz/rustls-native-certs

Bump rustls to 0.18

Yeah, looks like a Windows-only failure that's also happening on master.

Keruspe

comment created time in 4 days

Pull request review comment djc/quinn

Implement pacing

+//! Pacing of packet transmissions.++use std::{+    convert::TryInto,+    time::{Duration, Instant},+};++/// A simple token-bucket pacer. The bucket starts full and has an adjustable capacity. Once the+/// bucket is empty, further transmission is blocked. The bucket refills at a rate determined by+/// the congestion window and estimated round-trip time.+pub struct Pacer {+    capacity: u64,+    tokens: u64,+    last_instant: Instant,+}++impl Pacer {+    /// Obtains a new [`Pacer`].+    pub fn new(capacity: u64) -> Self {+        Self {+            capacity,+            tokens: capacity,+            last_instant: Instant::now(),+        }+    }++    /// Return how long we need to wait before sending a packet.+    ///+    /// If we can send a packet right away, this returns [`None`]. Otherwise, returns `Some(d)`,+    /// where `d` is the time before this function should be called again.+    ///+    /// We panic if the passed-in instant is older than `self.last_instant`.+    pub fn delay(+        &mut self,+        smoothed_rtt: Duration,+        mtu: u16,+        window: u64,+        mut now: Instant,+    ) -> Option<Instant> {+        if let Some(time_elapsed) = now.checked_duration_since(self.last_instant) {+            self.last_instant = now;+            let new_tokens = window.saturating_mul(5) / 4;

The 5 / 4 ratio used twice here could use some comments/documentation on why/how that's chosen.
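For reference, the refill arithmetic that the ratio feeds into, pulled out as a standalone helper (a sketch for illustration; the real code inlines this and the window values below are made up):

```rust
/// Tokens gained per elapsed RTT, with the 25% headroom factor (5/4)
/// from the diff under review.
fn new_tokens(window: u64, elapsed_rtts: u64) -> u64 {
    (window.saturating_mul(5) / 4).saturating_mul(elapsed_rtts)
}
```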

DemiMarie-parity

comment created time in 4 days

Pull request review comment djc/quinn

Implement pacing

+//! Pacing of packet transmissions.++use std::{+    convert::TryInto,+    time::{Duration, Instant},+};++/// A simple token-bucket pacer. The bucket starts full and has an adjustable capacity. Once the+/// bucket is empty, further transmission is blocked. The bucket refills at a rate determined by+/// the congestion window and estimated round-trip time.+pub struct Pacer {+    capacity: u64,+    tokens: u64,+    last_instant: Instant,+}++impl Pacer {+    /// Obtains a new [`Pacer`].+    pub fn new(capacity: u64) -> Self {+        Self {+            capacity,+            tokens: capacity,+            last_instant: Instant::now(),+        }+    }++    /// Return how long we need to wait before sending a packet.+    ///+    /// If we can send a packet right away, this returns [`None`]. Otherwise, returns `Some(d)`,
    /// If we can send a packet right away, this returns `None`. Otherwise, returns `Some(d)`,
DemiMarie-parity

comment created time in 4 days

Pull request review comment djc/quinn

Implement pacing

+//! Pacing of packet transmissions.++use std::{+    convert::TryInto,+    time::{Duration, Instant},+};++/// A simple token-bucket pacer. The bucket starts full and has an adjustable capacity. Once the+/// bucket is empty, further transmission is blocked. The bucket refills at a rate determined by+/// the congestion window and estimated round-trip time.+pub struct Pacer {+    capacity: u64,+    tokens: u64,+    last_instant: Instant,+}++impl Pacer {+    /// Obtains a new [`Pacer`].+    pub fn new(capacity: u64) -> Self {+        Self {+            capacity,+            tokens: capacity,+            last_instant: Instant::now(),+        }+    }++    /// Return how long we need to wait before sending a packet.+    ///+    /// If we can send a packet right away, this returns [`None`]. Otherwise, returns `Some(d)`,+    /// where `d` is the time before this function should be called again.+    ///+    /// We panic if the passed-in instant is older than `self.last_instant`.+    pub fn delay(+        &mut self,+        smoothed_rtt: Duration,+        mtu: u16,+        window: u64,+        mut now: Instant,+    ) -> Option<Instant> {+        if let Some(time_elapsed) = now.checked_duration_since(self.last_instant) {+            self.last_instant = now;+            let new_tokens = window.saturating_mul(5) / 4;+            if smoothed_rtt.as_nanos() == 0 {+                return None;+            }+            let elapsed = (time_elapsed.as_nanos() / smoothed_rtt.as_nanos())+                .try_into()+                .unwrap_or(u64::max_value());++            let new_tokens = new_tokens.saturating_mul(elapsed);+            self.tokens = self.tokens.saturating_add(new_tokens).min(self.capacity);+        } else {+            now = self.last_instant+        }++        if window > u32::max_value().into() {+            // we disable pacing for extremely large windows+            return None;+        }++        let window = window as u32;+        if self.tokens > mtu.into() {+  
          None+        } else {+            let rtt = smoothed_rtt+                .checked_mul((mtu - self.tokens as u16).into())+                .unwrap_or_else(|| Duration::new(u64::max_value(), 999_999_999));+            let new_duration: Duration = rtt / window / 5 * 4;+            let new_instant: Instant = now + new_duration;+            Some(new_instant)+        }+    }+}

Probably add some unit tests here to prove that the basic functionality works correctly?
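A sketch of what such a test could pin down, written against a stripped-down token bucket rather than the PR's actual Pacer (all names here are made up):

```rust
/// Stripped-down token bucket, for illustrating the test shape only.
struct Bucket {
    capacity: u64,
    tokens: u64,
}

impl Bucket {
    /// Add tokens, saturating at the configured capacity.
    fn refill(&mut self, new_tokens: u64) {
        self.tokens = self.tokens.saturating_add(new_tokens).min(self.capacity);
    }

    /// Spend `cost` tokens if available; report whether the send may proceed.
    fn try_send(&mut self, cost: u64) -> bool {
        if self.tokens >= cost {
            self.tokens -= cost;
            true
        } else {
            false
        }
    }
}
```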

DemiMarie-parity

comment created time in 4 days

Pull request review comment djc/quinn

Implement pacing

+//! Pacing of packet transmissions.++use std::{+    convert::TryInto,+    time::{Duration, Instant},+};++/// A simple token-bucket pacer. The bucket starts full and has an adjustable capacity. Once the+/// bucket is empty, further transmission is blocked. The bucket refills at a rate determined by+/// the congestion window and estimated round-trip time.+pub struct Pacer {+    capacity: u64,+    tokens: u64,+    last_instant: Instant,+}++impl Pacer {+    /// Obtains a new [`Pacer`].+    pub fn new(capacity: u64) -> Self {+        Self {+            capacity,+            tokens: capacity,+            last_instant: Instant::now(),+        }+    }++    /// Return how long we need to wait before sending a packet.+    ///+    /// If we can send a packet right away, this returns [`None`]. Otherwise, returns `Some(d)`,+    /// where `d` is the time before this function should be called again.+    ///+    /// We panic if the passed-in instant is older than `self.last_instant`.

Since the instant comes from the user (indirectly), I'm not sure panicking is the right approach here. Is there some other way to respond here?
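One non-panicking alternative, assuming the clamp-to-last-instant behavior the diff already uses for the refill branch is acceptable here too (a sketch, not the final code):

```rust
use std::time::{Duration, Instant};

/// Never let `now` run backwards past `last`: returns the elapsed time
/// (zero if the caller handed us a stale instant) and the clamped instant.
fn clamp_forward(last: Instant, now: Instant) -> (Duration, Instant) {
    match now.checked_duration_since(last) {
        Some(elapsed) => (elapsed, now),
        None => (Duration::ZERO, last),
    }
}
```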

DemiMarie-parity

comment created time in 4 days

Pull request review comment djc/quinn

Implement pacing

+//! Pacing of packet transmissions.++use std::{+    convert::TryInto,+    time::{Duration, Instant},+};++/// A simple token-bucket pacer. The bucket starts full and has an adjustable capacity. Once the+/// bucket is empty, further transmission is blocked. The bucket refills at a rate determined by+/// the congestion window and estimated round-trip time.+pub struct Pacer {+    capacity: u64,+    tokens: u64,+    last_instant: Instant,+}++impl Pacer {+    /// Obtains a new [`Pacer`].+    pub fn new(capacity: u64) -> Self {+        Self {+            capacity,+            tokens: capacity,+            last_instant: Instant::now(),+        }+    }++    /// Return how long we need to wait before sending a packet.+    ///+    /// If we can send a packet right away, this returns [`None`]. Otherwise, returns `Some(d)`,+    /// where `d` is the time before this function should be called again.+    ///+    /// We panic if the passed-in instant is older than `self.last_instant`.+    pub fn delay(+        &mut self,+        smoothed_rtt: Duration,+        mtu: u16,+        window: u64,+        mut now: Instant,

I'd avoid mutating now here and instead bind the calculated value to a semantically meaningful name.

DemiMarie-parity

comment created time in 4 days

Pull request review comment djc/quinn

Implement pacing

+//! Pacing of packet transmissions.++use std::{+    convert::TryInto,+    time::{Duration, Instant},+};++/// A simple token-bucket pacer. The bucket starts full and has an adjustable capacity. Once the+/// bucket is empty, further transmission is blocked. The bucket refills at a rate determined by+/// the congestion window and estimated round-trip time.+pub struct Pacer {+    capacity: u64,+    tokens: u64,+    last_instant: Instant,+}++impl Pacer {+    /// Obtains a new [`Pacer`].+    pub fn new(capacity: u64) -> Self {+        Self {+            capacity,+            tokens: capacity,+            last_instant: Instant::now(),+        }+    }++    /// Return how long we need to wait before sending a packet.+    ///+    /// If we can send a packet right away, this returns [`None`]. Otherwise, returns `Some(d)`,+    /// where `d` is the time before this function should be called again.+    ///+    /// We panic if the passed-in instant is older than `self.last_instant`.+    pub fn delay(+        &mut self,+        smoothed_rtt: Duration,+        mtu: u16,+        window: u64,+        mut now: Instant,+    ) -> Option<Instant> {+        if let Some(time_elapsed) = now.checked_duration_since(self.last_instant) {+            self.last_instant = now;+            let new_tokens = window.saturating_mul(5) / 4;+            if smoothed_rtt.as_nanos() == 0 {+                return None;+            }+            let elapsed = (time_elapsed.as_nanos() / smoothed_rtt.as_nanos())+                .try_into()+                .unwrap_or(u64::max_value());++            let new_tokens = new_tokens.saturating_mul(elapsed);+            self.tokens = self.tokens.saturating_add(new_tokens).min(self.capacity);+        } else {+            now = self.last_instant+        }++        if window > u32::max_value().into() {+            // we disable pacing for extremely large windows+            return None;+        }++        let window = window as u32;+        if self.tokens > mtu.into() {

I think turning this into an early return would be a bit nicer. Maybe move that directly after the assignment to self.tokens?

DemiMarie-parity

comment created time in 4 days

Pull request review comment djc/quinn

Implement pacing

+//! Pacing of packet transmissions.++use std::{+    convert::TryInto,+    time::{Duration, Instant},+};++/// A simple token-bucket pacer. The bucket starts full and has an adjustable capacity. Once the+/// bucket is empty, further transmission is blocked. The bucket refills at a rate determined by+/// the congestion window and estimated round-trip time.+pub struct Pacer {+    capacity: u64,+    tokens: u64,+    last_instant: Instant,+}++impl Pacer {+    /// Obtains a new [`Pacer`].+    pub fn new(capacity: u64) -> Self {+        Self {+            capacity,+            tokens: capacity,+            last_instant: Instant::now(),+        }+    }++    /// Return how long we need to wait before sending a packet.+    ///+    /// If we can send a packet right away, this returns [`None`]. Otherwise, returns `Some(d)`,+    /// where `d` is the time before this function should be called again.+    ///+    /// We panic if the passed-in instant is older than `self.last_instant`.+    pub fn delay(+        &mut self,+        smoothed_rtt: Duration,+        mtu: u16,+        window: u64,+        mut now: Instant,+    ) -> Option<Instant> {+        if let Some(time_elapsed) = now.checked_duration_since(self.last_instant) {+            self.last_instant = now;+            let new_tokens = window.saturating_mul(5) / 4;+            if smoothed_rtt.as_nanos() == 0 {+                return None;+            }+            let elapsed = (time_elapsed.as_nanos() / smoothed_rtt.as_nanos())+                .try_into()+                .unwrap_or(u64::max_value());++            let new_tokens = new_tokens.saturating_mul(elapsed);+            self.tokens = self.tokens.saturating_add(new_tokens).min(self.capacity);+        } else {+            now = self.last_instant+        }++        if window > u32::max_value().into() {+            // we disable pacing for extremely large windows+            return None;+        }++        let window = window as u32;+        if self.tokens > mtu.into() {+  
          None+        } else {+            let rtt = smoothed_rtt+                .checked_mul((mtu - self.tokens as u16).into())+                .unwrap_or_else(|| Duration::new(u64::max_value(), 999_999_999));+            let new_duration: Duration = rtt / window / 5 * 4;

Presumably we could do without type annotations here (and on the next line), in which case we should. Also, I'd prefer using parentheses in this expression to make the grouping clearer (and invert one of the operators).
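On making the grouping explicit: with integer (or Duration) arithmetic, the order of the divide and multiply matters because each division truncates. A small illustration with plain integers (values made up):

```rust
// Chained form, as in the diff: parses as (((rtt_ns / window) / 5) * 4).
fn chained(rtt_ns: u64, window: u64) -> u64 {
    rtt_ns / window / 5 * 4
}

// Grouped form with the operators inverted: multiply before the final
// divide, which loses less to truncation.
fn grouped(rtt_ns: u64, window: u64) -> u64 {
    (rtt_ns / window) * 4 / 5
}
```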

DemiMarie-parity

comment created time in 4 days

Pull request review comment djc/quinn

Implement pacing

+//! Pacing of packet transmissions.++use std::{+    convert::TryInto,+    time::{Duration, Instant},+};++/// A simple token-bucket pacer. The bucket starts full and has an adjustable capacity. Once the+/// bucket is empty, further transmission is blocked. The bucket refills at a rate determined by+/// the congestion window and estimated round-trip time.+pub struct Pacer {+    capacity: u64,+    tokens: u64,+    last_instant: Instant,+}++impl Pacer {+    /// Obtains a new [`Pacer`].+    pub fn new(capacity: u64) -> Self {+        Self {+            capacity,+            tokens: capacity,+            last_instant: Instant::now(),+        }+    }++    /// Return how long we need to wait before sending a packet.+    ///+    /// If we can send a packet right away, this returns [`None`]. Otherwise, returns `Some(d)`,+    /// where `d` is the time before this function should be called again.+    ///+    /// We panic if the passed-in instant is older than `self.last_instant`.+    pub fn delay(+        &mut self,+        smoothed_rtt: Duration,+        mtu: u16,+        window: u64,+        mut now: Instant,+    ) -> Option<Instant> {+        if let Some(time_elapsed) = now.checked_duration_since(self.last_instant) {+            self.last_instant = now;+            let new_tokens = window.saturating_mul(5) / 4;+            if smoothed_rtt.as_nanos() == 0 {+                return None;+            }+            let elapsed = (time_elapsed.as_nanos() / smoothed_rtt.as_nanos())+                .try_into()+                .unwrap_or(u64::max_value());++            let new_tokens = new_tokens.saturating_mul(elapsed);+            self.tokens = self.tokens.saturating_add(new_tokens).min(self.capacity);+        } else {+            now = self.last_instant+        }++        if window > u32::max_value().into() {

Can this check come first, so we don't do the work above for no reason?

DemiMarie-parity

comment created time in 4 days

Pull request review comment djc/quinn

Implement pacing

 use crate::{

 mod assembler;
+mod pacing;

Looks like this module isn't actually included in this commit?

DemiMarie-parity

comment created time in 4 days

Pull request review comment djc/quinn

Implement pacing

+//! Pacing of packet transmissions.

Let's squash this commit into the other one.

DemiMarie-parity

comment created time in 4 days

pull request comment djc/quinn

Implement pacing

The RttEstimator move conflicts with my work in #804 and seems unrelated, let's skip that for now?

DemiMarie-parity

comment created time in 4 days

pull request comment ctz/rustls-native-certs

Bump rustls to 0.18

Presumably we'll want the CI failures resolved before publishing the next release.

Keruspe

comment created time in 4 days

issue comment djc/quinn

0-RTT is sometimes unexpectedly rejected

The stateless cryptographic scheme would require the user to manage the keys, right? Unless you perhaps reuse some derivative of the certificate's private key, in which case stuff would invalidate if you rotate the certificate.

Ralith

comment created time in 6 days

create branch djc/acme-proto

branch : master

created branch time in 7 days

created repository djc/acme-proto

created time in 7 days

pull request comment briansmith/ring

Add hkdf::Prk::expand_to_secret()

I've updated the hkdf_tests() to use expand_to_secret() directly. Since hkdf_output_len_tests() still exercises the separate expand().into() idiom, this seems like a decent way to get coverage for this.

djc

comment created time in 7 days

push event djc/ring

David Benjamin

commit sha d59682c4274baaaed8b0818a1fa13bfd502cafc8

Fix runner tests with Go 1.13. Go 1.13 will add Ed25519 support to the standard library. Switch the order of our vendored Ed25519 bits so we do not get mixed up by this. When Go 1.13 is released, we can then unwind all this in favor of the standard library version. Update-Note: See b/135634259 Change-Id: Iddc0ea58db5b2181cecacfcdd3cc058159271787 Reviewed-on: https://boringssl-review.googlesource.com/c/boringssl/+/36504 Reviewed-by: Adam Langley <agl@google.com>

view details

Watson Ladd

commit sha 629f321ffd98992febd751526dd1c06eff5f921a

Add an API to record use of delegated credential Change-Id: Ie964dee5ff9f8c6d43208dd1d3947d9b427ea27d Reviewed-on: https://boringssl-review.googlesource.com/c/boringssl/+/36424 Commit-Queue: David Benjamin <davidben@google.com> Reviewed-by: David Benjamin <davidben@google.com>

view details

Nick Harper

commit sha 7198a233689e250eb36d3d8e96b48b4cafb1be0a

Clarify language about default SSL_CTX session ticket key behavior. Change-Id: I8017a99ed99562b48a44d09da6a9338f1de9078f Reviewed-on: https://boringssl-review.googlesource.com/c/boringssl/+/36524 Reviewed-by: David Benjamin <davidben@google.com> Commit-Queue: David Benjamin <davidben@google.com>

view details

Adam Langley

commit sha cfcb0060e8b8fba92d275fa4ac27d369890ea9bf

Emit empty signerInfos in PKCS#7 bundles. This is our bug that we've had since the beginning of PKCS#7 writing support in eeb9f491: the empty signerInfos SET wasn't emitted. Some parsers, including OpenSSL, don't like this but it appears to have taken five years for anyone to notice. This change does not make parsing strict so that we continue to parse old messages that we may have produced. (As ever, PKCS#* should not be used expect where absolutely required for interoperability.) Bug: b:135982177 Change-Id: Ia7241de69f105657bdfb5ff75e909deae71748a0 Reviewed-on: https://boringssl-review.googlesource.com/c/boringssl/+/36564 Commit-Queue: David Benjamin <davidben@google.com> Reviewed-by: David Benjamin <davidben@google.com>

view details

Steven Valdez

commit sha d6f9c359d219055a89c676cb8886421b145a08da

Factor out TLS cipher selection to ssl_choose_tls_cipher. This is factored out since ESNI will need to do its own cipher selection. Bug: 275 Change-Id: Id87fd91272fbcd9098b3f2a9caa78a2129b154b5 Reviewed-on: https://boringssl-review.googlesource.com/c/boringssl/+/36544 Commit-Queue: Steven Valdez <svaldez@google.com> Reviewed-by: David Benjamin <davidben@google.com>

view details

Adam Langley

commit sha 60cc4d4b4ee3ccaa530b1eeb29730efb28b5120b

Move fipstools/ to util/fipstools/cavp We have two “fipstools” directories, which is silly. Unify them into one by moving CAVP stuff into a subdirectory of util/fipstools. Change-Id: Ibeaa2205c58699f3d042445bfa6a6576a762da6f Reviewed-on: https://boringssl-review.googlesource.com/c/boringssl/+/36624 Commit-Queue: David Benjamin <davidben@google.com> Reviewed-by: David Benjamin <davidben@google.com>

view details

Yun Liu

commit sha 3f98fde5addf628ede098491851c9122597fae5b

Add android_sdk checkout Bug: chromium:428426 Change-Id: I12c2969fe8b37a604b14300433f3e3f09aeb24e6 Reviewed-on: https://boringssl-review.googlesource.com/c/boringssl/+/36584 Reviewed-by: David Benjamin <davidben@google.com> Commit-Queue: David Benjamin <davidben@google.com>

view details

Adam Langley

commit sha 0086bd65c401e11605f9f92a95a4aa52d3bc9b04

Support key wrap with padding in CAVP. Change-Id: I27a282ee2b11083a1137990b00a9d599dd1f48df Reviewed-on: https://boringssl-review.googlesource.com/c/boringssl/+/36625 Reviewed-by: David Benjamin <davidben@google.com> Commit-Queue: Adam Langley <agl@google.com>

view details

Yun Liu

commit sha 365b7a0fcbf273b1fa704d151059e419abd6cfb8

Remove android_tools checkout Remove it when recipe change https://chromium-review.googlesource.com/c/chromium/tools/build/+/1685789 checked in and works as expected. Bug: chromium:428426 Change-Id: I649ba7f4bd003101c71d07faad2a0d1e957cb97e Reviewed-on: https://boringssl-review.googlesource.com/c/boringssl/+/36626 Reviewed-by: David Benjamin <davidben@google.com> Commit-Queue: David Benjamin <davidben@google.com>

view details

Adam Langley

commit sha 09050cb498336655883157c6e6055db9e5542857

Add SipHash-2-4. The added code is a one-shot function. A handful of instructions could be saved by having a context object for repeated use of the same key, but perhaps it's not needed. Selected the 2-4 variant to implement because it seems to be overwhelmingly the most commonly used. Change-Id: I1e4f699f7dd5a2d35e12245fa116bafbd3439979 Reviewed-on: https://boringssl-review.googlesource.com/c/boringssl/+/36664 Commit-Queue: Adam Langley <agl@google.com> Reviewed-by: David Benjamin <davidben@google.com>

view details

Kris Kwiatkowski

commit sha 3c8ae0fd3ea5e390770ed67a9dd85b26a3854ab7

Implements SIKE/p434 * CECPQ2b will use SIKE/p434 instead of SIKE/p503 * KEM uses SHA256 instead of HMAC-256 * implements new starting curve: y^2=x^3 + 6x^2 + x * adds optimized implementation for aarch64 * adds optimized implementation for AMD64 which do not support MULX/ADOX/ADCX * syncs the SIKE test code with the NIST Round 2 specification. * removes references to field size from variables names, tests and defines. Change-Id: I5359c6c62ad342354c6d337f7ee525158586ec93 Reviewed-on: https://boringssl-review.googlesource.com/c/boringssl/+/36704 Reviewed-by: Adam Langley <agl@google.com>

view details

Adam Langley

commit sha b7f0c1b4d3e7a630fab74e64a7a181defdff918e

Add initial draft of ACVP tool. ACVP will be the replacement for CAVP. CAVP is the FIPS 140 test-vector program. This commit contains some very rough support for ACVP. Currently it only supports hash functions and it's not hard to hit corner cases, but it's enough of a framework to work from. Change-Id: Ifcde18ac560710e252220282acd66d08e7507262 Reviewed-on: https://boringssl-review.googlesource.com/c/boringssl/+/36644 Commit-Queue: Adam Langley <agl@google.com> Reviewed-by: David Benjamin <davidben@google.com>

view details

Adam Langley

commit sha 0fc4979ddc54eaa28b62af3cbf72ac870444bc52

Fix shim error message endings. A few fprintfs were missing newlines at the end of the message. A few more were missing periods. This change makes them all consistent. Change-Id: Ib275a9543414f34a7bee5bb9ec3cba37c9ec3cf8 Reviewed-on: https://boringssl-review.googlesource.com/c/boringssl/+/36724 Commit-Queue: David Benjamin <davidben@google.com> Reviewed-by: David Benjamin <davidben@google.com>

view details

Adam Langley

commit sha a86c69888b9a416f5249aacb4690a765be064969

Add post-quantum experiment signal extension. When testing HRSS-SXY and SIKE, we also want a control group. However, how are clients to indicate that they're part of the 1/3 of the experiment population that's not advertising CECPQ? And how are servers to indicate that they would have negotiated CECPQ2 / 2b if only the client had asked? This change adds a temporary signaling extension to solve these issues. Change-Id: Ic087a09149ef10141568b734396981ae97950a9b Reviewed-on: https://boringssl-review.googlesource.com/c/boringssl/+/36725 Reviewed-by: David Benjamin <davidben@google.com>

view details

Adam Langley

commit sha 1a3178cf028b1fd58b7bd2c322678d39df91c218

Rename SIKE's params.c. We already have crypto/dh/params.c and some of our downstream consumers cannot take two source files with the same name in the same build target. Change-Id: I324ace094c2215b443e98fc9ae69876ea1929efa Reviewed-on: https://boringssl-review.googlesource.com/c/boringssl/+/36744 Reviewed-by: Adam Langley <agl@google.com> Commit-Queue: Adam Langley <agl@google.com>

view details

Adam Langley

commit sha 07432f325d6a388fe6d4881e84b076610c961f05

Prefix all the SIKE symbols. I should have noticed this previously, but the SIKE code was exporting symbols called generic things like “params”. They're not dynamically exported, but BoringSSL is often statically linked so better to ensure that these things are prefixed to avoid the risk of collisions. Change-Id: I3a942dbc8f4eab703d5f1d6898f67513fd7b578c Reviewed-on: https://boringssl-review.googlesource.com/c/boringssl/+/36745 Commit-Queue: Adam Langley <agl@google.com> Commit-Queue: David Benjamin <davidben@google.com> Reviewed-by: David Benjamin <davidben@google.com>

view details

David Benjamin

commit sha 66e106026aa6efb7e7266d5fd36ef27f30757d43

Align TLS 1.3 cipher suite names with OpenSSL. There are two naming conventions for TLS cipher suites, the standard IETF names (SSL_CIPHER_standard_name) and the ad-hoc OpenSSL names (SSL_CIPHER_get_name). When we added TLS 1.3, we had to come up with OpenSSL-style names for the cipher suites. OpenSSL-style names use hyphens rather than underscores (and omit underscores in odd places), so the natural name for TLS_AES_128_GCM_SHA256 would have been "AES128-GCM-SHA256". However, that name is already taken by TLS_RSA_WITH_AES_128_GCM_SHA256 because OpenSSL's naming convention treats the legacy RSA key exchange as default. Instead, we used an "AEAD-" prefix to indicate the ciphers only specified the AEAD. Since then, OpenSSL has implemented TLS 1.3. Instead, they simply made the OpenSSL-style name match the standard name starting TLS 1.3, underscores and all. (This is why openssl s_client will return very different-looking cipher names in TLS 1.2 and TLS 1.3.) Align with OpenSSL and do the same. Update-Note: SSL_CIPHER_get_name will return different values for TLS 1.3 ciphers than before. Note that we did not allow TLS 1.3 ciphers to be configured at all, so no cipher suite configurations will need to change, but code logging or asserting on the result of a TLS connection may observe differences. It is also recommended that consumers replace uses of SSL_CIPHER_get_name with SSL_CIPHER_standard_name which gives a much more consistent naming convention. (BoringSSL supports both standard and OpenSSL names in the cipher suite configuration, so there's no need to use OpenSSL names at all.) Change-Id: I40b1de0689dd7b32af88602acc063934f2877999 Reviewed-on: https://boringssl-review.googlesource.com/c/boringssl/+/36764 Commit-Queue: David Benjamin <davidben@google.com> Commit-Queue: Adam Langley <agl@google.com> Reviewed-by: Adam Langley <agl@google.com>

view details

David Benjamin

commit sha b9e2b8adcd05e46c4dbf93435661e077f9b583cb

Name cipher suite tests in runner by IETF names. The names of those tests don't actually matter to the shim because we don't pass them in anywhere. Note hasComponent() is also used by the signature algorithm tests, so that also needs to use underscores as a result. Change-Id: I393df4c6ffebcc66a55f256df5a641ad87e66441 Reviewed-on: https://boringssl-review.googlesource.com/c/boringssl/+/36765 Commit-Queue: David Benjamin <davidben@google.com> Reviewed-by: Adam Langley <agl@google.com>

view details

Adam Langley

commit sha 9f5c419b9fbb0d10943f14829470130701af2f4b

Move the PQ-experiment signal to SSL_CTX. In the case where I need it, it's easier for it to be on the context rather than on each connection. Change-Id: I5da2929ae6825d6b3151ccabb813cb8ad16416a1 Reviewed-on: https://boringssl-review.googlesource.com/c/boringssl/+/36746 Commit-Queue: Adam Langley <agl@google.com> Commit-Queue: David Benjamin <davidben@google.com> Reviewed-by: David Benjamin <davidben@google.com>

view details

David Benjamin

commit sha 4dfd5af70191b068aebe567b8e29ce108cee85ce

Only bypass the signature verification itself in fuzzer mode. Keep the setup_ctx logic, which, among other things, checks if the signature algorithm is valid. This cuts down on some unnecessary fuzzer-mode suppressions. Change-Id: I644f75630791c9741a1b372e5f83ae7ff9f01c2f Reviewed-on: https://boringssl-review.googlesource.com/c/boringssl/+/36766 Commit-Queue: David Benjamin <davidben@google.com> Reviewed-by: Adam Langley <agl@google.com>

view details

push time in 7 days

PR opened djc/topfew-rs

Split fields without regular expressions
+38 -41

0 comment

4 changed files

pr created time in 7 days

create branch djc/topfew-rs

branch : no-regex

created branch time in 7 days

pull request comment djc/askama

Disable default features of `askama_shared` in `askama_derive`

Ah, thanks! I've pushed out 0.10.1 with this fix.

JohnTitor

comment created time in 7 days

push event djc/askama

Yuki Okushi

commit sha 0a6284aa91e1eb8da81d7b317fb539653e519d0a

Disable default features of `askama_shared` in `askama_derive`

view details

push time in 7 days

PR merged djc/askama

Disable default features of `askama_shared` in `askama_derive`

3b8bf97cb6da128f020bb557057269661ac89fea disabled default features of askama_shared in askama but it didn't in askama_derive. We'd drop dependencies completely if we could disable there also.

+1 -1

0 comment

1 changed file

JohnTitor

pr closed time in 7 days

issue comment djc/quinn

Integer overflow running benchmark

@bryandmc I'm guessing this is fixed on master, since 9cf8652deb89f16e236bc3d4fe46a42793a4c036 probably prevents bad data from being inserted into the RangeSet. Please let us know regardless, we might want to publish a maintenance release!

alecmocatta

comment created time in 7 days
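The fix referenced above is described as preventing bad data from being inserted into the RangeSet in the first place. As a generic illustration of that guard style — not quinn's actual code — Rust's checked arithmetic makes rejecting overflowing input cheap:

```rust
// Generic sketch, not quinn's RangeSet implementation: validate that a
// range's end does not overflow u64 before storing it, instead of
// letting a wrapped value corrupt later bookkeeping.
fn insert_range(
    ranges: &mut Vec<(u64, u64)>,
    start: u64,
    len: u64,
) -> Result<(), &'static str> {
    let end = start.checked_add(len).ok_or("range end overflows u64")?;
    ranges.push((start, end));
    Ok(())
}

fn main() {
    let mut ranges = Vec::new();
    assert!(insert_range(&mut ranges, 0, 1500).is_ok());
    assert!(insert_range(&mut ranges, u64::MAX, 1).is_err());
    assert_eq!(ranges.len(), 1); // the overflowing range was never stored
}
```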

issue comment hyperium/hyper

HTTP/3 Support

I've been discussing it with @seanmonstar, as far as I can tell he's still figuring out how he wants to go about it.

uvd

comment created time in 7 days

push event djc/mendes

Dirkjan Ochtman

commit sha 33b84523206f7839ba6bb4fe791e929779fa784b

Fix import order

view details

push time in 8 days

push event djc/mendes

Dirkjan Ochtman

commit sha be918fba0b280b49e9f4e1d17b4f40588b71b041

Tweak feature flag dependencies

view details

Dirkjan Ochtman

commit sha 2d91e87406eb541b9eb7e3a65b4d68619d6a6f54

Store client IP address in Request extensions

view details

Dirkjan Ochtman

commit sha 9cc59624a00b06fc445860a70d691e5562917711

Bump mendes version to 0.0.30

view details

push time in 8 days

PR closed sfackler/rust-phf

Support String as FmtConst and tuple keys

Would these changes make sense? Do we need any extra tests for this?

Threw in a formatting commit because I have the IDE set to format by default -- let me know if you want me to scratch that.

I was also surprised that the phf_codegen::Map::entry() method takes the value by reference and then immediately calls to_owned() on it. Presumably it would be more transparent in this case to just take a String argument?

+194 -104

1 comment

13 changed files

djc

pr closed time in 8 days
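The `entry()` observation in the PR description above is a general API-design point: accepting `&T` and immediately calling `to_owned()` forces a copy even when the caller could have handed over ownership. A hedged sketch with hypothetical types — not phf_codegen's real API:

```rust
// Hypothetical illustration of the by-reference vs by-value trade-off;
// none of these names come from phf_codegen itself.
struct CodegenMap {
    entries: Vec<(String, String)>,
}

impl CodegenMap {
    // Borrowing variant: always allocates fresh Strings, even when the
    // caller already owned one it no longer needs.
    fn entry_ref(&mut self, key: &str, value: &str) {
        self.entries.push((key.to_owned(), value.to_owned()));
    }

    // Owning variant: moves the caller's allocations in directly, making
    // the cost explicit at the call site.
    fn entry_owned(&mut self, key: String, value: String) {
        self.entries.push((key, value));
    }
}

fn main() {
    let mut map = CodegenMap { entries: Vec::new() };
    map.entry_ref("a", "1");
    map.entry_owned("b".to_owned(), "2".to_owned());
    assert_eq!(map.entries.len(), 2);
}
```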

pull request comment sfackler/rust-phf

Support String as FmtConst and tuple keys

I don't think this is working correctly. Will update if I ever have a need.

djc

comment created time in 8 days

push event djc/template-benchmarks-rs

Dirkjan Ochtman

commit sha 2d30cb898697ba8544dc50a52811cd3bf04ed719

Update liquid benchmark

view details

Dirkjan Ochtman

commit sha d56db5d4a5286fe6e934f818285756ce638a27ce

Fix formatting

view details

Dirkjan Ochtman

commit sha d21aba4d97ae1e720a5a0cd4905516499895e2ab

Make sailfish benchmark interface consistent

view details

Dirkjan Ochtman

commit sha 69d977b21387d256e8dc676f75e8d64f0c64dfa5

Fixate sailfish commit to avoid issues

view details

Dirkjan Ochtman

commit sha 9faa7cb0297646823eaf9ce56e6c25969b536896

Update results (fixes #25)

view details

push time in 8 days

issue closed djc/template-benchmarks-rs

Update results

Some libraries have been updated to be faster, so could you please update the results for a more realistic benchmark?

closed time in 8 days

tizgafa

issue comment WebAssembly/spec

[web] Register application/wasm MIME type

I've submitted a PR for the next meeting's agenda. Unfortunately I won't be able to attend, but hopefully whoever does can lead the discussion. It honestly seems fairly straightforward and non-controversial to me.

lukewagner

comment created time in 8 days

PR opened WebAssembly/meetings

Add discussion of media type registration

Unfortunately I won't be able to attend -- I hope someone else can take the lead in the discussion, but it seems pretty straightforward.

+1 -0

0 comment

1 changed file

pr created time in 8 days

push event djc/meetings

Dirkjan Ochtman

commit sha cb7a6bdd5ef7f76b07d01fde970107a9aea0d4a0

Add discussion of media type registration

Unfortunately I won't be able to attend -- I hope someone else can take the lead in the discussion, but it seems pretty straightforward.

view details

push time in 8 days

fork djc/meetings

WebAssembly meetings (VC or in-person), agendas, and notes

fork in 8 days

more