
davidben/embedded-emacs 26

Replace your browser textareas with embedded Emacs instances. Please see README before using.

davidben/dpkg 10

Debian package manager

davidben/barnowl 3

A multi-protocol curses IM client.

davidben/barnowl-locker-bin 2

Arch-independent wrapper scripts for the barnowl locker

davidben/emscripten 2

Emscripten: An LLVM-to-JavaScript Compiler

aglasgall/barnowl 1

A multi-protocol curses IM client.

davidben/anygit 1

Any git object lookup

davidben/barnowl-zstatus 1

Z-Status module for BarnOwl

davidben/ctlfish 1

Webathena-based remctl client.

issue opened sublimelsp/LSP

RuntimeError: Set changed size during iteration

When I use LSP (0.9.5 installed via Package Control) with clangd, I get the following error working on Chromium:

Error handling server payload
Traceback (most recent call last):
  File "[...]/LSP.sublime-package/plugin/core/rpc.py", line 203, in receive_payload
    self.response_handler(payload)
  File "[...]/LSP.sublime-package/plugin/core/rpc.py", line 224, in response_handler
    handler(result)
  File "[...]/LSP.sublime-package/plugin/core/sessions.py", line 273, in _handle_initialize_result
    self._on_post_initialize(self, None)
  File "[...]/LSP.sublime-package/plugin/core/windows.py", line 580, in _handle_post_initialize
    self.documents.add_session(session)
  File "[...]/LSP.sublime-package/plugin/core/windows.py", line 117, in add_session
    self._notify_open_documents(session)
  File "[...]/LSP.sublime-package/plugin/core/windows.py", line 151, in _notify_open_documents
    for file_name in self._document_states:
RuntimeError: Set changed size during iteration

I tried enabling log_debug and log_server per https://lsp.readthedocs.io/en/latest/troubleshooting/, but doing so made the error go away. I'm guessing there is some race condition here.

The offending loop was added in #868. Prior to that PR, _notify_open_documents iterated over list(self._document_states), which made a copy of the container (a dict at the time, now a set).
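The failure is straightforward to reproduce outside the plugin. A minimal sketch (hypothetical names, not LSP's actual code) of both the broken and the pre-#868 pattern:

```python
document_states = {"a.cc", "b.cc"}

def notify_unsafe(states):
    # Mutating a set while iterating it raises RuntimeError in CPython.
    for name in states:
        states.add(name + ".bak")  # stands in for another thread opening a document

def notify_safe(states):
    # list(...) snapshots the container first, as the pre-#868 code did.
    for name in list(states):
        states.add(name + ".bak")

try:
    notify_unsafe(set(document_states))
except RuntimeError as err:
    print(err)  # Set changed size during iteration

notify_safe(document_states)  # completes without error
```

The snapshot makes the iteration immune to concurrent mutation, which would also explain why turning on logging (and changing the timing) hides the bug.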

created time in a day

issue opened sublimelsp/LSP

Troubleshooting guide link in issue template is wrong

The new issue template says:

Your issue may have been solved already - please search before creating new!

Troubleshooting guide: https://lsp.readthedocs.io/en/latest/#troubleshooting Chat: https://discord.gg/RMkk5MR

The troubleshooting link does not work. I assume it was supposed to be https://lsp.readthedocs.io/en/latest/troubleshooting/.

created time in a day

Pull request review comment MikeBishop/dns-alt-svc

Only allow each SvcParamKey to appear once

 responses to the address queries that were issued in parallel. A few initial SvcParamKeys are defined here.  These keys are useful for HTTPS, and most are applicable to other protocols as well. -## "transport" and "no-default-transport"+## "transport" and "no-default-transport" {#transport-key}  The "transport" and "no-default-transport" SvcParamKeys together indicate the set of transport protocols supported by this service endpoint.+A transport protocol is identified by a protocol-id with the following form:++    protocol-id = 1*(ALPHA_LC / DIGIT / "-" / "_" / ".")++The presentation and wire format of "transport" are the same: a comma (0x2c)+separated list of one or more `protocol-id`s:

Replace this with fixed-width length prefixes. A client parsing DNS (and TLS for that matter) already has tons of routines for such formats. Nothing the client parses in this spec may use separators.

Then revert the change to protocol-id. It should be an 8-bit-clean opaque byte string, like ALPN, so we do not get random implementation variations from people remembering or forgetting to check, getting the check subtly wrong, and weird bugs in the corners.
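For illustration, a minimal sketch (my own, not from the draft) of the length-prefixed style, as in TLS's ALPN ProtocolNameList: each entry is a one-byte length followed by that many opaque bytes, so any byte value, including a comma, is representable without escaping or extra validation rules.

```python
def parse_id_list(data):
    # Parse a sequence of <1-byte length><opaque bytes> entries.
    ids, i = [], 0
    while i < len(data):
        n = data[i]
        i += 1
        if i + n > len(data):
            raise ValueError("truncated protocol-id")
        ids.append(data[i:i + n])
        i += n
    return ids

wire = b"\x02h2\x08http/1.1"
print(parse_id_list(wire))  # [b'h2', b'http/1.1']
```

The only failure mode is truncation, which the parser must already detect for every other length-prefixed field it handles.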

bemasc

comment created time in 5 days

Pull request review comment MikeBishop/dns-alt-svc

Only allow each SvcParamKey to appear once

 each of which contains:   (but constrained by the RDATA and DNS message sizes). * an octet string of the length defined by the previous field. -If the parser reaches the end of the RDATA while parsing a SvcFieldValue,-the RR is invalid and MUST be discarded.+SvcParamKeys SHALL appear in increasing numeric order.++Clients MUST consider an RR malformed if+* the parser reaches the end of the RDATA while parsing a SvcFieldValue.+* SvcParamKeys are not in increasing numeric order.+* a single SvcParamKey appears twice.

This one is redundant with the one above it (which is why increasing numeric order is nice). To avoid folks missing this and adding an unnecessary extra check, what if we said:


Clients MUST consider an RR malformed if

  • the parser reaches [...]
  • SvcParamKeys are not in strictly increasing numeric order.
  • a SvcParamValue for [...]

Note the second condition implies there are no duplicate SvcParamKeys.
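A sketch of the point (hypothetical code, not draft text): one strictly-increasing comparison over adjacent SvcParamKeys rejects both out-of-order and duplicate keys, so no separate duplicate check is needed.

```python
def keys_well_formed(keys):
    # Strictly increasing: each key must be less than the next one.
    return all(a < b for a, b in zip(keys, keys[1:]))

print(keys_well_formed([1, 2, 3]))  # True
print(keys_well_formed([1, 2, 2]))  # False: duplicate caught by the same test
print(keys_well_formed([2, 1]))     # False: out of order
```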

bemasc

comment created time in 5 days

issue comment google/conscrypt

handshake_error on OpenJDK 11 when using Conscrypt

With Conscrypt on OpenJDK 11 the ECDH(E) ciphers aren't in the ClientHello but this is: "supported_groups (10)": { "versions": [ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144, ffdhe8192] }

This also is not Conscrypt. We don't support any of the FFDH groups at all, only ECDH groups. It looks like there's some other implementation altogether that you're having problems with.

However, with that line I can probably diagnose that mystery implementation's problems: supporting only FFDH groups and no ECDH groups won't talk to most TLS 1.3 clients. Unlike TLS 1.2, where the legacy RSA-decryption cipher suites were usable without it, TLS 1.3 makes a Diffie-Hellman exchange mandatory. As most TLS 1.3 clients only support ECDH groups (FFDH groups are not performant), that advertisement won't work very well.

cryptomeme

comment created time in 6 days

Pull request review comment sbingler/cookie-incrementalism

Adding schemeful same-site

 attribute (Section 4.1.2.5 of {{RFC6265bis}}) by altering the storage model defi This is conceptually similar to the requirements put into place for the `__Secure-` prefix (Section 4.1.3.1 of {{RFC6265bis}}). +## Schemeful Same-Site {#schemeful-samesite}++By using the scheme, as well as the registrable domain, SameSite can help to+protect https origins against a network attacker that is impersonating an http+origin with the same registrable domain. Further increasing its CSRF+protections. To do so we need to modify a number of things:++First change the definition of "site for cookies" from a registrable domain to+an origin. In the places where a we return an empty string for a non-existent+"site for cookies" we should instead return an origin set to a freshly+generated globally unique identifier.++Then replace the same-site calculation algorithm with the following+~~~+Two origins, A and B, are considered same-site if the following algorithm returns true:+1.  If A and B are both scheme/host/port triples then++    1.  If A's scheme does not equal B's scheme, return false.++    2.  Let hostA be A's host, and hostB be B's host.++        1.  If hostA equals hostB and hostA's registrable domain is null, return true.++        2.  If hostA's registrable domain equals hostB's registrable domain and is non-null, return true.++2.  If A and B are both the same globally unique identifier, return true.++3.  Return false.++Note: The port component of the origins is not considered.++A request is "same-site" if its target's URI's origin+is same-site with the request's client's "site for cookies", or if the+request has no client. 
The request is otherwise "cross-site".+~~~++Now that we have a new algorithm, we can update any comparision of two sites+from "have the same registrable domain" (or "is an exact match for") to say+"is same-site".++Since we're now looking at scheme and would like WebSockets to continue to be+able to use cookies let's add the following note directly after+"5.  Return `cross-site`"++~~~+Note: The request's URL when establishing a WebSockets connection {{RFC6455}}+has scheme "http" or "https", rather than "ws" or "wss". See {{FETCH}} which+maps schemes when constructing the request. This allows same-site cookies to be+sent with WebSockets.+~~~++Finally, since we're citing RFC6455 it should be added to the informative section.

Overall comment, this is a very conversational tone, which is not how IETF drafts usually read. But I guess the rest of the document is also like this. TBH, I don't know what to do about this since this is also a delta to another document, which is just generally weird.

I would suggest, rather than patching in the WebSockets note and citation, to just lift it into this document, and then you don't need the weird citation patching. When someone does rfc[whatever-6265bis-becomes]bis, they can manually insert the note back in.

sbingler

comment created time in 6 days

Pull request review comment sbingler/cookie-incrementalism

Adding schemeful same-site

 attribute (Section 4.1.2.5 of {{RFC6265bis}}) by altering the storage model defi This is conceptually similar to the requirements put into place for the `__Secure-` prefix (Section 4.1.3.1 of {{RFC6265bis}}). +## Schemeful Same-Site {#schemeful-samesite}++By using the scheme, as well as the registrable domain, SameSite can help to+protect https origins against a network attacker that is impersonating an http+origin with the same registrable domain. Further increasing its CSRF+protections. To do so we need to modify a number of things:++First change the definition of "site for cookies" from a registrable domain to+an origin. In the places where a we return an empty string for a non-existent+"site for cookies" we should instead return an origin set to a freshly+generated globally unique identifier.++Then replace the same-site calculation algorithm with the following

Nit: colon at the end?

sbingler

comment created time in 6 days

Pull request review comment sbingler/cookie-incrementalism

Adding schemeful same-site

 attribute (Section 4.1.2.5 of {{RFC6265bis}}) by altering the storage model defi This is conceptually similar to the requirements put into place for the `__Secure-` prefix (Section 4.1.3.1 of {{RFC6265bis}}). +## Schemeful Same-Site {#schemeful-samesite}++By using the scheme, as well as the registrable domain, SameSite can help to+protect https origins against a network attacker that is impersonating an http+origin with the same registrable domain. Further increasing its CSRF+protections. To do so we need to modify a number of things:++First change the definition of "site for cookies" from a registrable domain to+an origin. In the places where a we return an empty string for a non-existent+"site for cookies" we should instead return an origin set to a freshly+generated globally unique identifier.++Then replace the same-site calculation algorithm with the following+~~~+Two origins, A and B, are considered same-site if the following algorithm returns true:+1.  If A and B are both scheme/host/port triples then++    1.  If A's scheme does not equal B's scheme, return false.++    2.  Let hostA be A's host, and hostB be B's host.++        1.  If hostA equals hostB and hostA's registrable domain is null, return true.++        2.  If hostA's registrable domain equals hostB's registrable domain and is non-null, return true.++2.  If A and B are both the same globally unique identifier, return true.++3.  Return false.++Note: The port component of the origins is not considered.++A request is "same-site" if its target's URI's origin+is same-site with the request's client's "site for cookies", or if the+request has no client. The request is otherwise "cross-site".+~~~++Now that we have a new algorithm, we can update any comparision of two sites+from "have the same registrable domain" (or "is an exact match for") to say+"is same-site".++Since we're now looking at scheme and would like WebSockets to continue to be

"looking at the scheme"

"able to use cookies [COMMA] let's add the"

Period at the end.

(I miss Google Docs' inline suggestions. :-( )

sbingler

comment created time in 6 days

Pull request review comment sbingler/cookie-incrementalism

Adding schemeful same-site

 attribute (Section 4.1.2.5 of {{RFC6265bis}}) by altering the storage model defi This is conceptually similar to the requirements put into place for the `__Secure-` prefix (Section 4.1.3.1 of {{RFC6265bis}}). +## Schemeful Same-Site {#schemeful-samesite}++By using the scheme, as well as the registrable domain, SameSite can help to+protect https origins against a network attacker that is impersonating an http+origin with the same registrable domain. Further increasing its CSRF+protections. To do so we need to modify a number of things:++First change the definition of "site for cookies" from a registrable domain to+an origin. In the places where a we return an empty string for a non-existent+"site for cookies" we should instead return an origin set to a freshly+generated globally unique identifier.++Then replace the same-site calculation algorithm with the following+~~~

Hrm, does this look right when you build this? ~~~ makes it just draw ASCII art and not interpret the contents as markdown at all.

sbingler

comment created time in 6 days

Pull request review comment sbingler/cookie-incrementalism

Adding schemeful same-site

 attribute (Section 4.1.2.5 of {{RFC6265bis}}) by altering the storage model defi This is conceptually similar to the requirements put into place for the `__Secure-` prefix (Section 4.1.3.1 of {{RFC6265bis}}). +## Schemeful Same-Site {#schemeful-samesite}

Optional, since it seems I'm the only one who hates "schemeful" :-)

"Redefining Same-Site"?

sbingler

comment created time in 6 days

Pull request review comment sbingler/cookie-incrementalism

Adding schemeful same-site

 attribute (Section 4.1.2.5 of {{RFC6265bis}}) by altering the storage model defi This is conceptually similar to the requirements put into place for the `__Secure-` prefix (Section 4.1.3.1 of {{RFC6265bis}}). +## Schemeful Same-Site {#schemeful-samesite}++By using the scheme, as well as the registrable domain, SameSite can help to+protect https origins against a network attacker that is impersonating an http+origin with the same registrable domain. Further increasing its CSRF+protections. To do so we need to modify a number of things:

Looks like you didn't finish this sentence? (Further [...])

sbingler

comment created time in 6 days

Pull request review comment sbingler/cookie-incrementalism

Adding schemeful same-site

 the near-term. User agents should:      This is spelled out in more detail in {{require-secure}}. +3. Ensure the scheme, as well as the registrable domain, of the+   "site for cookies" and request match when making a same-site decision.+   That is, "http://site.example" and "https://site.example" should be+   considered cross-site

Period at the end.

sbingler

comment created time in 6 days

push event krgovind/first-party-sets

David Benjamin

commit sha 0b5df5e08f4c46fd90d33e31713852140c74dd8f

More JSON formatting

view details

push time in 6 days

push event krgovind/first-party-sets

David Benjamin

commit sha 4459e4843913d73c6511cda078ed852974f110c6

Minor formatting fixes

Tabs vs spaces, and JSON doesn't allow trailing commas.

view details

push time in 6 days

issue comment netty/netty

TLSv1.3 can fail with HTTP/2 and Session Tickets Enabled

(We quite extensively test things assuming a single-byte write buffer, so it would be quite surprising if data was getting dropped in BoringSSL.)

carl-mastrangelo

comment created time in 6 days

issue comment netty/netty

TLSv1.3 can fail with HTTP/2 and Session Tickets Enabled

Netty does a number of odd things with assumptions about write overheads in order to implement the SSLEngine API, which is probably where the dropping of data comes from.

(Really the API Netty wants isn't SSL_write or BIOs in the first place but alas we've yet to have the time to build the BIO-less API.)

carl-mastrangelo

comment created time in 6 days

issue comment openssl/openssl

P and Q used after key generation

P and Q are typically not discarded, since they're used as part of the CRT optimization. OpenSSL should support RSA keys without CRT parameters (alas, RSA dates to an era of cryptography where we were extremely bad at standardizing things, so there are many ad hoc variations on RSA private keys), but the standard RSAPrivateKey serialization requires all the CRT parameters, so it's possible that whatever you're doing involves serializing the key and it's getting mangled in the process. https://tools.ietf.org/html/rfc8017#appendix-A.1.2
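To make the "used after key generation" part concrete, here is a toy sketch (mine, not OpenSSL code, with numbers far too small for a real key) of why p and q stick around: the CRT values dP, dQ, and qInv from RFC 8017's RSAPrivateKey are all derived from them, and they make private-key operations considerably faster.

```python
# Toy RSA parameters only, not a real key.
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))

# The CRT parameters carried in RSAPrivateKey (RFC 8017, A.1.2):
dP, dQ, qInv = d % (p - 1), d % (q - 1), pow(q, -1, p)

# CRT decryption (RFC 8017, section 5.1.2) recovers m from c using p and q:
m = 42
c = pow(m, e, n)
m1, m2 = pow(c, dP, p), pow(c, dQ, q)
h = (qInv * (m1 - m2)) % p
assert m2 + q * h == m
```

A key serialized without these values forces either recomputing them or falling back to the slow m = c^d mod n path, which is why most formats insist on carrying them.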

GuillermoEscobero

comment created time in 6 days

issue comment netty/netty

TLSv1.3 can fail with HTTP/2 and Session Tickets Enabled

(This probably changed for you because we used to send NewSessionTicket during the server handshake, but now we defer it to the next SSL_write. Depending on the sizes of tickets, sizes of transport buffers, and exact I/O patterns, sending it eagerly could result in deadlocks or spurious TCP resets, and we wanted to avoid surprising behaviors or random cliffs like that.)

carl-mastrangelo

comment created time in 7 days

issue comment netty/netty

TLSv1.3 can fail with HTTP/2 and Session Tickets Enabled

Ah yeah, we should probably be incorporating any of the pending handshake bytes into that. I didn't get to looking at this today, but I'll poke at it tomorrow. (Mostly I need to check whether anything else uses SSL_max_seal_overhead and would want the current behavior.)

Note this will mean that SSL_max_seal_overhead will change over the course of the connection, depending on what random incidental messages are queued up (NewSessionTicket at the start and then the occasional KeyUpdate, if anyone ever sent them). Would that be fine for you?

carl-mastrangelo

comment created time in 7 days

Pull request review comment MikeBishop/dns-alt-svc

Replace "alpn" with "transport"/"no-default-transport"

 responses to the address queries that were issued in parallel. A few initial SvcParamKeys are defined here.  These keys are useful for HTTPS, and most are applicable to other protocols as well. -## "alpn"--The "alpn" SvcParamKey defines the Application Layer Protocol-(ALPN, as defined in {{!RFC7301}) supported by a TLS-based alternative-service.  Its value SHOULD be an entry in the IANA registry "TLS-Application-Layer Protocol Negotiation (ALPN) Protocol IDs".--The presentation format and wire format of SvcParamValue-is its registered "Identification Sequence".  This key SHALL NOT-appear more than once in a SvcFieldValue.--Clients MUST include this value in the ProtocolNameList in their-ClientHello's `application_layer_protocol_negotiation` extension.-Clients SHOULD also include any other values that they support and-could negotiate on that connection with equivalent or better security-properties.  For example, when using a SvcFieldValue with an "alpn" of-"h2", the client MAY also include "http/1.1" in the ProtocolNameList.--Clients MUST ignore SVCB RRs where the "alpn" SvcParamValue-is unknown or not supported for use with the current scheme.--The value of the "alpn" SvcParamKey can have effects beyond the content-of the TLS handshake and stream.  For example, an "alpn" value of "h3"-({{HTTP3}} Section 11.1) indicates the client must use QUIC, not TLS.+## "transport" and "no-default-transport"++The "transport" and "no-default-transport" SvcParamKeys together+indicate the set of transport protocols supported by this service endpoint.++Each scheme that is mapped to SVCB defines a set or registry of allowed+transport values, and a "default set" of supported values, which SHOULD NOT+be empty.  To determine the set of transport protocols supported by an+endpoint (the "transport set"), the client collects the set of+"transport" values, and then adds the default set unless the+"no-default-transport" SvcParamKey is present.  
The presence of a value in+the transport set indicates that this service endpoint, described by+SvcDomainName and the other parameters (e.g. "port") offers service with+that transport.

It sounds like you're proposing to drop the requirement that zone authors need to take an explicit action to disable TLS support, as in your flattened example in this thread.

I mean, I never requested that property to begin with. :-) I think something got lost in translation here.

The deployability requirement is that it is impossible to disable the default transport. Possible with explicit action doesn't solve anything. It should either be syntactically impossible (preferred) or, if that can't be avoided, it needs to be easy to describe and detect (that's why I've been trying to avoid giving it a name that differs by scheme), with the expectation that authoritative servers reject it at the config level and clients reject it if they see it.

From there, it would be nice if the wrong thing was easy to avoid, but if authoritatives perform the mandatory checks, that helps smooth things over a bit.

Let's sync after the long weekend. GitHub PRs aren't a great medium for design discussions. I think we need something more synchronous and high-bandwidth. This DNS record has gotten far too complicated. We need to cut down the flexibility by an order of magnitude before this is viable.

bemasc

comment created time in 11 days

push event krgovind/first-party-sets

Martin Thomson

commit sha 97e0a025966147fcd4482dd0c105062cc3bb269a

Some tweaks to the text in the intro (#10)

view details

push time in 12 days

Pull request review comment sbingler/http-extensions

Update draft-ietf-httpbis-rfc6265bis.md

 following conditions holds:  ## "Same-site" and "cross-site" Requests  {#same-site-requests} -A request is "same-site" if its target's URI's origin's registrable domain-is an exact match for the request's client's "site for cookies", or if the+Two origins, A and B, are considered same-site if the following algorithm returns true:+1.  If A and B are both scheme/host/port triples then+    1.  If A’s scheme does not equal B’s scheme, return false.+    2.  Let hostA be A’s host, and hostB be B’s host.+        1.  If hostA equals hostB and hostA’s registrable domain is non-null return true.+        2. If hostA’s registrable domain equals hostB’s registrable domain and is non-null return true.+2.  If A and B are both the same return true;+3.  Return false.++Note: +1. The port component of the origins is not considered.+2. For a WebSocket request WSURI, let RequestURI be a copy of WSURI, with its

Relating to "requests" being a specific term, "This allows WebSockets requests to treated as same-site." probably wasn't that great. Perhaps:

This allows same-site cookies to be sent when creating a WebSocket from an http or https document.

Or perhaps:

This allows same-site cookies to be sent with WebSockets.
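The mapping being referenced can be sketched as follows (my own illustration of what Fetch does when constructing the request, not spec text): ws becomes http and wss becomes https before any same-site comparison.

```python
from urllib.parse import urlsplit, urlunsplit

def map_websocket_scheme(url):
    # ws -> http, wss -> https; any other scheme is left unchanged.
    parts = urlsplit(url)
    scheme = {"ws": "http", "wss": "https"}.get(parts.scheme, parts.scheme)
    return urlunsplit(parts._replace(scheme=scheme))

print(map_websocket_scheme("wss://site.example/chat"))  # https://site.example/chat
```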

sbingler

comment created time in 14 days

Pull request review comment sbingler/http-extensions

Update draft-ietf-httpbis-rfc6265bis.md

 For a given request ("request"), the following algorithm returns `same-site` or 2.  Let `site` be `request`'s client's "site for cookies" (as defined in the     following sections). -3.  Let `target` be the registrable domain of `request`'s current url.+3.  Let `target` be the origin of `request`'s current url. -4.  If `site` is an exact match for `target`, return `same-site`.+4.  If `site` is same-site with `target`, return `same-site`.  5.  Return `cross-site`. -The request's client's "site for cookies" is calculated depending upon its-client's type, as described in the following subsections:+The request's client's "site for cookies" is either a triple origin or a+globally unique identifier. It is calculated depending upon its client's type,

Optional: I would probably just say that it's an origin, here and throughout. It seems weird to expand out the two cases when nothing in the IETF spec actually says these are the two kinds of origins. Also if the top-level URL is an iframe sandbox, it might be an opaque origin that you didn't create.

WDYT?

sbingler

comment created time in 14 days

Pull request review comment sbingler/http-extensions

Update draft-ietf-httpbis-rfc6265bis.md

 Additionally, client-side techniques such as those described in {{app-isolation}} may also prove effective against CSRF, and are certainly worth exploring in combination with "SameSite" cookies. +#### Taking Scheme into Consideration+By using the scheme, as well as the registrable domain, SameSite can help to+protect a secure site against a network attacker which is impersonating an+insecure site with the same registrable domain.

I think it's worth spelling out http vs https specifically. Perhaps: [...], "SameSite" cookies can help to protect https origins against a network attacker that is impersonating a corresponding http origin.

(I believe this should be "that" rather than "which"?)

sbingler

comment created time in 14 days

Pull request review comment sbingler/http-extensions

Update draft-ietf-httpbis-rfc6265bis.md

 document's "site for cookies".  Shared workers may be bound to multiple documents at once. As it is quite possible for those documents to have distinct "site for cookie" values, the-worker's "site for cookies" will be the empty string in cases where the values-diverge, and the shared value in cases where the values agree.+worker's "site for cookies" will be a globally unique identifier in cases+where the values diverge, and the shared value in cases where the values agree.  Given a WorkerGlobalScope (`worker`), the following algorithm returns its "site-for cookies" (either a registrable domain, or the empty string):+for cookies" (either a triple origin, or a globally unique identifier): -1.  Let `site` be `worker`'s origin's host's registrable domain.+1.  Let `site` be `worker`'s origin.  2.  For each `document` in `worker`'s Documents:      1.  Let `document-site` be `document`'s "site for cookies" (as defined         in {{document-requests}}). -    2.  If `document-site` is not an exact match for `site`, return the empty-        string.+    2.  If `document-site` is not same-site with `site`, return a globally+        unique identifier.

Ditto

sbingler

comment created time in 14 days

Pull request review comment sbingler/http-extensions

Update draft-ietf-httpbis-rfc6265bis.md

 cookies" (either a registrable domain, or the empty string):     1.  Let `origin` be the origin of `item`'s URI if `item`'s sandboxed origin         browsing context flag is set, and `item`'s origin otherwise. -    2.  If `origin`'s host's registrable domain is not an exact match for-        `top-origin`'s host's registrable domain, return the empty string.+    2.  If `origin` is not same-site with `top-origin`, return a globally

RFC6454 says "generate a fresh globally unique identifier and return that value." I'd suggest using that so it's clear that this identifier should be different from others.

sbingler

comment created time in 14 days

Pull request review comment sbingler/http-extensions

Update draft-ietf-httpbis-rfc6265bis.md

 document's "site for cookies".  Shared workers may be bound to multiple documents at once. As it is quite possible for those documents to have distinct "site for cookie" values, the-worker's "site for cookies" will be the empty string in cases where the values-diverge, and the shared value in cases where the values agree.+worker's "site for cookies" will be a globally unique identifier in cases+where the values diverge, and the shared value in cases where the values agree.

(Hopefully we can double-key these so this oddity no longer matters. As things stand, this feature is... kinda weird.)

sbingler

comment created time in 14 days

Pull request review comment sbingler/http-extensions

Update draft-ietf-httpbis-rfc6265bis.md

 For a given request ("request"), the following algorithm returns `same-site` or 2.  Let `site` be `request`'s client's "site for cookies" (as defined in the     following sections). -3.  Let `target` be the registrable domain of `request`'s current url.+3.  Let `target` be the origin of `request`'s current url. -4.  If `site` is an exact match for `target`, return `same-site`.+4.  If `site` is same-site with `target`, return `same-site`.  5.  Return `cross-site`. -The request's client's "site for cookies" is calculated depending upon its-client's type, as described in the following subsections:+The request's client's "site for cookies" is either a triple origin or a+globally unique identifier. It is calculated depending upon its client's type,+as described in the following subsections:

There is some overlap with https://github.com/whatwg/html/pull/4966. No need to do anything now, but when we publish this PR, I think it's worth pointing this out and whether we should align things. (Maybe a request client has a top-level origin plus a "same-site with ancestors" boolean to handle the recursive ancestry business.)

Though it's unclear to me whether this is defining it for a document or a request's client or what.

sbingler

comment created time in 14 days

Pull request review comment sbingler/http-extensions

Update draft-ietf-httpbis-rfc6265bis.md

 For a given request ("request"), the following algorithm returns `same-site` or 2.  Let `site` be `request`'s client's "site for cookies" (as defined in the     following sections). -3.  Let `target` be the registrable domain of `request`'s current url.+3.  Let `target` be the origin of `request`'s current url. -4.  If `site` is an exact match for `target`, return `same-site`.+4.  If `site` is same-site with `target`, return `same-site`.  5.  Return `cross-site`. -The request's client's "site for cookies" is calculated depending upon its-client's type, as described in the following subsections:+The request's client's "site for cookies" is either a triple origin or a+globally unique identifier. It is calculated depending upon its client's type,+as described in the following subsections:  ### Document-based requests {#document-requests}  The URI displayed in a user agent's address bar is the only security context directly exposed to users, and therefore the only signal users can reasonably-rely upon to determine whether or not they trust a particular website. The-registrable domain of that URI's origin represents the context in which a user-most likely believes themselves to be interacting. We'll label this domain the-"top-level site".+rely upon to determine whether or not they trust a particular website. Thus we+base the notion of "site for cookies" on the top-level URL. We’ll label this+URI’s origin the "top-level origin".  For a document displayed in a top-level browsing context, we can stop here: the-document's "site for cookies" is the top-level site.+document's "site for cookies" is the top-level origin.  For documents which are displayed in nested browsing contexts, we need to audit the origins of each of a document's ancestor browsing contexts' active documents in order to account for the "multiple-nested scenarios" described in Section 4-of {{RFC7034}}. 
A document's "site for cookies" is the top-level site if and-only if the document and each of its ancestor documents' origins have the same-registrable domain as the top-level site. Otherwise its "site for cookies" is-the empty string.+of {{RFC7034}}. A document's "site for cookies" is the top-level origin if and+only if the document and each of its ancestor documents' origins are same-site+with the top-level origin. Otherwise its "site for cookies" is a a globally

s/a a/a/

sbingler

comment created time in 14 days

Pull request review comment sbingler/http-extensions

Update draft-ietf-httpbis-rfc6265bis.md

 following conditions holds:  ## "Same-site" and "cross-site" Requests  {#same-site-requests} -A request is "same-site" if its target's URI's origin's registrable domain-is an exact match for the request's client's "site for cookies", or if the+Two origins, A and B, are considered same-site if the following algorithm returns true:+1.  If A and B are both scheme/host/port triples then

Newlines between numbered list elements seem to appease markdown.

sbingler

comment created time in 14 days

Pull request review comment sbingler/http-extensions

Update draft-ietf-httpbis-rfc6265bis.md

following conditions holds:

## "Same-site" and "cross-site" Requests  {#same-site-requests}

-A request is "same-site" if its target's URI's origin's registrable domain
-is an exact match for the request's client's "site for cookies", or if the
+Two origins, A and B, are considered same-site if the following algorithm returns true:
+1.  If A and B are both scheme/host/port triples then
+    1.  If A’s scheme does not equal B’s scheme, return false.
+    2.  Let hostA be A’s host, and hostB be B’s host.
+        1.  If hostA equals hostB and hostA’s registrable domain is non-null return true.
+        2. If hostA’s registrable domain equals hostB’s registrable domain and is non-null return true.
+2.  If A and B are both the same return true;
+3.  Return false.
+
+Note:
+1. The port component of the origins is not considered.
+2. For a WebSocket request WSURI, let RequestURI be a copy of WSURI, with its
+scheme set to "http" if WSURI's scheme is "ws", and to "https" otherwise.
+RequestURI should then be used as the target URI in the same-site calculation.
+[FETCH]

Apparently [FETCH] works, but let's do {{FETCH}} to match.
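For concreteness, the two-origin same-site algorithm quoted in the hunk above can be sketched roughly as follows. This is a non-normative Python sketch; `registrable_domain` is a toy stand-in for a real Public Suffix List lookup, and the opaque-origin representation is an assumption.

```python
from typing import Optional

def registrable_domain(host: str) -> Optional[str]:
    """Toy stand-in for a Public Suffix List lookup (a real implementation
    must consult the PSL). Returns None for IP literals and single labels,
    else the last two labels."""
    if host.replace(".", "").isdigit():
        return None  # IPv4 literal
    labels = host.split(".")
    if len(labels) < 2:
        return None
    return ".".join(labels[-2:])

def same_site(a, b) -> bool:
    """a and b are either (scheme, host, port) triples or opaque globally
    unique identifiers (modeled here as any non-tuple value)."""
    if isinstance(a, tuple) and isinstance(b, tuple):
        if a[0] != b[0]:
            return False  # schemes must match
        host_a, host_b = a[1], b[1]
        # Note: the port component is deliberately not considered.
        if host_a == host_b and registrable_domain(host_a) is not None:
            return True
        rd_a, rd_b = registrable_domain(host_a), registrable_domain(host_b)
        if rd_a is not None and rd_a == rd_b:
            return True
    if a == b:  # identical origins, including opaque identifiers
        return True
    return False
```

For example, `same_site(("https", "a.example.com", 443), ("https", "b.example.com", 8443))` is true under the toy registrable-domain rule, while differing schemes always make the pair cross-site.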

sbingler

comment created time in 14 days

Pull request review comment sbingler/http-extensions

Update draft-ietf-httpbis-rfc6265bis.md

following conditions holds:

## "Same-site" and "cross-site" Requests  {#same-site-requests}

-A request is "same-site" if its target's URI's origin's registrable domain
-is an exact match for the request's client's "site for cookies", or if the
+Two origins, A and B, are considered same-site if the following algorithm returns true:
+1.  If A and B are both scheme/host/port triples then
+    1.  If A’s scheme does not equal B’s scheme, return false.
+    2.  Let hostA be A’s host, and hostB be B’s host.
+        1.  If hostA equals hostB and hostA’s registrable domain is non-null return true.

Nit: comma before return in "If ..., return true" here and below.

sbingler

comment created time in 14 days

Pull request review comment sbingler/http-extensions

Update draft-ietf-httpbis-rfc6265bis.md

following conditions holds:

## "Same-site" and "cross-site" Requests  {#same-site-requests}

-A request is "same-site" if its target's URI's origin's registrable domain
-is an exact match for the request's client's "site for cookies", or if the
+Two origins, A and B, are considered same-site if the following algorithm returns true:
+1.  If A and B are both scheme/host/port triples then
+    1.  If A’s scheme does not equal B’s scheme, return false.
+    2.  Let hostA be A’s host, and hostB be B’s host.
+        1.  If hostA equals hostB and hostA’s registrable domain is non-null return true.
+        2. If hostA’s registrable domain equals hostB’s registrable domain and is non-null return true.
+2.  If A and B are both the same return true;
+3.  Return false.
+
+Note:
+1. The port component of the origins is not considered.
+2. For a WebSocket request WSURI, let RequestURI be a copy of WSURI, with its

Let's have this just say something like "Note: The port [...] is not considered.". Then move the WebSockets note to modify the algorithm below.

There, rather than restarting the text in Fetch out of context, perhaps something like:

  1. At the top of the file, in the informative section, add a line RFC6455: to cite WebSockets under informative references (this is just a note, so I don't think it needs to be normative).

  2. Add text to the tune of:

Note: The request's URL when establishing a WebSockets connection {{RFC6455}} has scheme "http" or "https", rather than "ws" or "wss". {{FETCH}} maps schemes when constructing the request. This allows WebSockets requests to be treated as same-site.

sbingler

comment created time in 14 days

Pull request review comment sbingler/http-extensions

Update draft-ietf-httpbis-rfc6265bis.md

following conditions holds:

## "Same-site" and "cross-site" Requests  {#same-site-requests}

-A request is "same-site" if its target's URI's origin's registrable domain
-is an exact match for the request's client's "site for cookies", or if the
+Two origins, A and B, are considered same-site if the following algorithm returns true:
+1.  If A and B are both scheme/host/port triples then
+    1.  If A’s scheme does not equal B’s scheme, return false.
+    2.  Let hostA be A’s host, and hostB be B’s host.
+        1.  If hostA equals hostB and hostA’s registrable domain is non-null return true.
+        2. If hostA’s registrable domain equals hostB’s registrable domain and is non-null return true.
+2.  If A and B are both the same return true;

Maybe replace "both the same" with "are the same origin"? Doesn't hugely matter but "same origin" is a phrase that's often associated with origins.

sbingler

comment created time in 14 days

Pull request review comment sbingler/http-extensions

Update draft-ietf-httpbis-rfc6265bis.md

following conditions holds:

## "Same-site" and "cross-site" Requests  {#same-site-requests}

-A request is "same-site" if its target's URI's origin's registrable domain
-is an exact match for the request's client's "site for cookies", or if the
+Two origins, A and B, are considered same-site if the following algorithm returns true:
+1.  If A and B are both scheme/host/port triples then
+    1.  If A’s scheme does not equal B’s scheme, return false.

Super nitpicky nitpick: it looks like the rest of this file uses plain ' instead of ’. No idea if the tooling cares. (I assume Docs autoconverted it.)

sbingler

comment created time in 14 days

Pull request review comment MikeBishop/dns-alt-svc

Replace "alpn" with "transport"/"no-default-transport"

responses to the address queries that were issued in parallel. A few initial SvcParamKeys are defined here.  These keys are useful for HTTPS, and most are applicable to other protocols as well.

-## "alpn"
-
-The "alpn" SvcParamKey defines the Application Layer Protocol
-(ALPN, as defined in {{!RFC7301}) supported by a TLS-based alternative
-service.  Its value SHOULD be an entry in the IANA registry "TLS
-Application-Layer Protocol Negotiation (ALPN) Protocol IDs".
-
-The presentation format and wire format of SvcParamValue
-is its registered "Identification Sequence".  This key SHALL NOT
-appear more than once in a SvcFieldValue.
-
-Clients MUST include this value in the ProtocolNameList in their
-ClientHello's `application_layer_protocol_negotiation` extension.
-Clients SHOULD also include any other values that they support and
-could negotiate on that connection with equivalent or better security
-properties.  For example, when using a SvcFieldValue with an "alpn" of
-"h2", the client MAY also include "http/1.1" in the ProtocolNameList.
-
-Clients MUST ignore SVCB RRs where the "alpn" SvcParamValue
-is unknown or not supported for use with the current scheme.
-
-The value of the "alpn" SvcParamKey can have effects beyond the content
-of the TLS handshake and stream.  For example, an "alpn" value of "h3"
-({{HTTP3}} Section 11.1) indicates the client must use QUIC, not TLS.
+## "transport" and "no-default-transport"
+
+The "transport" and "no-default-transport" SvcParamKeys together
+indicate the set of transport protocols supported by this service endpoint.
+
+Each scheme that is mapped to SVCB defines a set or registry of allowed
+transport values, and a "default set" of supported values, which SHOULD NOT
+be empty.  To determine the set of transport protocols supported by an
+endpoint (the "transport set"), the client collects the set of
+"transport" values, and then adds the default set unless the
+"no-default-transport" SvcParamKey is present.  The presence of a value in
+the transport set indicates that this service endpoint, described by
+SvcDomainName and the other parameters (e.g. "port") offers service with
+that transport.

The problem with the half-flattened scheme is that the design is extremely inconsistent in its principles. It simultaneously treats no-default-transport as a rare misfeature, only to be used for questionable deployment patterns, and as the pattern for new protocols. It likewise holds that duplicating the ESNIConfig is a problem to be avoided, yet recommends exactly that as a pattern for new protocols. All this mixup then pressures new protocols to share UDP ports to avoid these design flaws, while also insisting that this pressure doesn't exist.

> BTW, even if NOTQUIC can't share a UDP port with QUIC, users can still avoid no-default-transport by running TLS on the TCP port of the same number. This seems like a reasonable, perhaps likely configuration, for the same reasons that it appears to be popular with QUIC.

No, this is not a reasonable or likely configuration. Moreover, it's dangerous because it further incentivizes existing TCP HTTP servers to not check port numbers against the HTTP Host header. (This check is a security requirement for sites running different origins on different ports as it's the only thing preventing the network from mixing ports up.)

It's popular with QUIC because, barring any reason to pick another port, 443 is the natural one in order to match with the origin. HTTPSSVC having broken all these standard assumptions is not going to suddenly make people want to deploy TCP over the corresponding port for NOTQUIC. That doesn't really make any sense at all.
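For reference, the transport-set computation described in the quoted hunk amounts to the following. This is a non-normative sketch with hypothetical helper names; `params` models one endpoint's SvcParams, with "transport" mapping to a list since the key may appear multiple times.

```python
def transport_set(params, default_set):
    """Compute the set of transports offered by one SVCB service endpoint.

    params: dict of SvcParamKey -> value; "transport" maps to a list of
    values. default_set: the scheme-defined default set of transports.
    Hypothetical helper, not draft text."""
    transports = set(params.get("transport", []))
    # The default set applies unless "no-default-transport" is present.
    if "no-default-transport" not in params:
        transports |= set(default_set)
    return transports
```

For an HTTPS endpoint whose (assumed) default set is `{"tcp-tls"}`, advertising `transport=quic` alone yields `{"quic", "tcp-tls"}`, while adding `no-default-transport` narrows it to `{"quic"}`.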

bemasc

comment created time in 15 days

pull request comment openssl/openssl

WIP: master QUIC support

FYI, API-wise we're planning on splitting the traffic secrets callback into separate read and write callbacks, so that we can defer the app data read secret on the server to after client finished.

The current design was to ensure that QUIC would never worry about reading packets it cannot write ACKs for, but the converse scenario (writing packets you, for now, cannot read ACKs for) is important to get 0-RTT and half-RTT data right, so separate callbacks with invariants in prose it is.

tmshort

comment created time in 18 days

pull request comment google/oss-fuzz

Removes cert corpus from boringssl in mbedtls project

Oops! That was unintentional. https://boringssl-review.googlesource.com/c/boringssl/+/39884 should put them back.

catenacyber

comment created time in 20 days

Pull request review comment MikeBishop/dns-alt-svc

Replace "alpn" with "transport"/"no-default-transport"

responses to the address queries that were issued in parallel. A few initial SvcParamKeys are defined here.  These keys are useful for HTTPS, and most are applicable to other protocols as well.

-## "alpn"
-
-The "alpn" SvcParamKey defines the Application Layer Protocol
-(ALPN, as defined in {{!RFC7301}) supported by a TLS-based alternative
-service.  Its value SHOULD be an entry in the IANA registry "TLS
-Application-Layer Protocol Negotiation (ALPN) Protocol IDs".
-
-The presentation format and wire format of SvcParamValue
-is its registered "Identification Sequence".  This key SHALL NOT
-appear more than once in a SvcFieldValue.
-
-Clients MUST include this value in the ProtocolNameList in their
-ClientHello's `application_layer_protocol_negotiation` extension.
-Clients SHOULD also include any other values that they support and
-could negotiate on that connection with equivalent or better security
-properties.  For example, when using a SvcFieldValue with an "alpn" of
-"h2", the client MAY also include "http/1.1" in the ProtocolNameList.
-
-Clients MUST ignore SVCB RRs where the "alpn" SvcParamValue
-is unknown or not supported for use with the current scheme.
-
-The value of the "alpn" SvcParamKey can have effects beyond the content
-of the TLS handshake and stream.  For example, an "alpn" value of "h3"
-({{HTTP3}} Section 11.1) indicates the client must use QUIC, not TLS.
+## "transport" and "no-default-transport"
+
+The "transport" and "no-default-transport" SvcParamKeys together
+indicate the set of transport protocols supported by this service endpoint.
+
+Each scheme that is mapped to SVCB defines a set or registry of allowed
+transport values, and a "default set" of supported values, which SHOULD NOT
+be empty.  To determine the set of transport protocols supported by an
+endpoint (the "transport set"), the client collects the set of
+"transport" values, and then adds the default set unless the
+"no-default-transport" SvcParamKey is present.  The presence of a value in
+the transport set indicates that this service endpoint, described by
+SvcDomainName and the other parameters (e.g. "port") offers service with
+that transport.

Sorry, I should have elaborated on "more-or-less". Encoding separate RRs for port purposes in this proposal is fussy (you need a no-default-transport) and costs a duplicate ESNI config. (Which we presumably care about or we'd go with the flattened design.) That means there will be pressure at the protocol design phase to share invariants, which means we effectively are saying this.

That's not to say this is fatal. Maybe each scheme only needs two sets of protocol invariants? QUIC folks have no doubt thought about this more than me, so I'd like to know what they think. My main position is that, if we're strongly incentivizing this, we need to recognize that and decide we're okay with it, and not hide behind the fact that a verbose alternative exists.

bemasc

comment created time in 20 days

Pull request review comment MikeBishop/dns-alt-svc

Replace "alpn" with "transport"/"no-default-transport"

responses to the address queries that were issued in parallel. A few initial SvcParamKeys are defined here.  These keys are useful for HTTPS, and most are applicable to other protocols as well.

-## "alpn"
-
-The "alpn" SvcParamKey defines the Application Layer Protocol
-(ALPN, as defined in {{!RFC7301}) supported by a TLS-based alternative
-service.  Its value SHOULD be an entry in the IANA registry "TLS
-Application-Layer Protocol Negotiation (ALPN) Protocol IDs".
-
-The presentation format and wire format of SvcParamValue
-is its registered "Identification Sequence".  This key SHALL NOT
-appear more than once in a SvcFieldValue.
-
-Clients MUST include this value in the ProtocolNameList in their
-ClientHello's `application_layer_protocol_negotiation` extension.
-Clients SHOULD also include any other values that they support and
-could negotiate on that connection with equivalent or better security
-properties.  For example, when using a SvcFieldValue with an "alpn" of
-"h2", the client MAY also include "http/1.1" in the ProtocolNameList.
-
-Clients MUST ignore SVCB RRs where the "alpn" SvcParamValue
-is unknown or not supported for use with the current scheme.
-
-The value of the "alpn" SvcParamKey can have effects beyond the content
-of the TLS handshake and stream.  For example, an "alpn" value of "h3"
-({{HTTP3}} Section 11.1) indicates the client must use QUIC, not TLS.
+## "transport" and "no-default-transport"
+
+The "transport" and "no-default-transport" SvcParamKeys together
+indicate the set of transport protocols supported by this service endpoint.
+
+Each scheme that is mapped to SVCB defines a set or registry of allowed
+transport values, and a "default set" of supported values, which SHOULD NOT
+be empty.  To determine the set of transport protocols supported by an
+endpoint (the "transport set"), the client collects the set of
+"transport" values, and then adds the default set unless the
+"no-default-transport" SvcParamKey is present.  The presence of a value in
+the transport set indicates that this service endpoint, described by
+SvcDomainName and the other parameters (e.g. "port") offers service with
+that transport.
+
+Clients SHOULD NOT attempt connection to a service endpoint whose
+transport set does not contain any compatible transport protocols.  To ensure
+consistency of behavior, clients MAY reject the entire SVCB RRSet and fall
+back to basic connection establishment if all of the RRs indicate
+"no-default-transport", even if connection could have succeeded using a
+non-default transport.
+
+For "transport", the presentation and wire format are the same,
+in the expectation that values will typically be ASCII strings, but any
+sequence of octets is a permissible value.  This key MAY appear multiple times
+with different values.
+
+The value of "no-default-transport" MUST be empty, and clients SHOULD reject

s/SHOULD/MUST/

It's a syntax error. Syntax errors are fatal.

bemasc

comment created time in 21 days

Pull request review comment MikeBishop/dns-alt-svc

Replace "alpn" with "transport"/"no-default-transport"

responses to the address queries that were issued in parallel. A few initial SvcParamKeys are defined here.  These keys are useful for HTTPS, and most are applicable to other protocols as well.

-## "alpn"
-
-The "alpn" SvcParamKey defines the Application Layer Protocol
-(ALPN, as defined in {{!RFC7301}) supported by a TLS-based alternative
-service.  Its value SHOULD be an entry in the IANA registry "TLS
-Application-Layer Protocol Negotiation (ALPN) Protocol IDs".
-
-The presentation format and wire format of SvcParamValue
-is its registered "Identification Sequence".  This key SHALL NOT
-appear more than once in a SvcFieldValue.
-
-Clients MUST include this value in the ProtocolNameList in their
-ClientHello's `application_layer_protocol_negotiation` extension.
-Clients SHOULD also include any other values that they support and
-could negotiate on that connection with equivalent or better security
-properties.  For example, when using a SvcFieldValue with an "alpn" of
-"h2", the client MAY also include "http/1.1" in the ProtocolNameList.
-
-Clients MUST ignore SVCB RRs where the "alpn" SvcParamValue
-is unknown or not supported for use with the current scheme.
-
-The value of the "alpn" SvcParamKey can have effects beyond the content
-of the TLS handshake and stream.  For example, an "alpn" value of "h3"
-({{HTTP3}} Section 11.1) indicates the client must use QUIC, not TLS.
+## "transport" and "no-default-transport"
+
+The "transport" and "no-default-transport" SvcParamKeys together
+indicate the set of transport protocols supported by this service endpoint.
+
+Each scheme that is mapped to SVCB defines a set or registry of allowed
+transport values, and a "default set" of supported values, which SHOULD NOT
+be empty.  To determine the set of transport protocols supported by an
+endpoint (the "transport set"), the client collects the set of
+"transport" values, and then adds the default set unless the
+"no-default-transport" SvcParamKey is present.  The presence of a value in
+the transport set indicates that this service endpoint, described by
+SvcDomainName and the other parameters (e.g. "port") offers service with
+that transport.

Having one port parameter describe all transports is kind of odd, both when it's TCP vs UDP and UDP vs UDP. For TCP/TLS/H{1.1,2} the port is a TCP port. For UDP/QUIC/H3, the port is a UDP port. That maps TCP and UDP ports onto each other, which seems a little odd? But at least it's distinguishable because the endpoint layer branches.

Suppose we were to later define UDP/VeryFancyProto/H4. This design more-or-less forces all new UDP-based protocols for transporting HTTP to share protocol invariants with QUIC. You get exactly two sets of protocol invariants, one for TCP and one for UDP, and then it starts getting very inconvenient. This is... odd.

On the other hand, if we were to define UDP/QUICv2/H3 such that QUIC and QUICv2 did expect to run together but benefited from an out-of-band version hint (if QUICv2 is purely in-band negotiation like TLS, then there is no need to incorporate it into HTTPSSVC at all), then this works as separate transports. Although it is interesting to note that, in that picture, UDP/QUIC and UDP/QUICv2 have a very different relationship from TCP/TLS. (I don't know how Alternate-Protocol expressed this, but Google's Alt-Svc advertisement for gQUIC seems to use a separate v attribute for denoting versions.)

An alternate design would have been to instead:

  1. Get rid of no-default-transport.
  2. transport is an optional single-valued token. If omitted, it is the default transport.
  3. port and other attributes modify just the one transport.
  4. Replace the "if all of the RRs indicate no-default-transport" with "if none of the RRs indicates the default transport".

This avoids the port oddities and flattens the structure a bit, which seems simpler to me. One wriggle is it requires duplicating the ESNI config per transport whereas this design avoids it in cases where the port confusion is what you wanted. (Clllleeeeaaarrly we need a more complicated encoding! :-D)

Another design would be some odd two-level thing where transports have associated attributes.

@DavidSchinazi I'm curious what your thoughts on all this are. Most of the oddities pertain to QUIC.

bemasc

comment created time in 21 days

Pull request review comment MikeBishop/dns-alt-svc

Replace "alpn" with "transport"/"no-default-transport"

Consider a simple zone of the form

The domain owner could add records like

-    simple.example. 7200 IN HTTPSSVC 1 . alpn=h3 ...
-                            HTTPSSVC 2 . alpn=h2 ...
+    simple.example. 7200 IN HTTPSSVC 1 . transport=quic ...

-The presence of these records indicates to clients that simple.example
-supports HTTPS, and the key=value pairs indicate that it prefers HTTP/3
-but also supports HTTP/2.  The records can also include other information
+The presence of this record indicates to clients that simple.example
+supports HTTPS, and the key=value pairs indicate that it supports QUIC

This can be useful for letting clients using an experimental protocol know that it is available on a pool with a different name.

Note that such a deployment loses downgrade-protection of ALPN. So does a temporary heterogeneous deployment during rollout or rollback on a multi-instance service, but I think such situations are better understood to be temporary.

bemasc

comment created time in 21 days

Pull request review comment MikeBishop/dns-alt-svc

Fully specify HTTPSSVC/Alt-Svc/ESNI interaction

of {{HSTS}}.  The SVCB "esniconfig" parameter is defined for conveying the ESNI configuration of an alternative service.
-The value of the parameter is an ESNIConfig structure {{!ESNI}}
-or the empty string.  ESNI-aware clients SHOULD prefer SVCB/HTTPSSVC RRs with
-non-empty esniconfig.
-
-The parameter value is the ESNIConfig structure {{!ESNI}}
-encoded in {{!base64=RFC4648}} or the empty string.
+The value of the parameter is an ESNIConfig structure {{!ESNI}}.
+In presentation format, the structure is encoded in {{!base64=RFC4648}}.
The SVCB SvcParamValue wire format is the octet string containing the binary ESNIConfig structure.  This parameter MUST NOT appear more than once in a single SvcFieldValue.

-
-### Handling a mixture of alternatives not supporting ESNI
+## Client behavior {#esni-client-behavior}

The general client behavior specified in {{client-behavior}} permits clients to retry connection with a less preferred alternative if the preferred option fails, including falling back to a direct connection if all SVCB options fail. This behavior is not suitable for ESNI, because fallback would negate the privacy benefits of
-ESNI.
+ESNI.  Accordingly, ESNI-capable clients SHALL implement the following
+behavior for connection establishment.

-Accordingly, any connection attempt that uses ESNI MUST fall back only to
-another alt-value that also has the esniconfig parameter.  If the parameter's
-value is the empty string, the client SHOULD connect as it would in the
-absence of any ESNIConfig information.
+1. Perform connection establishment using HTTPSSVC as described in
+   {{client-behavior}}, but do not fall back to the origin's A/AAAA records.
+   If all the HTTPSSVC RRs have esniconfig, and they all fail, terminate
+   connection establishment.
+2. If the client implements Alt-Svc, try to connect using any entries from
+   the Alt-Svc cache.
+3. Fall back to the origin's A/AAAA records if necessary.

-For example, suppose a server operator has two alternatives.  Alternative A
-is reliably accessible but does not support ESNI.  Alternative B supports
-ESNI but is not reliably accessible.  The server operator could include a
-full esniconfig value in Alternative B, and mark Alternative A with esniconfig=""
-to indicate that fallback from B to A is allowed.
+As a latency optimization, clients MAY prefetch DNS records for later steps
+before they are needed.

-Other clients and services implementing SVCB or HTTPSSVC with esniconfig
-are encouraged to take a similar approach.
+## Deployment considerations

+An HTTPSSVC RRSet containing some RRs with esniconfig and some without is
+vulnerable to a downgrade attack.  This configuration is NOT RECOMMENDED.
+Zone owners who do use such a mixed configuration SHOULD mark the RRs with
+esniconfig as more preferred (i.e. smaller SvcFieldPriority) than those
+without.

Marking it more preferred means ESNI is still not downgrade-protected, right? I.e. the priority thing is just to be opportunistically okay? It's probably worth clarifying. This confused me at first.
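The three-step ESNI-capable connection procedure in the quoted hunk can be sketched roughly as follows. This is a non-normative Python sketch; `try_connect` and the dict-based endpoint records are stand-ins, not draft text.

```python
def establish_connection(httpssvc_rrs, alt_svc_cache, a_aaaa, try_connect):
    """Sketch of the quoted ESNI-aware client procedure.

    try_connect is a hypothetical callable that attempts one endpoint and
    returns a connection object or None on failure."""
    # Step 1: HTTPSSVC endpoints, without falling back to A/AAAA.
    for rr in httpssvc_rrs:
        conn = try_connect(rr)
        if conn is not None:
            return conn
    if httpssvc_rrs and all(rr.get("esniconfig") for rr in httpssvc_rrs):
        # All RRs had esniconfig and all failed: terminate, never fall back.
        return None
    # Step 2: Alt-Svc cache entries, if the client implements Alt-Svc.
    for entry in alt_svc_cache:
        conn = try_connect(entry)
        if conn is not None:
            return conn
    # Step 3: fall back to the origin's A/AAAA records.
    for addr in a_aaaa:
        conn = try_connect(addr)
        if conn is not None:
            return conn
    return None
```

The key property is visible in the structure: when every HTTPSSVC RR carries esniconfig, failure terminates establishment rather than leaking the SNI via fallback.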

bemasc

comment created time in 21 days

Pull request review comment MikeBishop/dns-alt-svc

Fully specify HTTPSSVC/Alt-Svc/ESNI interaction

record, groups of clients will necessarily receive the same SvcFieldValue.  Therefore, HTTPSSVC is not suitable for uses that require single-client granularity.

+## Interaction with Alt-Svc
+
If the client has an Alt-Svc cache, and a usable Alt-Svc value is present in that cache, then the client MAY skip the HTTPSSVC query.
+If Alt-Svc connection fails, the client SHOULD fall back to the HTTPSSVC
+client connection procedure ({{client-behavior}}).

This text still allows clients to short-circuit HTTPSSVC if an Alt-Svc cache entry exists, which means all the issues from #105 apply. (Is this just a mistake? The client behavior below seems to suggest HTTPSSVC goes first.)

Or is the intent that the client behavior overrides this sentence? It's probably worth writing this a bit more explicitly. The combination is also weird because that text isn't additional requirements on top of the base algorithm. It completely changes it. (The base algorithm, while not explicitly spelled out, implies that the Alt-Svc integration point is before HTTPSSVC while the ESNI algorithm says it's after.)

bemasc

comment created time in 21 days

pull request comment openssl/openssl

statem: fix the alert sent for too large messages

Or perhaps we just fold one into the other.

tomato42

comment created time in 21 days

pull request comment MikeBishop/dns-alt-svc

Generalize SERVFAIL handling for security

Ah oops! Apparently I can't read. :-)

(We were discussing how we want to handle this and the risks associated.)

bemasc

comment created time in 22 days

pull request comment MikeBishop/dns-alt-svc

Generalize SERVFAIL handling for security

This is to address the attack we discussed where the attacker drops QUIC packets to time out the HTTPSSVC response, right? I think the rationale here should be in the draft, in case this fix fails. E.g., if many authoritative servers time out unexpected record types, HTTPSSVC would break sites and thus this would not be viable.

Having the reasoning documented means we won't forget to explore alternate fixes. (Perhaps we make some kind of "combined query" to the DoH resolver so it knows to pack the responses together?)

bemasc

comment created time in 22 days

pull request comment openssl/openssl

Do not silently truncate files on perlasm errors

> Would you mind adding it for master, too?

Already did in https://github.com/openssl/openssl/pull/10930.

davidben

comment created time in 23 days

issue comment MikeBishop/dns-alt-svc

Why allow multiple values for the same parameter at all?

I think I've lost track of what "this" is and what is being revisited where. :-) I think the encoding should be:

  • You only get one parameter per key, with one value.
  • As is already the case, that value has whatever internal format it needs. That internal format may be a single structure, a list of structures, or something more complicated.
  • Duplicate parameters are a syntax error. The receiver treats this as a parse error.
  • Keys must be listed in ascending order, so the receiver can check for duplicates without maintaining much state. Non-sorted keys are a parse error.

Re protocol IDs not having fixed lengths, just add length prefixes as needed. That's all a multi-valued parameter is doing anyway. (I don't think we should use NUL-termination. That sort of thing is prone to injection problems. See also all the problems when things get stuck into C-style strings.)

(I still need to review #97, but I doubt a vector<transport> would work anyway given that you need a port number. If it ends up being a vector<tuple<transport, port>> or a different design, that further suggests a key-specific serialization. A vector<tuple<transport, port>> needs a length prefix on the transport, not on the overall tuple.)
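To make the "just add length prefixes as needed" point concrete, here is a purely hypothetical serialization of a vector<tuple<transport, port>> (this encoding is illustrative, not from the draft), along with the strictly-ascending key check described above.

```python
import struct

def encode_transports(entries):
    """Encode a list of (transport, port) pairs. Only the variable-length
    transport string needs a length prefix (one octet here); ports are
    fixed-width 16-bit big-endian. Hypothetical format for illustration."""
    out = bytearray()
    for transport, port in entries:
        raw = transport.encode("ascii")
        if len(raw) > 255:
            raise ValueError("transport name too long")
        out.append(len(raw))
        out += raw
        out += struct.pack("!H", port)
    return bytes(out)

def decode_transports(data):
    """Inverse of encode_transports; assumes well-formed input."""
    entries, i = [], 0
    while i < len(data):
        n = data[i]; i += 1
        transport = data[i:i + n].decode("ascii"); i += n
        (port,) = struct.unpack_from("!H", data, i); i += 2
        entries.append((transport, port))
    return entries

def keys_valid(keys):
    """Duplicate detection via the ascending-order rule: comparing each
    key only to its predecessor suffices, so the receiver keeps no set."""
    return all(prev < cur for prev, cur in zip(keys, keys[1:]))
```

The round trip `decode_transports(encode_transports(...))` preserves the list, and `keys_valid` rejects both out-of-order and duplicate keys with a single pass.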

davidben

comment created time in 25 days

issue comment grpc/grpc

//test/cpp/end2end:async_end2end_test failing on msan

> Is there an easy way to see which compiler/optimization flags chromium uses for msan? (I'm not familiar with chromium's build).

I don't know of one at the level of actual compiler flags. It also is potentially different for different targets. The builders do output the high-level GN options, which indeed include is_debug = false. https://ci.chromium.org/p/chromium/builders/ci/Linux%20MSan%20Builder/24344

What that compiles into is kind of all over the place, but I believe it ultimately becomes -O2 most of the time: https://source.chromium.org/chromium/chromium/src/+/master:build/config/compiler/BUILD.gn;l=2010?q=%5C-O%20case:yes&ss=chromium%2Fchromium%2Fsrc:build%2Fconfig%2F

> I don't know if local (=non-RBE) msan build will give you the right signal. We mostly use RBE builds for running sanitizers because they are the same as what our CI runs.

Ah, that's annoying. I guess the C++ runtime isn't one of the hermetic parts of the Bazel build? Is the compiler hermetic at least? I wonder if I can somehow tell MSan to not care about the uninstrumented C++ runtime bits and try to get far enough to run things or look at the disassembly.

One idle thought is maybe making aes_nohw_and into #defines, at least for the SSE2 build, would make MSan + -O0 behave better. Making them #defines for the uint32_t and uint64_t build makes me a little sad due to lost type-checking, but the SSE2 build gets the same type-checking out of the intrinsics, and you'd be getting the SSE2 intrinsics...

jtattermusch

comment created time in a month

issue comment davidben/client-language-selection

Origins containing multiple languages

I like that idea! That means the UI component is effectively a "tell the site I like language X" permission prompt, but it feels natural.

One oddity: if the first page you go to is the A,B,C one so the browser remembers C, you'll hit a default language on the A,B pages, which isn't great. But if we believe in the UI mitigation, that will still get in the right state. Or we could lean on other mitigations and automatically retry if we think the current page is entirely unreadable.

jyasskin

comment created time in a month


pull request comment openssl/openssl

Do not silently truncate files on perlasm errors

> The string should be "error closing STDOUT: $!"

Done.

davidben

comment created time in a month

push event davidben/openssl

David Benjamin

commit sha c24a13796ea4b77c35f320e19f42fa3cc03ba1a6

Do not silently truncate files on perlasm errors

If one of the perlasm xlate drivers crashes, OpenSSL's build will currently swallow the error and silently truncate the output to however far the driver got. This will hopefully fail to build, but better to check such things.

Handle this by checking for errors when closing STDOUT (which is a pipe to the xlate driver).

This is the OpenSSL 1.1.1 version of https://github.com/openssl/openssl/pull/10883 and https://github.com/openssl/openssl/pull/10930.

view details

push time in a month

pull request comment openssl/openssl

Also check for errors in x86_64-xlate.pl.

The string should be "error closing STDOUT: $!"

Done. Applied this to all the files since the others already merged stuff in.

davidben

comment created time in a month

push event davidben/openssl

David Benjamin

commit sha 628a6e0e3de96fc6b6a252d2aa3c6d75ac0443bd

Include $! in the perl close STDOUT error messages

Per review comment.

view details

push time in a month

issue comment MikeBishop/dns-alt-svc

Consider SVCB-Used header

Why does the server need this information when it already knows the IP the client connected to?

enygren

comment created time in a month

issue comment grpc/grpc

//test/cpp/end2end:async_end2end_test failing on msan

So I might be able to reproduce. When you build for MSan, is it just cloning the repo, getting submodules, and running that bazel command, or is there something else? I ran into trouble with it pulling in my uninstrumented system libstdc++.

jtattermusch

comment created time in a month

issue comment grpc/grpc

//test/cpp/end2end:async_end2end_test failing on msan

Yeah, that is quite slow. I wonder if the unoptimized mode isn't inlining aes_nohw_and and friends. Or maybe MSan is handling SSE2 intrinsics badly and relies on LLVM to clean it back up. https://boringssl.googlesource.com/boringssl/+/refs/heads/master/crypto/fipsmodule/aes/aes_nohw.c#62

Although if I build BoringSSL locally without optimizations and with MSan, I still get much better than 30s for 4MB of AES-GCM.

jtattermusch

comment created time in a month

issue comment grpc/grpc

//test/cpp/end2end:async_end2end_test failing on msan

I notice your MSan configuration builds with -O0. I expect that means, in addition to not optimizing the code, it's not optimizing the MSan instrumentation.

FWIW, Chromium builds MSan with is_debug = false, which does apply optimizations. Dunno what the internal repository does. https://www.chromium.org/developers/testing/memorysanitizer

jtattermusch

comment created time in a month

issue comment grpc/grpc

//test/cpp/end2end:async_end2end_test failing on msan

Sorry, missed this yesterday. (A lot of GitHub notifications go to my personal email, so everything is confusing. :-/ )

All the inputs and outputs in the log appear to be correct. So that rules out the theory that somehow we're getting the wrong answer.

Looking at the failure, it's:

Expected equality of these values:
  cq->AsyncNext(&got_tag, &ok, deadline)
    Which is: 2
  CompletionQueue::GOT_EVENT
    Which is: 1

In hindsight, we should have decoded those values earlier. 2 appears to be CompletionQueue::TIMEOUT. Probably all that's happening is that the new side-channel protections, combined with the MSan instrumentation, are causing the test to time out.

jtattermusch

comment created time in a month

issue comment MikeBishop/dns-alt-svc

ESNI lifetime implications from Alt-Svc vs HTTPSSVC precedence

I have concerns about any approach where HTTPSSVC without ESNI takes precedence over Alt-Svc ESNI. That arrangement creates an incentive for hostile DNS servers to synthesize fake HTTPSSVC records, in order to override the ESNIConfig received in Alt-Svc.

It's certainly true that tying ESNI to DNS has consequences given a hostile DNS server. Both that and the dependency on new DNS records (which effectively depends on DoH) are indeed disappointing. But both properties have been true throughout this iteration of ESNI. We switched to the HTTPSSVC record on the assumption that it was equivalent to the original ESNI record. The Alt-Svc flaw breaks that assumption.

If we want to solve the hostile DNS server problem, we would need to get rid of all Alt-Svc failure paths on the client. The hostile DNS server also controls A/AAAA records, so it could blackhole alt-svc-entrypoint.cdn.example and cause all Alt-Svc routes to fail. (If we punt that problem and try only to solve the broad deployment problem, an Alt-Svc to HTTPSSVC fallback is tenable, though it alone doesn't solve the name pinning problem.)

Unless we can make ESNI in Alt-Svc truly equivalent to HTTPSSVC (unlikely given the TTL), it must be possible for servers to use HTTPSSVC with ESNI, use Alt-Svc without ESNI, and still get ESNI support for clients which support both. That means HTTPSSVC must be able to override Alt-Svc.

If we can come up with a way to salvage ESNI in Alt-Svc, we can always add some Alt-Svc directive that means this configuration cannot be overridden by HTTPSSVC. (This, of course, would still need to meet all the usual deployability and availability requirements.) We'd also still need to backport all of the ALPN-related fixes or general Alt-Svc support is dead in the water. All of the issues raised in #73 apply to Alt-Svc once ESNI shows up.

As for lifetimes, ESNI keys are substantially less sensitive than TLS keys, so requiring them to be rotated ultra-frequently seems like a low priority. Typical TLS certificates last for 90 days. A similar or shorter lifetime for ESNI keys would easily be compatible with Alt-Svc ESNI.

I agree they're less sensitive. I'm more concerned about the name pinning than the key rotation.

A footnote on the equivalence to TLS keys however: 90 days is still too long for TLS certificates (the world needs to move to issuance automation). Also, as long-lived keys are fundamentally more sensitive than short-lived keys, server deployments apply extra protections to their long-lived keys like keeping them off of serving frontends. If ESNI keys, due to their lifetime, end up warranting such protections, it would double the cost of such measures.

But reasonably short TTLs mostly make sense if clients honor Alt-Svc going to alternate servernames anyways as you want some degree of agility on these.

Certainly switching the names increases the desire for shorter TTLs, but that doesn't change the nature of Alt-Svc. Alt-Svc fundamentally only applies to subsequent connections, so the lifetime must extend to the next time you visit the site. (The current visit already has a preexisting connection. Even in clients willing to bear the complexity of cycling to a new connection instantly, this is not useful for ossification purposes because blocking the alternate has no availability consequences while the main connection lives.)

The requirements on large enough Alt-Svc TTLs get stronger with ESNI. If you only use Alt-Svc as a routing optimization, perhaps you're fine with low Alt-Svc coverage as it's just an optimization. An ESNI-bearing Alt-Svc with small TTL barely applies.

(One good use-case would be opportunistically supporting ESNI with QUIC prior to being ready to support it in TLS and before clients support doing HTTPSSVC lookups.)

This use case doesn't make sense. Remember that clients will always try TCP. We often say "fallback", but this is misleading. One of the failure modes is blackholed packets, so the two are actually run in parallel; doing this will deterministically leak the SNI.

Indeed, one of the requirements on any fix to the many Alt-Svc ALPN mistakes is to prevent this, so that servers do not accidentally deploy a QUIC-only ESNI. Out-of-band signal => fallback for robustness => failing to provide equivalent security is a server configuration error.

davidben

comment created time in a month

issue comment grpc/grpc

//test/cpp/end2end:async_end2end_test failing on msan

Thanks! Would you mind trying that again with this patch? I was dumb and forgot about threads, so that log is a little jumbled. :-(

Here's a new patch that'll hopefully avoid the issue: boringssl-patch-2.txt. Also, does the non-MSan build with OPENSSL_ia32cap=0 reproduce the issue, or is it just MSan?

jtattermusch

comment created time in a month

create branch davidben/openssl

branch : perlasm-errors-1.1.1

created branch time in a month

PR opened openssl/openssl

Do not silently truncate files on perlasm errors

If one of the perlasm xlate drivers crashes, OpenSSL's build will currently swallow the error and silently truncate the output to however far the driver got. This will hopefully fail to build, but better to check such things.

Handle this by checking for errors when closing STDOUT (which is a pipe to the xlate driver).

This is the OpenSSL 1.1.1 version of https://github.com/openssl/openssl/pull/10883 and https://github.com/openssl/openssl/pull/10930.

+160 -160

0 comment

160 changed files

pr created time in a month

PR opened openssl/openssl

Also check for errors in x86_64-xlate.pl.

In https://github.com/openssl/openssl/pull/10883, I'd meant to exclude the perlasm drivers since they aren't opening pipes and do not particularly need it, but I only noticed x86_64-xlate.pl, so arm-xlate.pl and ppc-xlate.pl got the change.

That seems to have been fine, so be consistent and also apply the change to x86_64-xlate.pl. Checking for errors is generally a good idea.

+1 -1

0 comment

1 changed file

pr created time in a month

create branch davidben/openssl

branch : perlasm-xlate-errors

created branch time in a month

pull request comment openssl/openssl

Do not silently truncate files on perlasm errors

I can put together a separate PR for 1.1.1. It's just a sed line. :-)

davidben

comment created time in a month

issue comment grpc/grpc

//test/cpp/end2end:async_end2end_test failing on msan

Ah okay. :-) Thanks!

jtattermusch

comment created time in a month

issue comment grpc/grpc

//test/cpp/end2end:async_end2end_test failing on msan

@jtattermusch Can you answer the questions above? We still need to determine the actual cause of the bug.

jtattermusch

comment created time in a month

issue comment MikeBishop/dns-alt-svc

ESNI lifetime implications from Alt-Svc vs HTTPSSVC precedence

Using HTTPSSVC to repair Alt-Svc errors rather than checking them ahead of time is an interesting idea, but it does not address the problem for clients that cannot make HTTPSSVC queries. Indeed the PR notes:

If the client does not support HTTPSSVC, it MUST fail the connection to avoid enabling a downgrade attack.

That means a server still cannot put ESNI in Alt-Svc without long-term commitment about their hosting provider. While the PR does claim Alt-Svc is unnecessary:

Origins MAY use esniconfig in Alt-Svc, HTTPSSVC, both, or neither

This is false. Saying it will break sporadically doesn't change the fact that it's broken. It also isn't sporadic: it will deterministically break for all but the first connection. That does not count as deploying ESNI.

The section "Non-authoritative servers" needs to go in ESNI with full TLSWG analysis as it has security consequences for both the client and server and affects TLS itself. As to those security consequences, I do not think we should do this. Extracting an authenticated signal out of explicitly getting a bad certificate is extremely questionable. Certificate errors may arise from all kinds of reasons. Overloading that would be a problem for a client implementation.

Additionally, the text mandates particular behavior on unknown names. Given the subtle problems that come up when HTTP servers accept unknown Host headers, I don't think we should introduce such requirements, certainly not via an implicit signal like this.

This also still assumes the previous hosting provider behaves in a particular way.

Finally, this is far too much complexity on the client. Please use the much simpler solution outlined above.

davidben

comment created time in a month

issue opened WICG/cors-rfc1918

Bypass via "mixed" content

tl;dr: When a private IP fetches from a public IP, we should require TLS from the public IP.

Suppose http://corp.example is an intranet or localhost server. It serves an HTML file which has some JS, so it writes <script src="/script.js"></script>.

An attacker then points evil.example to corp.example's IP address and causes the user to visit http://evil.example. The intranet site could detect this by checking the Host header but, realistically, many servers do not check this.

CORS-RFC1918 will treat that document as private address space. The browser then loads /script.js which resolves to http://evil.example/script.js. The attacker now rebinds evil.example to an attacker-controlled IP. The attacker now has arbitrary script running in a private address space context, which bypasses CORS-RFC1918 checks.

I think the two things that went wrong in this scenario are:

  1. Although the document was authenticated via the intranet, part of the base URL wasn't, and the document's meaning depends on the base URL.
  2. An intranet-authenticated context sourced unauthenticated script from the public internet, which means its security level is effectively downgraded.

(1) seems hard to fix, short of the browser having intranet vs internet names preconfigured. In that case we wouldn't categorize these based on the resolved IP at all. But then we don't get to plug holes in the mass of broken configurations today.

(2) is really an analog of mixed content in HTTPS. If we blocked that, we would block this attack as well as other misconfigurations which make no sense. Though I do worry that (1) may have other fun implications to think about.

created time in a month

issue opened WICG/cors-rfc1918

Examples are inconsistent about http vs https

Example 1 says the router management site is deployed at http://admin:admin@router.local/set_dns. However, the example later uses an iframe to https://admin:admin@router.local/set_dns in the attack.

Example 3 says the internal link-shortening service is at https://go/. The next paragraph then talks about clicking http://go/* links. It then goes back to talking about leaking https://go/shortlink.

Given intranet sites tend not to use https, I'm guessing all of these were meant to be http?

created time in a month

issue comment httpwg/http-extensions

SameSite attribute is not easily extensible

I believe Firefox's behavior is that it ignores the attribute when SameSite=garbage (falling back to a previous one, if any). That's the -02 behavior, rather than the -03 behavior, which unconditionally takes the last attribute (so no falling back to a previous one) and then ignores unrecognized values.

I believe Chrome and Safari (as of 10.15) implement the -03 behavior, while Firefox implements the -02 behavior. If we were trying to extend SameSite, I think the -02 behavior would be ideal because it gives the developer full control over the fallback chain. That would require changing Chrome as well as the macOS/iOS cookie logic again. Failing that, leaving rfc6265bis alone but changing cookie-incrementalism to treat SameSite=garbage as SameSite=None rather than the default would cover the most likely desired fallback when trying to extend SameSite.

That said, I agree with you that we probably shouldn't try to extend SameSite anymore, so trying to optimize for that is not terribly important.

Whatever happens, at least one of Chrome+Safari or Firefox should change because they do not behave the same. As things currently stand, it's Firefox that's out of date, since it implements the -02 behavior rather than the -03 behavior.

sbingler

comment created time in a month

issue comment grpc/grpc

//test/cpp/end2end:async_end2end_test failing on msan

If you patch boringssl-debug.txt into BoringSSL, can you attach the full output for a failing test? Assuming I didn't mess up the logging, that should print all the inputs and outputs for aes_nohw_*. I can then write some tool to figure out which output is wrong.

jtattermusch

comment created time in a month

issue comment grpc/grpc

//test/cpp/end2end:async_end2end_test failing on msan

I wasn't able to reproduce with that command (I had to remove the --bazelrc flag due to missing credentials), though I'm getting other MSan errors, which seem to be a result of using an uninstrumented C++ runtime.

Looking at the MSan error, I think the uninitialized memory itself is a red herring. I think because AsyncNext failed, it never initialized got_tag. Perhaps that EXPECT_EQ should be ASSERT_EQ. The MSan correlation is likely just that, by disabling assembly, you're using the fallback implementation.

If you run in a non-MSan build with OPENSSL_ia32cap=0 in the environment, does it also reproduce? Is there some lower level test that's also failing? That output is hard to do anything with.

jtattermusch

comment created time in a month

pull request comment w3c/resource-timing

Zero RTT clarifications from davidben

Is there any update on whether tests for this are feasible at all?

Tests are perfectly feasible within browsers. Indeed Chrome implements this behavior today and unit tests it at the socket level. As already discussed on the bug, whole-browser WPT tests are not possible due to tooling and infrastructure deficiencies in WPT.

yoavweiss

comment created time in a month

pull request comment openssl/openssl

Remove x86/x86_64 BSAES and AES_ASM support

Here's the x86_64 bit: https://github.com/openssl/openssl/pull/10884

bernd-edlinger

comment created time in a month

PR opened openssl/openssl

Add vpaes_ctr32_encrypt_blocks for x86_64

This adds an analog to the (currently unused) "2x" optimization from vpaes-armv8.pl and a ctr128_f implementation.

This PR probably should not be merged as-is. I haven't integrated it into the C portions or even tested that it works in OpenSSL. Rather, this PR is so @bernd-edlinger can look into integrating it into #9677 if it helps remove bsaes-x86_64.pl which, while constant-time, is currently integrated with other bits that are not constant-time and is a little awkward at small inputs.

In BoringSSL, we found that, while bsaes-x86_64.pl is still faster, the gap became small enough that we were comfortable removing bsaes-x86_64.pl and reducing the number of AES implementations we carried around. Perhaps this'll help you all too.

See also https://github.com/openssl/openssl/pull/9677#issuecomment-575374166 and https://boringssl-review.googlesource.com/c/boringssl/+/35364. (Note that, by then, we had already patched out the non-constant-time fallback for small inputs in bsaes_ctr32_encrypt_blocks.)

Checklist
  • [ ] tests are added or updated
+299 -0

0 comment

1 changed file

pr created time in a month

create branch davidben/openssl

branch : vpaes-2x-x86-64

created branch time in a month

create branch davidben/openssl

branch : perlasm-errors

created branch time in a month

PR opened openssl/openssl

Do not silently truncate files on perlasm errors

If one of the perlasm xlate drivers crashes, OpenSSL's build will currently swallow the error and silently truncate the output to however far the driver got. This will hopefully fail to build, but better to check such things.

Handle this by checking for errors when closing STDOUT (which is a pipe to the xlate driver).

+162 -162

0 comment

162 changed files

pr created time in a month

issue comment openssl/openssl

Data loss with TLS 1.3

This looks like another variation of #7967. TCP's reset semantics do not play well with OpenSSL's TLS 1.3 NewSessionTicket strategy.

MrAnno

comment created time in a month

issue comment MikeBishop/dns-alt-svc

ESNI lifetime implications from Alt-Svc vs HTTPSSVC precedence

The proposed fix does have an inverse consequence: Alt-Svc is more trustworthy than HTTPSSVC because it came from the origin. That makes overriding it odd. For everything currently delivered over Alt-Svc and HTTPSSVC, this is fine, but we are effectively constraining Alt-Svc to never depend on this trustworthiness in the future. (I.e., it can only be used for DNS-like things.)

This is hopefully fine because we can always make a new header (and probably should given all of the mistakes in Alt-Svc), but this is an implication to keep in mind.

davidben

comment created time in a month

pull request comment openssl/openssl

Remove x86/x86_64 BSAES and AES_ASM support

Well, when removing BSAES is not an option, it would be better to use VPAES instead of BSAES if both are possible. Currently BSAES is slower and uses non-constant-time code for the key schedule and small data blocks. So which way will we go?

For BoringSSL, we did a handful of things:

  • On x86_64, the VPAES and BSAES gap got much smaller after we implemented a variation of the _vpaes_encrypt_2x optimization from vpaes-armv8.pl for vpaes_ctr32_encrypt_blocks. (Note _vpaes_encrypt_2x isn't actually used anywhere in vpaes-armv8.pl, which is itself a missed opportunity.) With that done, we decided it was no longer worth carrying BSAES for x86_64.
  • On armv7, we translated vpaes-armv8.pl to armv7 to cover the non-parallel modes on NEON-capable chips. (Most of our 32-bit ARM callers require NEON at this point.) We weren't able to get the _vpaes_encrypt_2x strategy to help much on armv7. It seems armv7 NEON doesn't have great byte shufflers. Instead, we patched out the bsaes-armv7 to aes-armv4 fallback paths (at a perf cost) and wrote code to convert between vpaes-armv7 and bsaes-armv7 key schedule formats as needed when passing large inputs to ctr128_f.

The armv7 solution was a bit fussy. At the time, we didn't have the newly-written fallback implementation. I haven't looked at whether it would now make sense to use a simpler strategy.

The C bits have probably diverged, but I'm happy to upstream whatever assembly changes are of interest.

bernd-edlinger

comment created time in a month

issue opened MikeBishop/dns-alt-svc

ESNI lifetime implications from Alt-Svc vs HTTPSSVC precedence

The draft originally said (prior to #66) that the Alt-Svc cache overrides HTTPSSVC. This had problems (issues #58 and #60), so #66 downgraded it to a MAY. It seems this still has problems.

HTTPSSVC vs Alt-Svc

Consider a server which uses both QUIC and ESNI. It configures both in HTTPSSVC. It also cares about HTTPSSVC-less clients (older client or legacy DNS resolver), so it configures QUIC in Alt-Svc. Is it required to configure ESNI in Alt-Svc, or can it leave things alone (with the understanding that ESNI will be limited to clients that support HTTPSSVC)?

The spec currently says a client (that would otherwise support HTTPSSVC) MAY skip an HTTPSSVC lookup given an Alt-Svc cache entry. That means, for ESNI to work, the server MUST configure it in both. This is not obvious and should be written down. More importantly, it has deployment consequences.

HTTPSSVC records apply to the current HTTP request. If the client has no cached DNS record, it still queries DNS and gets HTTPSSVC. That means HTTPSSVC TTLs may be set more-or-less freely depending on the site's performance vs. flexibility needs. Let's say it's O(1 hour).

Alt-Svc headers apply to subsequent HTTP requests. If the client has no Alt-Svc entry cached, it will send the HTTP request without Alt-Svc. That means Alt-Svc TTLs must cover the time to the next HTTP request for Alt-Svc to be used at all. For reference, I see google.com currently uses 30 days.

Commitments

For the duration of the HTTPSSVC or Alt-Svc lifetime, the server operator has made a commitment to the client. ESNI is a soft commitment that the server understands this ESNI key and a hard commitment that the server is colocated with the public name. The first lower-bounds key lifetime and rotation on the server. There is a recovery mechanism, but it is expensive, so this is a soft commitment. The second is roughly a commitment to use a particular hosting provider. ESNI's retry mechanism requires the public name, so this is a hard commitment. Breaking this will knock out your site.

(Note Alt-Svc without ESNI was not a hosting provider commitment. A provider-specific Alt-Svc may fail if the site changes providers, but the client could still connect without Alt-Svc. ESNI must take this fallback away to prevent network downgrade.)

HTTPSSVC and Alt-Svc commitment timescales are qualitatively different. Saying ESNI servers must advertise in both, as implied by the spec today, means servers must incur a long-lived hosting provider commitment to deploy ESNI at all. It also means ESNI keys must be long-lived, which makes them more sensitive.

Proposed fix

Given the above, I don't see how allowing Alt-Svc to override HTTPSSVC is tenable. That suggests changing the spec so a client that makes HTTPSSVC queries makes them even if Alt-Svc is available. If it gets an HTTPSSVC record, it ignores Alt-Svc and uses those instead. Otherwise, it may freely use Alt-Svc.

This is fussy because Alt-Svc itself allows replacing the origin hostname with an alternate name. Clients would likely want to query the alternate's A/AAAA records, the origin's A/AAAA records, and the origin's HTTPSSVC records in parallel. However, the alternate may leak ESNI, so the alternate connection must wait to see whether HTTPSSVC aborts it before proceeding past that query.

That adds even more complexity to the prospect of actually implementing remote Alt-Svc. Personally, I think all these name indirections are seeming more and more like a mistake and questionably worthwhile.

Whither ESNI in Alt-Svc?

With the above, it is no longer strictly necessary to allocate a way to spell ESNI in Alt-Svc. I don't know whether we still want to. This issue means ESNI in Alt-Svc is very different from ESNI in HTTPSSVC. At minimum, we must clearly call out the implications of the longer lifetime in the spec. We could decide this is not worth the trouble. On the other hand, it's likely a number of clients won't make HTTPSSVC queries for some time, and perhaps those clients getting ESNI for the subset of servers willing to make a longer-term public name commitment is worthwhile.

Parting thought

I think a lesson here is we cannot completely abstract ESNI from its delivery mechanism. Pulling ESNI into HTTPSSVC is reasonable so we only have one record to query, but HTTPSSVC's decisions still have implications for ESNI. (@chris-wood, I dunno if you watch this repo, so CC'ing you in here explicitly.)

created time in a month

issue comment MikeBishop/dns-alt-svc

Parameter to indicate no HSTS-like behavior?

Given that it is 2020 and HTTPS is table stakes, I'm inclined to think this would not be the right tradeoff.

enygren

comment created time in a month

issue comment MikeBishop/dns-alt-svc

Parameter to indicate no HSTS-like behavior?

Such a CDN would then cause all N customers to go through the allow-insecure path, right? That means the customers who don't update their CNAME would inadvertently get a less secure configuration, so one would still want everyone to update CNAMEs, but now with decreased incentive (but correspondingly more adoption of other bits of HTTPSSVC, so it's a tradeoff between adoption and getting the ecosystem in a better place).

enygren

comment created time in a month

pull request comment MikeBishop/dns-alt-svc

Alt-Used example

Sending this header has some interesting tracking consequences. Your DNS resolver can unilaterally send a common cross-site identifier to every site by faking an Alt-Svc name. The DNS resolver would answer for that name too, so it doesn't matter if the name is fake. More generally, this is feeding an insecure value into a header in a secure context, which is odd.

(Chrome doesn't send Alt-Used for Alt-Svc either, though I don't know what the reasoning was at the time.)

enygren

comment created time in a month

issue comment httpwg/http-extensions

Structured Headers and GREASE

It could, but that'd be a really weird bug for servers to have, whereas not properly skipping over unknown keys is much easier to do on accident. (Maybe I just parse the whole thing with a giant regex and make assumptions about the order.)

annevk

comment created time in a month

issue comment httpwg/http-extensions

Structured Headers and GREASE

For TLS, we've never fed state in as input to GREASE, which is the safest option privacy-wise. But you're right that you then need to worry about reproducibility. To that end, if we GREASE some TLS codepoint, we always GREASE it, with the randomness coming in only in the particular values, as a low-pass discouragement against hardcoding values. But if the implementation is intolerant to only values that start with 5, it is indeed unlikely to be reproducible enough.

annevk

comment created time in a month

push event google/der-ascii

David Benjamin

commit sha 9b24cdccdc9750d87a207bd235bd3f075c5b89fd

Fix example ordering

I added the `indefinite` and `long-form:N` examples in the wrong place.

view details

push time in a month

push event google/der-ascii

David Benjamin

commit sha 992b8fdf54d6338890a2480309f1fd5bba0def0a

Fix link to OpenSSL's documentation

The master branch's documentation seems to have a broken link, so point to 1.1.1, which is more likely what folks are using anyway.

view details

push time in a month

issue comment httpwg/http-extensions

Structured Headers and GREASE

I am, unsurprisingly, in favor of GREASEing things on general principle. :-) Although the network stack only produces a few core headers; others, like Fetch metadata, mostly come from outside anyway. GREASE would want to be applied where the header is produced, most likely. (Well, we could implement some generic post-processing scheme, but then all the headers would need compatible mechanisms and we'd need to know which they are.)

annevk

comment created time in a month

pull request comment openssl/openssl

Add AES consttime code for no-asm configurations

Exciting! What's the S-box implementation strategy? It's probably worth writing down in a comment. Also I'm curious. :-)

Did you consider bitslicing? That can help smooth over the performance hit on parallel modes like AES-GCM. (I recently finished a constant-time fallback AES myself. I used bitslicing with this circuit. It seems faster than this PR even on single-block operations, but it's also just as likely that I messed up the comparison, so take that with a grain of salt.)

bernd-edlinger

comment created time in a month

issue comment quicwg/base-drafts

Confusing SNI recommendation in the HTTP/3 spec

For HTTP/1.1 and HTTP/2 over TLS, we do not send the IP literal in SNI. Chrome accidentally did for a few releases, which broke Apache httpd. It got confused because IPv6 literals have colons. Of course, the compatibility issue does not apply to a new protocol like QUIC.

It would be nice for QUIC to mandate SNI, and, all else equal, avoiding random TLS inconsistencies is good. But this rule is odd and "you MUST send SNI whenever you are supposed to send SNI" is tautological. TLS 1.3 says something woefully noncommittal:

Additionally, all implementations MUST support the use of the "server_name" extension with applications capable of using it. Servers MAY require clients to send a valid "server_name" extension. Servers requiring this extension SHOULD respond to a ClientHello lacking a "server_name" extension by terminating the connection with a "missing_extension" alert.

I dunno, everything is a mess. :-( If we were doing things anew, perhaps we'd have stuck the port number in there too. (As it stands, TLS does not authenticate the port, yet the port is part of the origin, and I doubt HTTP servers reliably check the port in the Host header.)

RyanAtGoogle

comment created time in 2 months

issue comment pyca/cryptography

EC Private Keys Are Incompatible With iOS/OS X Security APIs (support SECG serialization)

At the time I made the comment, the length passed to to_bytes was computed as (x.bit_length() + 7) // 8, etc., which is not the field size. Glad to see you've since fixed it, though note that the public key coordinates are field elements, while the private key is a scalar (a number modulo the group order). These are not the same modulus.

Hasse's theorem implies they are close, but that bound doesn't quite guarantee the same bit or byte length. I vaguely recall there being some uncommon named curves where the byte lengths indeed differ, but I'm not sure which one off-hand or whether cryptography.io supports any of them.

Looks like key_size will give you the size of the scalar. I don't see a corresponding constant for the field size, but better to use the provided point encode and decode functions anyway. https://cryptography.io/en/latest/hazmat/primitives/asymmetric/ec/#cryptography.hazmat.primitives.asymmetric.ec.EllipticCurvePrivateKey.key_size

dimitribouniol

comment created time in 2 months

pull request comment openssl/openssl

Allow EC keys without pubkey to be used for SSL

Even when the private key is offloaded, there's still hopefully a public key available. If nothing else, it's in the certificate. (In fact that's what Chrome does for client certificates. We used to ask the OS's smartcard APIs for public key metadata like type or size, but the smartcards tend to give the wrong result or an error, so we switched to just parsing it out of the certificate.)

dwmw2

comment created time in 2 months

pull request comment mit-plv/fiat-crypto

Don't widen carry bits

Looks reasonable as far as C integer rules go, though I too cannot check the bounds myself. :-) I stuffed it into BoringSSL and there wasn't a noticeable perf difference.

One observation about unsaturated limbs: the curve25519_64.c instances look like this, with x49 a uint64_t.

 fiat_25519_uint1 x50 = (fiat_25519_uint1)(x49 >> 51);

64 - 51 = 13 > 8, so the compiler may need to do a bit of work to prove it doesn't matter. In particular, the compiler doesn't know that the input bounds are tighter than uint64_t, so it's possible this would insert some bitmasks that otherwise wouldn't be needed. Perhaps this was the reason?

JasonGross

comment created time in 2 months
