If you are wondering where the data on this site comes from, please visit https://api.github.com/users/jvehent/events. GitMemory does not store any data; it only uses NGINX to cache data for a period of time. The idea behind GitMemory is simply to give users a better reading experience.
Julien Vehent (jvehent) · @mozilla · Florida, USA · https://j.vehent.org · Firefox Operations Security; crypto nerd; sec tools coder.

jvehent/AFW 41

Advanced FireWall cookbook for Chef and Linux that uses iptables to dynamically configure inbound and outbound rules on each node.

jvehent/AutOssec 22

Ossec cookbook for Chef, with secure & automated key management

jvehent/dkimstatus 17

[THIS ROUNDCUBE PLUGIN IS UNMAINTAINED] DKIMSTATUS displays the status of the DKIM signature of each email you read in Roundcube.

jvehent/dmarc-parser 5

A Python parser for DMARC reports that connects to IMAP and pushes metrics to graphite

jvehent/cljs 4

Go module for working with Collection+JSON

jvehent/awesome-appsec 3

A curated list of resources for learning about application security

jvehent/1nw 2

url shortener written in Perl with Mojolicious

jvehent/awesome-ciandcd 2

continuous integration and continuous delivery

jvehent/awesome-devsecops 2

An authoritative list of awesome devsecops tools with the help from community experiments and contributions.

jvehent/chef-keymaster 2

distribute encryption keys across the nodes of an environment using Chef

started jvehent/service-go

started time in 9 days

pull request comment Securing-DevOps/invoicer-chapter2

change circleci repo

merging

frap

comment created time in 12 days

pull request comment Securing-DevOps/invoicer-chapter2

change circleci repo

merge frap/invoicer-chapter 2 as new repo

frap

comment created time in 12 days

issue comment mozilla/server-side-tls

Why do you recommend disabling ssl_session_tickets in NGINX?

nginx guarantees that it switches the built-in key on reload, but that's not really what you want. You want to rotate the keys rather than suddenly invalidate all of the previous ones.

To do the right thing, you have to script it externally and reference the keys with ssl_session_ticket_key:

http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_ticket_key

You need that if you want to sync them across servers, but you shouldn't need this for the simple case.

It's a similar situation as OCSP stapling where you can use an approach like https://github.com/tomwassenberg/certbot-ocsp-fetcher to persistently cache a valid response externally and reference it from the server, but they don't give you an easy way to do the right thing internally, which is strange.

Neither of these things would be particularly hard for them to implement properly internally. It would be simpler than what you need to do with scripts...

nodesocket

comment created time in 14 days

PR closed Securing-DevOps/invoicer-chapter2

Featbr1 change repo name

Featbr1 change repo name

+2 -1

0 comments

1 changed file

kabourne

pr closed time in 18 days

PR opened Securing-DevOps/invoicer-chapter2

Featbr1 change repo name

Featbr1 change repo name

+2 -1

0 comments

1 changed file

pr created time in 18 days

PR closed mozilla-services/pkcs7

Implement ParseRawData

Sometimes there can be cases where there is no Content Type, but the content is still known. When UEFI tooling creates signatures there is no header, but it should be SignedData regardless. However, the current Parse function is unable to handle it.

Example header from the pkcs7 library:

    0:d=0  hl=4 l=1979 cons: SEQUENCE
    4:d=1  hl=2 l=   9 prim: OBJECT            :pkcs7-signedData
   15:d=1  hl=4 l=1964 cons: cont [ 0 ]
   19:d=2  hl=4 l=1960 cons: SEQUENCE
   23:d=3  hl=2 l=   1 prim: INTEGER           :01
   26:d=3  hl=2 l=  13 cons: SET
   28:d=4  hl=2 l=  11 cons: SEQUENCE
   30:d=5  hl=2 l=   9 prim: OBJECT            :sha256
   41:d=3  hl=2 l=  11 cons: SEQUENCE
   43:d=4  hl=2 l=   9 prim: OBJECT            :pkcs7-data

Example header from UEFI signature tooling

    0:d=0  hl=4 l=1966 cons: SEQUENCE
    4:d=1  hl=2 l=   1 prim: INTEGER           :01
    7:d=1  hl=2 l=  15 cons: SET
    9:d=2  hl=2 l=  13 cons: SEQUENCE
   11:d=3  hl=2 l=   9 prim: OBJECT            :sha256
   22:d=3  hl=2 l=   0 prim: NULL
   24:d=1  hl=2 l=  11 cons: SEQUENCE
   26:d=2  hl=2 l=   9 prim: OBJECT            :pkcs7-data

Signed-off-by: Morten Linderud morten@linderud.pw
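As a rough illustration of the difference between the two dumps above, one could peek at the first element inside the outer SEQUENCE to decide whether a ContentInfo header is present (first element is the pkcs7-signedData OID) or whether the blob is a bare SignedData (first element is the INTEGER version, as in the UEFI output). `hasContentTypeHeader` is a hypothetical helper, not part of the PR:

```go
package main

import (
	"encoding/asn1"
	"errors"
	"fmt"
)

// hasContentTypeHeader reports whether a DER blob looks like a full
// ContentInfo (outer SEQUENCE starting with an OBJECT IDENTIFIER) as
// opposed to a bare SignedData (outer SEQUENCE starting with an INTEGER).
func hasContentTypeHeader(der []byte) (bool, error) {
	var outer asn1.RawValue
	if _, err := asn1.Unmarshal(der, &outer); err != nil {
		return false, err
	}
	if outer.Class != asn1.ClassUniversal || outer.Tag != asn1.TagSequence || len(outer.Bytes) == 0 {
		return false, errors.New("not a SEQUENCE")
	}
	// First byte of the inner content is the tag of the first element:
	// 0x06 = OBJECT IDENTIFIER (ContentInfo), 0x02 = INTEGER (SignedData).
	return outer.Bytes[0] == 0x06, nil
}

func main() {
	// SEQUENCE { OID 1.2.840.113549.1.7.2 } -- shaped like a ContentInfo header.
	full := []byte{0x30, 0x0b, 0x06, 0x09, 0x2a, 0x86, 0x48, 0x86, 0xf7, 0x0d, 0x01, 0x07, 0x02}
	// SEQUENCE { INTEGER 1 } -- shaped like a bare SignedData.
	bare := []byte{0x30, 0x03, 0x02, 0x01, 0x01}
	for _, der := range [][]byte{full, bare} {
		ok, err := hasContentTypeHeader(der)
		fmt.Println(ok, err)
	}
}
```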

+30 -0

0 comments

1 changed file

Foxboron

pr closed time in 18 days

PR closed Securing-DevOps/invoicer-chapter2

Featbr1

Testing with Docker build

+1 -1

0 comments

2 changed files

stods21

pr closed time in 21 days

PR opened Securing-DevOps/invoicer-chapter2

Featbr1

Testing with Docker build

+1 -1

0 comments

2 changed files

pr created time in 21 days

started jvehent/pineapple

started time in a month

started jvehent/haproxy-aws

started time in a month

started jvehent/haproxy-aws

started time in a month

issue comment mozilla/server-side-tls

TLS & Gzip on NGINX

I think this issue can be closed.

Is it still recommended that gzip be disabled over TLS connections?

Yes (if by "gzip" you actually meant TLS-layer compression)

RFC 7540 (HTTP/2) requires that TLS compression be disabled in TLS 1.2: https://tools.ietf.org/html/rfc7540#section-9.2.1

A deployment of HTTP/2 over TLS 1.2 MUST disable compression. TLS compression can lead to the exposure of information that would not otherwise be revealed [RFC3749]. Generic compression is unnecessary since HTTP/2 provides compression features that are more aware of context and therefore likely to be more appropriate for use for performance, security, or other reasons.

RFC 8447 (IANA Registry Updates for TLS and DTLS) marks NULL as the only TLS-layer compression value applicable to TLS 1.3 and later: https://tools.ietf.org/html/rfc8447

To make it clear that (D)TLS 1.3 has orphaned certain registries (i.e., they are only applicable to (D)TLS protocol versions prior to 1.3), IANA:

o has added the following to the TLS Compression Method Identifiers registry [RFC3749]:

Note: Value 0 (NULL) is the only value in this registry applicable to (D)TLS protocol version 1.3 or later.

codesport

comment created time in a month

PR opened Securing-DevOps/invoicer-chapter2

init circleci config

basic config change

+44 -44

0 comments

1 changed file

pr created time in a month

issue opened mozilla-services/pkcs7

Usage of PKCS7.Verify methods when no certificates are included in PKCS7 structure

Hello,

I am attempting to verify the signature of an AWS EC2 identity document as a signed PKCS#7 structure. The structure does not include any certificates, and as a result the PKCS7.VerifyWithChain and PKCS7.Verify methods fail with the error: pkcs7: No certificate for signer. The library returns this error when it fails to find a certificate with a serial number matching one of the signing certificates:

// verify.go, line 35.
func verifySignature(p7 *PKCS7, signer signerInfo, truststore *x509.CertPool) (err error) {
	signedData := p7.Content
	ee := getCertFromCertsByIssuerAndSerial(p7.Certificates, signer.IssuerAndSerialNumber)
	if ee == nil {
		return errors.New("pkcs7: No certificate for signer")
	}
	// snip.

Looking at section 9 of RFC 2315, it appears that including any certificates in the PKCS#7 structure is optional:

A recipient verifies the signatures by decrypting the encrypted
message digest for each signer with the signer's public key, then
comparing the recovered message digest to an independently computed
message digest. The signer's public key is either contained in a
certificate included in the signer information, or is referenced by
an issuer distinguished name and an issuer-specific serial number
that uniquely identify the certificate for the public key.

I expected that I could provide the signer certificates via the x509.CertPool argument of PKCS7.VerifyWithChain, but I realized that pool is used for path validation. It feels odd that callers need to modify the parsed data structure before calling the verify methods on it.

In this case, I can populate the PKCS7.Certificates field after parsing the PKCS#7. I can set it equal to a slice containing the certificate(s) I expect the PKCS#7's content to be signed with.

There does not appear to be any documentation about this use case... Is that how it should be used?

Thank you.

created time in a month

started jvehent/Postscreen-Stats

started time in 2 months

PR opened mozilla-services/pkcs7

Support verifying legacy DSA signatures in Go 1.16

Go 1.16 removed support for DSA signatures from crypto/x509. This change gives applications that need to verify legacy PKCS7 certificates a little more time to migrate off of DSA.

+51 -1

0 comments

1 changed file

pr created time in 2 months

PR opened mozilla-services/pkcs7

Compatibility with Microsoft Azure certificates.

Some PKCS7 responses from Microsoft Azure's instance metadata service (IMDS)[0] use the "encrypt+digest" OID for the digest OID.

This change allows verification of these signatures.

[0] https://docs.microsoft.com/en-us/azure/virtual-machines/linux/instance-metadata-service?tabs=linux#attested-data
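The mapping this implies could be sketched as follows. `normalizeDigestOID` is a hypothetical helper (the PR's actual implementation may differ), using the standard OIDs for sha256WithRSAEncryption (the combined "encrypt+digest" identifier) and plain sha256:

```go
package main

import (
	"encoding/asn1"
	"fmt"
)

// Standard OIDs: the combined signature identifier seen in the Azure
// IMDS responses, and the plain digest identifier a verifier expects.
var (
	oidSHA256        = asn1.ObjectIdentifier{2, 16, 840, 1, 101, 3, 4, 2, 1}
	oidSHA256WithRSA = asn1.ObjectIdentifier{1, 2, 840, 113549, 1, 1, 11}
)

// normalizeDigestOID maps a combined signature OID found in a
// SignerInfo's digest-algorithm field back to the plain digest OID,
// so hash selection can proceed as usual.
func normalizeDigestOID(oid asn1.ObjectIdentifier) asn1.ObjectIdentifier {
	if oid.Equal(oidSHA256WithRSA) {
		return oidSHA256
	}
	return oid
}

func main() {
	fmt.Println(normalizeDigestOID(oidSHA256WithRSA).Equal(oidSHA256))
}
```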

+12 -5

0 comments

1 changed file

pr created time in 2 months

started jvehent/haproxy-aws

started time in 2 months

pull request comment mozilla-services/pkcs7

Support more encryption algorithms for key

In #48, I also tried to add support for different hash functions that can be specified through parameters, per RFC 4055, section 4.1.

lsattem

comment created time in 3 months