If you are wondering where the data of this site comes from, please visit https://api.github.com/users/joewreschnig/events. GitMemory does not store any data, but only uses NGINX to cache data for a period of time. The idea behind GitMemory is simply to give users a better reading experience.

purcell/package-lint 152

A linting library for elisp package metadata

joewreschnig/auto-minor-mode 32

Enable Emacs minor modes by buffer name and contents

joewreschnig/clickhouse-jdbc 0

JDBC driver for ClickHouse

joewreschnig/go-gdpr 0

Golang support for the IAB's GDPR framework

joewreschnig/melpa 0

Recipes and build machinery for the biggest Emacs package repo

joewreschnig/pocketchip-batt 0

Battery polling service for PocketCHIP

joewreschnig/quelpa 0

Build and install your Emacs Lisp packages on-the-fly directly from source

joewreschnig/salt-mode 0

Emacs major mode for Salt States

joewreschnig/sarama 0

Sarama is a Go library for Apache Kafka 0.8, and up.

started danlamanna/live-awk-mode

started time in a month

issue closed prebid/go-gdpr

Return a concrete type from vendorconsent/tcf2 Parse functions

Hi,

I would prefer if the Parse and ParseString functions specific to TCF2 returned a ConsentMetadata rather than a boxed api.VendorConsents. This would allow easier access to e.g. the PurposeLITransparency method, and follow the general Go idiom of interfaces for arguments, structs for returns. Unfortunately I don't see how to do this in a compatible way, so I would like to discuss a) is this possible despite some breakage? and b) which kind of breakage is preferred?

The reason it doesn't seem possible is that ConsentMetadata is returned as, and boxed as, a non-pointer type. So a Parse(data []byte) (ConsentMetadata, error) would have to return an empty, invalid ConsentMetadata and break the code of anyone checking != nil, but updating the methods and using Parse(data []byte) (*ConsentMetadata, error) would change the type that gets boxed by the version-agnostic Parse.

Personally I think Parse(data []byte) (*ConsentMetadata, error) and using pointer receivers is the better API, but it also seems like the breakage more likely to cause runtime errors. (Both cause runtime issues if someone stored the return in an explicit api.VendorConsents, re-boxing it, but the pointer return also affects code that assumes it knows the exact type of a v2 consent boxed by the version-agnostic Parse.)

closed time in a month

joewreschnig

issue comment prebid/go-gdpr

Return a concrete type from vendorconsent/tcf2 Parse functions

I disagree the improvement is subjective; it has a runtime cost and requires noisy casting to use fundamental TCF2 features (#26). But, I understand that even objective improvements need to be balanced against (also objective) compatibility concerns.

joewreschnig

comment created time in a month

issue comment prebid/go-gdpr

Incorrect module version (v1.9.0) in https://proxy.golang.org/

Using a v2.x.y version will require changing the module path for maximum compatibility with go mod tooling, so if you take this route I would hope it includes other deprecations / incompatible fixes (e.g. #31) - otherwise you may be looking at v3 and another new package name "soon".

I'm not familiar with the details of retraction but if it would let the module get back on the v0.x.y path where compatibility expectations are not so strict, that's the route I would take.
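For illustration, a retraction would be a one-line go.mod change in the module root (supported since Go 1.16); the rationale comment here is an assumption:

```
module github.com/prebid/go-gdpr

go 1.16

// Hypothetical directive: tells go tooling to ignore the stray tag
// so version selection falls back to the v0.x.y line.
retract v1.9.0 // published to the proxy unintentionally
```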

joewreschnig

comment created time in a month

issue opened prebid/go-gdpr

Incorrect module version (v1.9.0) in https://proxy.golang.org/

There's a version of this module in the default proxy which doesn't seem to be in the repository itself.

$ GONOPROXY="*" go get github.com/prebid/go-gdpr
go get: upgraded github.com/prebid/go-gdpr v0.8.0 => v0.10.0
$ GOPROXY=https://goproxy.v10s.net/ go get github.com/prebid/go-gdpr
go get: upgraded github.com/prebid/go-gdpr v0.10.0 => v1.9.0

The associated checksums are:

github.com/prebid/go-gdpr v1.9.0 h1:VhthA8zFjbOA3ASltDK2PWBUWXBFwXXdyrabtndOsBU=
github.com/prebid/go-gdpr v1.9.0/go.mod h1:OfBxLfd+JfP3OAJ1MhI4JYAV3dSMQYT1QAb80DHpZFo=

And the contents appear to be the same as v0.9.0. It is the only such version in the proxy's version list.

This breaks go get, which will now only fetch this version and not v0.10.0 or any future versions unless explicitly asked.

created time in a month

push event joewreschnig/go-gdpr

Joe Wreschnig

commit sha 465610bb1614e5c9c71506a0fa1f29517ad3eaf9

Make `rangeConsent` and `rangeException` struct types (#30)

view details

push time in a month

pull request comment google/uuid

add Version4 factory to configure UUID generation

If the project had a long history of adding options I would agree, but the ctor interfaces have been stable for years. The surest way to end up with a complex set of options is to make an object to stash them in, instead of figuring out the fewer relevant primitives. (That was also the mistake in the pool API.)

> no existing functionality to have parity with.

This doesn't seem fair; the reason there's no zero-allocation ctor to claim parity with is that you took it out.

rittneje

comment created time in 2 months

Pull request review comment google/uuid

add Version4 factory to configure UUID generation

 package uuid

-import "io"
+import (
+	"crypto/rand"
+	"io"
+)

 // New creates a new random UUID or panics.  New is equivalent to
 // the expression
 //
 //    uuid.Must(uuid.NewRandom())
+//
+// Deprecated: Use *Version4.NewUUID() instead.

I especially object to this and the deprecation of NewRandom etc. - if the library is going to provide ctor sugar, it should at least provide the most common one as the shortest!

rittneje

comment created time in 2 months

PR opened google/uuid

Add `ReadRandom` and reimplement `NewRandomFromReader` using it

Combined with e.g. bufio.Reader this provides fast zero-allocation UUID generation if you do not consider heap memory particularly sensitive, without a global state or lock.

I guess this is my proposal re. the discussion about #80/#85/#86/#88. It's faster than both of them and the single new function opens up other optimization possibilities they don't, e.g. if you already have some pathological [1000]UUID you need to fill.

I didn't remove the original pool implementation because in the end that's a question of the module's specific commitment to its API compatibility rather than a technical one.

+27 -3

0 comment

2 changed files

pr created time in 2 months

pull request commentgoogle/uuid

add Version4 factory to configure UUID generation

Also, maybe more objectively than some point about API taste, it doesn't solve half of the performance issue the pool did, that of requiring an allocation per call to New:

func BenchmarkUUID_NewVersion4(b *testing.B) {
	b.RunParallel(func(pb *testing.PB) {
		v4 := Version4{bufio.NewReaderSize(rander, 16*16)}
		for pb.Next() {
			_, err := v4.New()
			if err != nil {
				b.Fatal(err)
			}
		}
	})
}
BenchmarkUUID_NewVersion4-4   	 6922863	       172.8 ns/op	      16 B/op	       1 allocs/op
rittneje

comment created time in 2 months

create branch joewreschnig/uuid

branch : read-random

created branch time in 2 months

pull request comment google/uuid

add Version4 factory to configure UUID generation

I have similar concerns about this that I have about the pool it's replacing - it's a big new API surface that doesn't really relate to "generating and inspecting" UUIDs. It's nothing that couldn't be done in a wrapper library or application, and doesn't abstract anything about UUIDs themselves.

This package should focus on efficient and correct UUID handling and making sure the UUID follows go best practices; it doesn't need to provide every sugared ctor.

rittneje

comment created time in 2 months

issue comment google/uuid

Deprecate EnableRandPool and DisableRandPool

The pool API looks pretty dangerous and unnecessary to me also.

The speed gain comes almost entirely from the buffered I/O, which did not require a new API. The remaining allocation is the UUID value itself, which could be avoided in a generally useful way by having a function that fills a caller-provided pointer:

func NewRandomFromReader(r io.Reader) (UUID, error) {
	var uuid UUID
	return uuid, ReadRandom(r, &uuid)
}

func ReadRandom(r io.Reader, uuid *UUID) error {
	_, err := io.ReadFull(r, uuid[:])
	if err != nil {
		return err
	}
	uuid[6] = (uuid[6] & 0x0f) | 0x40 // Version 4
	uuid[8] = (uuid[8] & 0x3f) | 0x80 // Variant is 10
	return nil
}
func BenchmarkUUID_NewBufio(b *testing.B) {
	b.RunParallel(func(pb *testing.PB) {
		r := bufio.NewReaderSize(rander, randPoolSize)
		for pb.Next() {
			_, err := NewRandomFromReader(r)
			if err != nil {
				b.Fatal(err)
			}
		}
	})
}

func BenchmarkUUID_ReadRandom(b *testing.B) {
	b.RunParallel(func(pb *testing.PB) {
		r := bufio.NewReaderSize(rander, randPoolSize)
		var uuid UUID
		for pb.Next() {
			err := ReadRandom(r, &uuid)
			if err != nil {
				b.Fatal(err)
			}
		}
	})
}
BenchmarkUUID_NewBufio-4    	 6769986	       176.0 ns/op	      16 B/op	       1 allocs/op
BenchmarkUUID_ReadRandom-4   	 7622210	       155.6 ns/op	       0 B/op	       0 allocs/op

These are also lock-free (or as lock-free as the reader allows) approaches. On my system ReadRandom consistently beats NewPooled, I assume because of less lock contention.

rittneje

comment created time in 2 months

push event joewreschnig/go-gdpr

Joe Wreschnig

commit sha 87f8d16b2392c5c8444f628f405c2d2d85dfab5d

Make `rangeException` a struct type

This has a similar effect on the performance of parsing TCF1 consents.

benchmark                                                                old ns/op     new ns/op     delta
BenchmarkParse/all_testcases-8                                           2536          1795          -29.22%
BenchmarkParse/case_bad_consent_v2_-_wrong_prefix,_must_start_with_C-8   9911          2519          -74.58%

benchmark                                                                old allocs    new allocs    delta
BenchmarkParse/case_bad_consent_v2_-_wrong_prefix,_must_start_with_C-8   7             6             -14.29%

benchmark                                                                old bytes     new bytes     delta
BenchmarkParse/all_testcases-8                                           6586          2115          -67.89%
BenchmarkParse/case_bad_consent_v2_-_wrong_prefix,_must_start_with_C-8   65709         16537         -74.83%

view details

push time in 2 months

issue opened prebid/go-gdpr

Return a concrete type from vendorconsent/tcf2 Parse functions

Hi,

I would prefer if the Parse and ParseString functions specific to TCF2 returned a ConsentMetadata rather than a boxed api.VendorConsents. This would allow easier access to e.g. the PurposeLITransparency method, and follow the general Go idiom of interfaces for arguments, structs for returns. Unfortunately I don't see how to do this in a compatible way, so I would like to discuss a) is this possible despite some breakage? and b) which kind of breakage is preferred?

The reason it doesn't seem possible is that ConsentMetadata is returned as, and boxed as, a non-pointer type. So a Parse(data []byte) (ConsentMetadata, error) would have to return an empty, invalid ConsentMetadata and break the code of anyone checking != nil, but updating the methods and using Parse(data []byte) (*ConsentMetadata, error) would change the type that gets boxed by the version-agnostic Parse.

Personally I think Parse(data []byte) (*ConsentMetadata, error) and using pointer receivers is the better API, but it also seems like the breakage more likely to cause runtime errors. (Both cause runtime issues if someone stored the return in an explicit api.VendorConsents, re-boxing it, but the pointer return also affects code that assumes it knows the exact type of a v2 consent boxed by the version-agnostic Parse.)

created time in 2 months

PR opened prebid/go-gdpr

Make `rangeConsent` a struct type

The virtual call and additional indirections outweigh the cost of storing and comparing a uint16. This reduces allocation count and memory usage enormously for v2 consents with lots of ranges.

benchmark                                                                         old ns/op     new ns/op     delta
BenchmarkParse/all_testcases-8                                                    3157          2536          -19.67%
BenchmarkParse/case_short_consent_v2_ok-8                                         462           424           -8.22%
BenchmarkParse/case_long_consent_v2_ok-8                                          462           428           -7.50%
BenchmarkParse/case_really_long_consent_v2_ok-8                                   19662         12845         -34.67%

benchmark                                                                         old allocs     new allocs     delta
BenchmarkParse/all_testcases-8                                                    31             5              -83.87%
BenchmarkParse/case_short_consent_v2_ok-8                                         8              7              -12.50%
BenchmarkParse/case_long_consent_v2_ok-8                                          8              7              -12.50%
BenchmarkParse/case_really_long_consent_v2_ok-8                                   313            22             -92.97%

benchmark                                                                         old bytes     new bytes     delta
BenchmarkParse/all_testcases-8                                                    7195          6586          -8.46%
BenchmarkParse/case_short_consent_v2_ok-8                                         330           284           -13.94%
BenchmarkParse/case_long_consent_v2_ok-8                                          330           284           -13.94%
BenchmarkParse/case_really_long_consent_v2_ok-8                                   11988         5388          -55.06%

Remove the original consentData from rangeSection as it’s unused and relatively large.

+25 -40

0 comment

2 changed files

pr created time in 2 months

create branch joewreschnig/go-gdpr

branch : range-parse-optimization

created branch time in 2 months

fork joewreschnig/go-gdpr

Golang support for the IAB's GDPR framework

fork time in 2 months

issue openedent/ent

Validators for Bytes fields

  • [x] I have searched the issues of this repository and believe that this is not a duplicate.

Summary 💡

We would like to configure validators for Bytes fields similar to string fields. Introducing a new method,

// Validate adds a validator for this field. Operation fails if the validation fails.
func (b *bytesBuilder) Validate(fn func([]byte) error) *bytesBuilder {
	b.desc.Validators = append(b.desc.Validators, fn)
	return b
}

would be sufficient for us. (It even seems to codegen something reasonable already when used, but I didn't run any tests to see if it's correct.)

We would also be able to use similar MinLen, MaxLen, and NotEmpty default validators. For our use case it would also be best if bytesBuilder#MaxLen also configured a validator, but since this would break compatibility I understand if that can't be done. In that case I would suggest only providing Validate as it would be confusing if MaxLen affected the descriptor and not the validator, but MinLen the validator and not the descriptor.

Motivation 🔦

We have some binary blobs which have some length requirements (e.g. not empty and < 20kb) we would like to enforce via ent's mutators and not ad hoc validation.

created time in 2 months