Rod Vagg (rvagg) · require.io · NSW, Australia · http://r.va.gg · Awk Ninja; Yak Shaving Rock Star

microsoft/ChakraCore 8253

ChakraCore is the core part of the Chakra JavaScript engine that powers Microsoft Edge

ded/reqwest 2872

browser asynchronous http requests

nodejs/nan 2797

Native Abstractions for Node.js

fat/bean 1386

an events api for javascript

ded/bonzo 1304

library agnostic, extensible DOM utility

ded/qwery 1104

a query selector engine

microsoft/node-v0.12 797

Enable Node.js to use Chakra as its JavaScript engine.

ded/morpheus 500

A Brilliant Animator

justmoon/node-bignum 408

Big integers for Node.js using OpenSSL

isaacs/st 372

A node module for serving static files. Does etags, caching, etc.

pull request comment ipfs/go-hamt-ipld

Extend documentation around Flush and cached nodes

@warpfork the thing that's confused me about the cache is that it's difficult to see it being involved in mutation. Looking at it now, I think that confusion comes from the fact that there's a Put() inside modifyValue(), but that's only used in the full-bucket-overflow-to-new-node case where a new node is created and it needs a CID for Link. In the other cases - delete or insert - it looks like it modifies the child and otherwise leaves it alone.

So maybe these docs should extend into modifyValue() to explain that; I'd have found that very useful when trying to grok what the cache is trying to do.

Minor nit: the rest of the docs are capped at 80 chars; these additions take a much more relaxed approach to length.

warpfork

comment created time in 4 hours

issue comment ipld/js-bitcoin

feat: release and move into ipld org

Moved, but this needs to be brought in sync with current multiformats; that's where it got stalled. Perhaps you can help with that, because I still haven't got my head around the new ESM stack and am unsure how much I should pull it in to libraries like this.

mikeal

comment created time in 5 hours

pull request comment nodejs/node-gyp

Release proposal: v7.1.0

done

rvagg

comment created time in a day

delete branch nodejs/node-gyp

delete branch: rvagg/v7.1.0-proposal

delete time in a day

PR merged nodejs/node-gyp

Release proposal: v7.1.0
  • [aaf33c3029] - build: add update-gyp script (Samuel Attard) #2167
  • [3baa4e4172] - (SEMVER-MINOR) gyp: update gyp to 0.4.0 (Samuel Attard) #2165
  • [f461d56c53] - (SEMVER-MINOR) build: support apple silicon (arm64 darwin) builds (Samuel Attard) #2165
  • [ee6fa7d3bc] - docs: note that node-gyp@7 should solve Catalina CLT issues (Rod Vagg) #2156
  • [4fc8ff179d] - doc: silence curl for macOS Catalina acid test (Chia Wei Ong) #2150
  • [7857cb2eb1] - deps: increase "engines" to "node" : ">= 10.12.0" (DeeDeeG) #2153
+11 -1

0 comment

2 changed files

rvagg

pr closed time in a day

push event nodejs/node-gyp

Rod Vagg

commit sha c60379690e0d0b34d4941d535a13f69d55d1a9ce

v7.1.0: bump version and update changelog

view details

push time in a day

created tag nodejs/node-gyp

tag v7.1.0

Node.js native addon build tool

created time in a day

pull request comment nodejs/node-gyp

build: support apple silicon (arm64 darwin) builds

https://github.com/nodejs/node-gyp/pull/2192

MarshallOfSound

comment created time in 2 days

PR opened nodejs/node-gyp

Release proposal: v7.1.0
  • [aaf33c3029] - build: add update-gyp script (Samuel Attard) #2167
  • [3baa4e4172] - (SEMVER-MINOR) gyp: update gyp to 0.4.0 (Samuel Attard) #2165
  • [f461d56c53] - (SEMVER-MINOR) build: support apple silicon (arm64 darwin) builds (Samuel Attard) #2165
  • [ee6fa7d3bc] - docs: note that node-gyp@7 should solve Catalina CLT issues (Rod Vagg) #2156
  • [4fc8ff179d] - doc: silence curl for macOS Catalina acid test (Chia Wei Ong) #2150
  • [7857cb2eb1] - deps: increase "engines" to "node" : ">= 10.12.0" (DeeDeeG) #2153
+11 -1

0 comment

2 changed files

pr created time in 2 days

push event nodejs/node-gyp

Rod Vagg

commit sha c60379690e0d0b34d4941d535a13f69d55d1a9ce

v7.1.0: bump version and update changelog

view details

push time in 2 days

create branch nodejs/node-gyp

branch: rvagg/v7.1.0-proposal

created branch time in 2 days

pull request comment ipfs/go-hamt-ipld

Load-time block format validation

Fixed CI bug with Go 1.11 and squashed this into just the two commits on the HEAD of this branch. Review should focus on those two, 04503fda1 particularly (the other just adds a coverage target to Makefile).

rvagg

comment created time in 2 days

push event rvagg/go-hamt-ipld

Rod Vagg

commit sha 93007412a8f0724b2db083513ab7b2c884fdbb5b

chore: add coverage make target

view details

Rod Vagg

commit sha 04503fda140206a765c1f1aea288ffdbdf09dbd7

feat: strict validation on block load

view details

Rod Vagg

commit sha 7597825e3cdbbd0f7c831fac8f7734325daa3d88

BREAKING CHANGE: change Pointer CBOR representation to a kinded union

From:

type Pointer union {
	&Node "0"
	Bucket "1"
} representation keyed

i.e. {"0": CID} or {"1": [KV...]}

To:

type Pointer union {
	&Node link
	Bucket list
} representation kinded

i.e. CID or [KV...]

Also removes redundant refmt tags

Closes: https://github.com/ipfs/go-hamt-ipld/issues/53

view details

push time in 2 days

push event rvagg/go-hamt-ipld

Steven Allen

commit sha 21886a1edb600df1cc6cead007fcba408f47af91

Merge pull request #50 from ipfs/chore/update-cid

chore: update go-cid

view details

Rod Vagg

commit sha fc84b8f84189c48d0c11c2ab113620bc37d630c4

chore: docs inline

view details

Rod Vagg

commit sha 3d5b3607bc9dfb5f2385b202c0845591397c2fa3

fixup! chore: docs inline

view details

Steven Allen

commit sha 58f187ca68846a9af494ef47aa1289af83f0b7c5

Merge pull request #52 from rvagg/rvagg/docs

Documentation

view details

Rod Vagg

commit sha ef9df7f06db818fb3a0ec8976b203765a5ad4346

chore: add printHamt() for debugging

view details

Rod Vagg

commit sha faf6c31e28c78daeb03e0d9ec5956bb05d816425

chore: refactor cleanChild() for clarity & closer match to CHAMP semantics

Closes: https://github.com/ipfs/go-hamt-ipld/issues/56

view details

Rod Vagg

commit sha 93007412a8f0724b2db083513ab7b2c884fdbb5b

chore: add coverage make target

view details

Rod Vagg

commit sha 04503fda140206a765c1f1aea288ffdbdf09dbd7

feat: strict validation on block load

view details

push time in 2 days

push event rvagg/go-hamt-ipld

Rod Vagg

commit sha 38dbb0ed7465b5aa3a2fa54aa9603d9e931d5c80

chore: docs inline

view details

Rod Vagg

commit sha 986fbd6c546ff467e12bcbb2d5bad88dd6b532bc

chore: add printHamt() for debugging

view details

Rod Vagg

commit sha 2da92e4df1099e50f058c52fa1eeac45cf25d69d

chore: refactor cleanChild() for clarity & closer match to CHAMP semantics

Closes: https://github.com/ipfs/go-hamt-ipld/issues/56

view details

Rod Vagg

commit sha 1ab4f58bb75ae8560a36816cfb45ef9f91f79575

chore: add coverage make target

view details

Rod Vagg

commit sha 8b2afd8f94e359014a411be1d10d497f3ab7c0e8

feat: strict validation on block load

view details

push time in 2 days

push event rvagg/go-hamt-ipld

Rod Vagg

commit sha de6061cc7db9dbf7a4772d2b02edabc9157df96f

fixup! feat: strict validation on block load

view details

push time in 2 days

issue comment ipfs/go-hamt-ipld

Proposal: change block layout for Pointers

done in #60

rvagg

comment created time in 2 days

PR opened ipfs/go-hamt-ipld

Change Pointer CBOR representation to a kinded union (breaking change)

From:

type Pointer union {
	&Node "0"
	Bucket "1"
} representation keyed

i.e. {"0": CID} or {"1": [KV...]}

To:

type Pointer union {
	&Node link
	Bucket list
} representation kinded

i.e. CID or [KV...]

Also removes redundant refmt tags.

Diff is hard to see because this bundles unmerged PRs #56 and #59. Only the last commit contains the main change here; currently that's cb07286.

Because TestMalformedHamt has some manually generated CBOR, you can see the change in the block layout there and the simplification this introduces.

Closes: #53

+680 -146

0 comment

6 changed files

pr created time in 2 days

create branch rvagg/go-hamt-ipld

branch: rvagg/pointer-kinded

created branch time in 2 days

Pull request review comment ipfs/go-hamt-ipld

Load-time block format validation

 func (p *Pointer) loadChild(ctx context.Context, ns cbor.IpldStore, bitWidth int
 // of this library to remain the defaults long-term and have strategies in
 // place to manage variations.
 func LoadNode(ctx context.Context, cs cbor.IpldStore, c cid.Cid, options ...Option) (*Node, error) {
-	// TODO(rvagg): loaded nodes must be validated to make sure we have only
-	// the correct form of Nodes to avoid attacks from alternative implementations
-	// that feed us poorly formed data. Check that:
-	// 1. Pointers contains the correct number of elements defined by Bitfield
-	// 2. Pointers contain *only* a link or a bucket (this may already be done in
-	// the CBOR unmarshal but might be worth doing here so the check is all in
-	// one place)
-	// 3. Pointers with links have are DAG-CBOR multicodec
-	// 4. KV buckets contain strictly between 1 and bucketSize elements
-	// 5. KV buckets are ordered by key (bytewise comparison)
-	// 6. keys and values are valid (what are the rules? len(key)>0? can val be
-	// nul? etc.)
-	// 7. .. potentially we could validate the position of elements if we propagate
-	// the depth of this node so we know which bits to chomp off the hash digest.
+	return loadNode(ctx, cs, c, true, defaultBitWidth, defaultHashFunction, options...)
+}
+
+// internal version of loadNode that is aware of whether this is a root node or
+// not for the purpose of additional validation on non-root nodes.
+func loadNode(
+	ctx context.Context,
+	cs cbor.IpldStore,
+	c cid.Cid,
+	isRoot bool,
+	bitWidth int,
+	hashFunction func([]byte) []byte,
+	options ...Option,
+) (*Node, error) {
+
 	var out Node
 	if err := cs.Get(ctx, c, &out); err != nil {
 		return nil, err
 	}
 
 	out.store = cs
-	out.bitWidth = defaultBitWidth
-	out.hash = defaultHashFunction
+	out.bitWidth = bitWidth
+	out.hash = hashFunction
 	// apply functional options to node before using
 	for _, option := range options {
 		option(&out)
 	}
 
+	// Validation
+
+	// too many elements in the data array for the configured bitWidth?
+	if len(out.Pointers) > 1<<out.bitWidth {
+		return nil, ErrMalformedHamt
+	}
+
+	// the bifield is lying or the elements array is
+	if out.bitsSetCount() != len(out.Pointers) {
+		return nil, ErrMalformedHamt
+	}
+
+	for _, ch := range out.Pointers {
+		isLink := ch.isShard()
+		isBucket := ch.KVs != nil
+		if !((isLink && !isBucket) || (!isLink && isBucket)) {
+			return nil, ErrMalformedHamt
+		}
+		if isLink && ch.Link.Type() != cid.DagCBOR { // not dag-cbor

This is possibly the most controversial item in here, so I want to highlight it. We're validating that all links to child HAMT nodes (not links it might contain as values, only HAMT nodes) are of codec DAG-CBOR. It's just another check to get some assurance that we're not being fed bogus data in the structure; it doesn't go very far, but we're asserting that this HAMT has homogeneity in its block generation, for now. This could be changed in the future if we allowed some variability in codec or wanted to have a migration path.

rvagg

comment created time in 2 days

pull request comment nodejs/node

build: move compiling for Windows ARM64 to Tier 2

I guess discussion here should focus on how close this comes to our Tier 2 definition:

  • Tier 2: These platforms represent smaller segments of the Node.js user base. The Node.js Build Working Group maintains infrastructure for full test coverage. Test failures on tier 2 platforms will block releases. Infrastructure issues may delay the release of binaries for these platforms.

Given that there's an opt-in checkbox in a Jenkins subjob that gets compilation happening, does it get close enough? Do we need to extend that a bit more and make a daily test job against master to get closer? I don't imagine many people beyond one or two being aware of the tickbox or thinking to run these jobs regularly.

It strictly meets "The Node.js Build Working Group maintains infrastructure for full test coverage", but "Test failures on tier 2 platforms will block releases" suggests that it should be tied into our CI in such a way that failures would be noticed by someone preparing for a release.

joaocgreis

comment created time in 2 days

pull request comment nodejs/build

jenkins: add ARM64 Windows

That's one nasty hack in the version selector to get the checkbox to work! It's probably better than building much more of the logic into Jenkins, but it might not be a good pattern to encourage unless we can find a way to make it a bit cleaner.

Is the intention to leave the checkbox only at the fanned-arm64 job and its subjobs, or does it get migrated all the way up the job call stack?

And what's the path to a more permanent CI presence? Do we have any word on Windows ARM64 hosting options? Do we even imagine that possibility on the horizon, or are we going to be in a similar position to our Pi cluster, with Windows laptops in offices?

joaocgreis

comment created time in 2 days

Pull request review comment ipfs/go-hamt-ipld

Load-time block format validation

 func (p *Pointer) loadChild(ctx context.Context, ns cbor.IpldStore, bitWidth int
 		return p.cache, nil
 	}
 
-	out, err := LoadNode(ctx, ns, p.Link)
+	out, err := loadNode(ctx, ns, p.Link, false, UseTreeBitWidth(bitWidth), UseHashFunction(hash))

I've added 2 more arguments to the function so they get passed in explicitly; it gets a bit verbose, but it's not terrible as it is now. We're probably going to want bucketSize at some point too, so this might start to get out of hand soon, and pulling these up into a config object might be a good refactor.

rvagg

comment created time in 3 days

Pull request review comment ipfs/go-hamt-ipld

Load-time block format validation

 const defaultBitWidth = 8
 
 // ErrNotFound is returned when a Find operation fails to locate the specified
 // key in the HAMT
-var ErrNotFound = fmt.Errorf("not found")
+var ErrNotFound = fmt.Errorf("Not found")

👍 thanks, fixed

rvagg

comment created time in 3 days

push event rvagg/go-hamt-ipld

Rod Vagg

commit sha c0b6c447361b039b74e35f35b1a35145679f579a

fixup! feat: strict validation on block load

view details

push time in 3 days

push event rvagg/go-hamt-ipld

Rod Vagg

commit sha 4edcc24ffe88d8ba5d36ff0cd8ff2aa6940286b3

fixup! feat: strict validation on block load

view details

push time in 3 days

push event rvagg/go-hamt-ipld

Rod Vagg

commit sha 165d09cb8e30493393347f409b0d1681731057f2

fixup! feat: strict validation on block load

view details

push time in 3 days

pull request comment ipfs/go-hamt-ipld

Load-time block format validation

Ready for review now. This includes 2 commits from #57, the cleanChild() changes. It adds a bunch of code to LoadNode() and pulls the loading functionality out into a new loadNode() so we can toggle root/non-root for different validation. There's a minor change that gets into the new cleanChild() because we no longer need to do one bit of validation there now. There's a new method in uhamt.go for counting the total number of set bits in the bitfield. There's a big-ol' test case named TestMalformedHamt which pushes this around with different incoming CBOR blocks. Some utilities in hamt_util_test.go, plus some context-specific ones inside the test, let us manually build CBOR that forces validation errors in various ways. There's also sanity-checking along the way to make sure we're not just hitting "this is all just bad CBOR" type errors.

The HAMT has (almost) zero chill now about malformed blocks. If a block smells bad then it's rejected, no pretending. One item that could be checked, but is not (for now) because it's so complicated and costly, is making sure that entries are in their right positions in the graph given their keys. If that's important it could be added later and called on demand. For now this is block-local only.

rvagg

comment created time in 3 days
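A minimal sketch of the kind of set-bit count the comment refers to (it shows up as out.bitsSetCount() in the review diff further up the page), assuming the bitmap is held in a math/big Int as discussed later in this feed; the standalone function here is illustrative, not go-hamt-ipld's actual code:

package main

import (
	"fmt"
	"math/big"
	"math/bits"
)

// bitsSetCount counts the bits set in a bitfield held in a big.Int; the
// load-time validation compares this count against the length of the
// Pointers array. Illustrative sketch only.
func bitsSetCount(bf *big.Int) int {
	count := 0
	for _, w := range bf.Bits() { // the raw words backing the value
		count += bits.OnesCount(uint(w))
	}
	return count
}

func main() {
	bf := new(big.Int)
	bf.SetBit(bf, 0, 1) // element present at index 0
	bf.SetBit(bf, 8, 1) // element present at index 8
	fmt.Println(bitsSetCount(bf)) // 2, so Pointers must have exactly 2 entries
}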

push event rvagg/go-hamt-ipld

Rod Vagg

commit sha ab2f022229f98e4676d4ec39aa1ee78612814794

fixup! feat: strict validation on block load

view details

push time in 3 days

PR closed rvagg/polendina

feat: print page errors
+3 -0

1 comment

1 changed file

mikeal

pr closed time in 3 days

pull request comment rvagg/polendina

feat: print page errors

5c1cfa9 with minor mods, but no release because tests aren't running .. yay

mikeal

comment created time in 3 days

push event rvagg/polendina

Mikeal Rogers

commit sha 5c1cfa921b9cea35c018a8d65d86304f613eb759

feat: print page errors

view details

push time in 3 days

issue closed nodejs/node-gyp

How to add libraries on a node addon.

I'm attempting to create a Node addon that uses a JSON C++ library. I've attempted to use libraries (referenced in #328) as well as the include dir, even including it directly, but the build fails on locating the package.

Here's my gyp file.

{
    "targets": [
        {
            "target_name": "bindings",
            "sources": ["analyzer.cc"]
        },
    ],
    "include_dirs": [
    	"lib/nlohmann"
    ],
    "libraries": [
    	"-lnlohmann", "-L/home/lukaswilkeer/Documents/dev/nodejs/stock-parser/lib/nlohmann"
    ]
}

What should I do, and how?

closed time in 3 days

lukaswilkeer

issue comment nodejs/node-gyp

How to add libraries on a node addon.

Try doing a -l<absolute path to library file>; this is not an ideal arrangement, but sometimes you don't have much choice but to link to an absolute location. The ideal would be to have multiple targets and link them together. See https://github.com/Level/leveldown/blob/master/binding.gyp for how this might be done if you have control over the library you're linking against - that one links against a "dependency" which has its own .gyp file, and that dependency even links against another dependency which also has its own .gyp file. So if you're bundling some C++ and publishing it as a Node addon, and you expect users to compile both the library and your addon wrapper, then try to get it into a state where it can all be compiled and linked together by GYP rather than this manual linking arrangement.

lukaswilkeer

comment created time in 3 days

PR closed nodejs/node-gyp

gyp: add missing extensions to compile_commands_json
Checklist
Description of change

Extends compile_commands_json generator to consider more file extensions than just c and cc. This commit adds the cpp and cxx extensions to the list of input source files.

+2 -2

1 comment

1 changed file

manuel-arguelles

pr closed time in 3 days

pull request comment nodejs/node-gyp

gyp: add missing extensions to compile_commands_json

Sorry to do this, but this needs to move over to https://github.com/nodejs/gyp-next, which we vendor in here; we no longer touch the code in gyp/ except for pulling in new versions from over there.

manuel-arguelles

comment created time in 3 days

PR opened ipfs/go-hamt-ipld

WIP load-time block format validation

Includes #57, the first two commits. The changes so far are only in LoadNode and in the tests.

Adds some simple helper functions to the tests to help manually build CBOR blobs that'll load and trigger various cases. I only got as far as ensuring that the bitmap and the number of elements aren't out of alignment. That can get stricter with #54 if we put in precisely the size of bitmap that we need for the array of elements.

Other things I'm working on in here are listed at the bottom of hamt_test.go. The aim is that this thing should refuse to load from blocks that smell funny (i.e. aren't exactly the right form). There should only be one way of representing this data, and there should be no avenues for variation. That's an ideal and not strictly possible in the extreme sense, for various reasons, but we can get close.

+407 -43

0 comment

4 changed files

pr created time in 5 days

push event rvagg/go-hamt-ipld

Rod Vagg

commit sha faf6c31e28c78daeb03e0d9ec5956bb05d816425

chore: refactor cleanChild() for clarity & closer match to CHAMP semantics

Closes: https://github.com/ipfs/go-hamt-ipld/issues/56

view details

Rod Vagg

commit sha dc6c9ac7039f8ef47bab6e19595d6d93cc305575

chore: add coverage make target

view details

Rod Vagg

commit sha eac033ed0a98279f5824fc70882a67004955ac29

feat: strict validation on block load

view details

push time in 5 days

push event rvagg/go-hamt-ipld

Rod Vagg

commit sha faf6c31e28c78daeb03e0d9ec5956bb05d816425

chore: refactor cleanChild() for clarity & closer match to CHAMP semantics

Closes: https://github.com/ipfs/go-hamt-ipld/issues/56

view details

push time in 5 days

push event rvagg/go-hamt-ipld

Rod Vagg

commit sha fc84b8f84189c48d0c11c2ab113620bc37d630c4

chore: docs inline

view details

Rod Vagg

commit sha 3d5b3607bc9dfb5f2385b202c0845591397c2fa3

fixup! chore: docs inline

view details

Steven Allen

commit sha 58f187ca68846a9af494ef47aa1289af83f0b7c5

Merge pull request #52 from rvagg/rvagg/docs

Documentation

view details

Rod Vagg

commit sha ef9df7f06db818fb3a0ec8976b203765a5ad4346

chore: add printHamt() for debugging

view details

Rod Vagg

commit sha 18c8cac4638daa0ca1ed430ee5abd000085ef4bb

chore: refactor cleanChild() for clarity & closer match to CHAMP semantics

Closes: https://github.com/ipfs/go-hamt-ipld/issues/56

view details

Rod Vagg

commit sha b2027f22f20d7606b507cd9ed24d255f9d2e1e9d

chore: add coverage make target

view details

Rod Vagg

commit sha 5e15de4a8f7085c38332cf8fcdce0a4663f00577

feat: strict validation on block load

view details

push time in 5 days

pull request comment ipfs/go-hamt-ipld

Refactor cleanChild()

rebased to master and autosquashed

rvagg

comment created time in 5 days

Pull request review comment ipfs/go-hamt-ipld

Refactor cleanChild()

 func (n *Node) Set(ctx context.Context, k string, v interface{}) error {
 	return n.modifyValue(ctx, &hashBits{b: n.hash(kb)}, kb, d)
 }
 
-func (n *Node) cleanChild(chnd *Node, cindex byte) error {
-	l := len(chnd.Pointers)
-	switch {
-	case l == 0:
-		return fmt.Errorf("incorrectly formed HAMT")
-	case l == 1:
-		// TODO: only do this if its a value, cant do this for shards unless pairs requirements are met.
+// the number of links to child nodes this node contains
+func (n *Node) directChildCount() int {
+	count := 0
+	for _, p := range n.Pointers {
+		if p.isShard() {
+			count++
+		}
+	}
+	return count
+}
 
-		ps := chnd.Pointers[0]
-		if ps.isShard() {
-			return nil
+// the number of KV entries this node contains
+func (n *Node) directKVCount() int {
+	count := 0
+	for _, p := range n.Pointers {
+		if !p.isShard() {
+			count = count + len(p.KVs)
 		}
+	}
+	return count
+}
 
-		return n.setChild(cindex, ps)
-	case l <= arrayWidth:
-		var chvals []*KV
-		for _, p := range chnd.Pointers {
-			if p.isShard() {
-				return nil
-			}
+// This happens after deletes to ensure that we retain canonical form for the
+// given set of data this HAMT contains. This is a key part of the CHAMP
+// algorithm. Any node that could be represented as a bucket in a parent node
+// should be collapsed as such. This collapsing process could continue back up
+// the tree as far as necessary to represent the data in the minimal HAMT form.
+// This operation is done from a parent perspective, so we clean the child
+// below us first and then our parent cleans us.
+func (n *Node) cleanChild(chnd *Node, cindex byte) error {

going to leave the byte/int thing for a separate PR

rvagg

comment created time in 5 days

push event rvagg/go-hamt-ipld

Wes Morgan

commit sha cfefbadf833c06349cc7dd977ea46a67ebf3fe20

Return CBOR decode error in Find

view details

Steven Allen

commit sha f3547695a9a20334a5f3bc204fb2e7083dda40bd

Merge pull request #45 from cap10morgan/patch-1

Return CBOR decode error in Find

view details

Jeromy

commit sha 620412ad59132f1e7a81b1f9dd13aae022033151

update cbor-gen code to fix nested nil deferred unmarshals

view details

Whyrusleeping

commit sha d53d20a7063e88956255bedd1cce99816439c940

Merge pull request #46 from ipfs/fix/deferred-kv-nil

update cbor-gen code to fix nested nil deferred unmarshals

view details

Jeromy

commit sha 0d6c7e3582b56b533915d5acd4c2b34a7c3abf16

update to latest cbor-gen strategiues

view details

Whyrusleeping

commit sha 0310ad2b0b1f880f8412150aa7dd9b1e6b9fb79d

Merge pull request #48 from ipfs/feat/new-cbor-gen

update to latest cbor-gen strategiues

view details

Steven Allen

commit sha 4d3002f36e05a0201a16af6c9a80b040879c59e3

feat: allow custom hash functions

This will allow HAMTs to be safely used in contexts where attackers can choose the keys.

view details

Steven Allen

commit sha bc91f4a85307f43250c5027e39c4733a83dea097

Merge pull request #49 from ipfs/fix/custom-hash

feat: allow custom hash functions

view details

Steven Allen

commit sha 8aebad915d5f0d4f41e441df93dcef55f30a9966

chore: update go-cid

view details

Steven Allen

commit sha 21886a1edb600df1cc6cead007fcba408f47af91

Merge pull request #50 from ipfs/chore/update-cid

chore: update go-cid

view details

Rod Vagg

commit sha fc84b8f84189c48d0c11c2ab113620bc37d630c4

chore: docs inline

view details

Rod Vagg

commit sha 3d5b3607bc9dfb5f2385b202c0845591397c2fa3

fixup! chore: docs inline

view details

Steven Allen

commit sha 58f187ca68846a9af494ef47aa1289af83f0b7c5

Merge pull request #52 from rvagg/rvagg/docs

Documentation

view details

Rod Vagg

commit sha ef9df7f06db818fb3a0ec8976b203765a5ad4346

chore: add printHamt() for debugging

view details

Rod Vagg

commit sha 18c8cac4638daa0ca1ed430ee5abd000085ef4bb

chore: refactor cleanChild() for clarity & closer match to CHAMP semantics

Closes: https://github.com/ipfs/go-hamt-ipld/issues/56

view details

push time in 5 days

push event rvagg/go-hamt-ipld

Wes Morgan

commit sha ca51fd92bec942869111c021b1d1370c2ba0c3d8

Return CBOR decode error in Find

view details

Jeromy

commit sha b7702108b0fc8fa6f4f36f14b5d1c33cf52ce564

update cbor-gen code to fix nested nil deferred unmarshals

view details

Jeromy

commit sha 3135fb4ce7a8201fce869cd5e05e34900aff64b4

update to latest cbor-gen strategiues

view details

Steven Allen

commit sha 3932ad7a7f1b3138b47b17355e0af65b260ad57c

feat: allow custom hash functions

This will allow HAMTs to be safely used in contexts where attackers can choose the keys.

view details

Steven Allen

commit sha 668e340e1c04756d860afa471bda07838f2d43dd

chore: update go-cid

view details

Rod Vagg

commit sha f5dd79bbb4ee315ea3ad99683db68ef70b0310f1

chore: docs inline

view details

Rod Vagg

commit sha 67db21e1cdbd2856462b424b118a3ae3d03f22ae

chore: add printHamt() for debugging

view details

Rod Vagg

commit sha 853bd9c49bed04f0e626d2e5e0df5a04e718b538

chore: refactor cleanChild() for clarity & closer match to CHAMP semantics

Closes: https://github.com/ipfs/go-hamt-ipld/issues/56

view details

push time in 5 days

push event rvagg/go-hamt-ipld

Rod Vagg

commit sha fc84b8f84189c48d0c11c2ab113620bc37d630c4

chore: docs inline

view details

Rod Vagg

commit sha 3d5b3607bc9dfb5f2385b202c0845591397c2fa3

fixup! chore: docs inline

view details

Steven Allen

commit sha 58f187ca68846a9af494ef47aa1289af83f0b7c5

Merge pull request #52 from rvagg/rvagg/docs

Documentation

view details

Rod Vagg

commit sha 8c99bbeda6f21adc1532a9fefa619b00fee5dae4

chore: add printHamt() for debugging

view details

Rod Vagg

commit sha 92eaf7011e927a31e03fd71ab205f41997eae2f0

chore: refactor cleanChild() for clarity & closer match to CHAMP semantics

Closes: https://github.com/ipfs/go-hamt-ipld/issues/56

view details

Rod Vagg

commit sha e96eff747065cbabd8e66fada4ecb5e0cece9e03

fixup! chore: refactor cleanChild() for clarity & closer match to CHAMP semantics

view details

push time in 5 days

create branch rvagg/go-hamt-ipld

branch: rvagg/load-validation

created branch time in 5 days

issue comment nodejs/node-gyp

wrong include path generated for module node-addon-api

@nodejs/addon-api hey, the docs for node-addon-api have the @ but we don't use it for the NAN docs; perhaps this should change? See https://github.com/nodejs/node-addon-api/blob/master/doc/setup.md. I don't think it's necessary if it's just one item being returned, not an array, and in this case it seems to be to blame for doing some extra escaping on Windows, which I don't get, but maybe @nodejs/gyp understands.

Jamol

comment created time in 6 days

issue comment ipfs/go-hamt-ipld

TODO: clarify bitfield format; size, ordering, etc.

Changed the title of this after playing a bit with the format. Dropping some thoughts here as I explore this.

The bitfield at the moment uses a big.Int which can spit out a kind of big-endian format (some of the guts are in https://golang.org/src/math/big/nat.go with the API in https://golang.org/src/math/big/int.go). We're using the Bytes() and SetBytes() methods for serialization and deserialization. What this seems to give us is the most compact big-endian representation of the number it holds. We're using SetBit() to set individual bits on and off to indicate the presence or absence of an element in our data array, so the number is arbitrary; it's the bits that matter.

The maximum length of the bitfield should be 2^bitWidth, to hold enough bits to cover all the indexing we need for any node. So a bitWidth of 8 gives us 256 bits needed to store our bitmap data. If we were to turn on all the bits because we have a full data array, we'd end up serializing 0xff... for 32 bytes, i.e. 256 1's. But if we only tinker with the first 8 bits, then we only need to serialize one byte. e.g. if we only had an element at index 0 then our bitfield would be a single byte, 0x01, but if we only set the 8th bit then we need two bytes and would serialize 0x0100, and so on.

Filecoin only uses a bitWidth of 5, so that's 32 bits, or 4 bytes, needed to represent the bitfield.

Some thoughts about this format:

  • It's somewhat Go specific. It's convenient to get these from a big.Int and set them to a big.Int, but it's going to be slightly annoying for everyone else unless they have something already that works in exactly the same way. The ideal internal representation is for a node to have a bitfield of exactly 2^bitWidth bits ready and available to set and unset bits on. The convenience of big.Int bypasses this entirely, but that's not going to be the same story across languages. I have https://github.com/rvagg/iamap/blob/master/bit-utils.js for this in JS, but to go to and from this serialization format I'd have to be trimming left-most bytes that contain zeros on the way in and padding them back on the way out.
  • The randomness of the hash algorithm means that the chances of setting the 32nd bit are the same as setting the 1st; there's no bias toward smaller bits built in here. So we buy some compaction in the serialization by using this truncated format, but it's only going to get us so far in a HAMT that's got more than a few values in it.
  • In Filecoin the bitWidth is 5, so 32 bits, or a maximum of 4 bytes in the serialization format of this field. There will be some cases with only 3, fewer with 2 and even fewer with just 1. We're saving bytes, but not many, and at the cost of complexity for everyone but Go implementers.
  • It's hard to validate canonical block form. The current implementation will (probably) take an arbitrarily long byte array and turn it into a valid big.Int. It'll treat as valid a block that has a byte array 1000 long in the position for the bitfield (I believe big.Int can handle this kind of arbitrary size). But then it should round-trip it back out as just-long-enough if the block were re-serialized. (So this is in a similar category to the problems suggested in https://github.com/filecoin-project/specs/issues/1045.)

I don't have a strong opinion here yet and would like to hear others' thoughts. My personal preference would be for it to be stable and consistent, with the bitfield byte array in CBOR being exactly 2^bitWidth bits (exactly 4 bytes, every time, for Filecoin) so that serialization, validation and explanation of this spec are simpler than they currently are. I doubt that the number of bytes being saved here is very meaningful, but it's not zero.

@warpfork @Stebalien @anorth thoughts?

rvagg

comment created time in 6 days
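The truncation behaviour described in that comment is easy to demonstrate directly with math/big; padToWidth below is a hypothetical helper for the fixed-size form proposed at the end, not anything that exists in go-hamt-ipld:

package main

import (
	"fmt"
	"math/big"
)

// padToWidth left-pads big.Int's minimal big-endian encoding out to a fixed
// (1<<bitWidth)/8 bytes: the stable, always-the-same-length form proposed
// above. Hypothetical helper, not part of go-hamt-ipld.
func padToWidth(bf *big.Int, bitWidth uint) []byte {
	size := (1 << bitWidth) / 8
	out := make([]byte, size)
	b := bf.Bytes() // minimal form: left-most zero bytes are trimmed
	copy(out[size-len(b):], b)
	return out
}

func main() {
	bf := new(big.Int)
	bf.SetBit(bf, 0, 1)
	fmt.Printf("%x\n", bf.Bytes()) // "01": a single byte is enough

	bf.SetBit(bf, 0, 0)
	bf.SetBit(bf, 8, 1)
	fmt.Printf("%x\n", bf.Bytes()) // "0100": setting the 8th bit needs two bytes

	// bitWidth=5 (Filecoin): always 4 bytes, regardless of which bits are set
	fmt.Printf("%x\n", padToWidth(bf, 5)) // "00000100"
}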

Pull request review comment ipfs/go-hamt-ipld

Refactor cleanChild()

 func TestOverflow(t *testing.T) {
 	}
 }
 
+func TestFillAndCollapse(t *testing.T) {
+	ctx := context.Background()
+	cs := cbor.NewCborStore(newMockBlocks())
+	root := NewNode(cs, UseHashFunction(identityHash))

:thumbsup: good idea, have added that and am also adding it for the next set of tests I'm writing

re arrayWidth: it could be configurable but isn't yet. I quite like 3 as a value, but it's one of those parameters that can be tuned for different results depending on what we're optimising for.

rvagg

comment created time in 6 days

push event rvagg/go-hamt-ipld

Rod Vagg

commit sha 5f33c4216d2d0c2132060bf0af60e0667e292c78

fixup! chore: refactor cleanChild() for clarity & closer match to CHAMP semantics

view details

push time in 6 days

Pull request review comment ipfs/go-hamt-ipld

Refactor cleanChild()

 func (n *Node) Set(ctx context.Context, k string, v interface{}) error {
 	return n.modifyValue(ctx, &hashBits{b: n.hash(kb)}, kb, d)
 }
 
-func (n *Node) cleanChild(chnd *Node, cindex byte) error {
-	l := len(chnd.Pointers)
-	switch {
-	case l == 0:
-		return fmt.Errorf("incorrectly formed HAMT")
-	case l == 1:
-		// TODO: only do this if its a value, cant do this for shards unless pairs requirements are met.
+// the number of links to child nodes this node contains
+func (n *Node) directChildCount() int {
+	count := 0
+	for _, p := range n.Pointers {
+		if p.isShard() {
+			count++
+		}
+	}
+	return count
+}
 
-		ps := chnd.Pointers[0]
-		if ps.isShard() {
-			return nil
+// the number of KV entries this node contains
+func (n *Node) directKVCount() int {
+	count := 0
+	for _, p := range n.Pointers {
+		if !p.isShard() {
+			count = count + len(p.KVs)
 		}
+	}
+	return count
+}
 
-		return n.setChild(cindex, ps)
-	case l <= arrayWidth:
-		var chvals []*KV
-		for _, p := range chnd.Pointers {
-			if p.isShard() {
-				return nil
-			}
+// This happens after deletes to ensure that we retain canonical form for the
+// given set of data this HAMT contains. This is a key part of the CHAMP
+// algorithm. Any node that could be represented as a bucket in a parent node
+// should be collapsed as such. This collapsing process could continue back up
+// the tree as far as necessary to represent the data in the minimal HAMT form.
+// This operation is done from a parent perspective, so we clean the child
+// below us first and then our parent cleans us.
+func (n *Node) cleanChild(chnd *Node, cindex byte) error {

fwiw #52 has a ton more documentation on some of this stuff

But the byte thing is a good point! It also precludes adding support for bitWidths greater than 8 (which may have questionable benefit, but I added up to 16 in mine so I could push some boundaries in the 8-10 range). I don't know why it's a byte or why that effort is involved here; we could probably back that out without any drama.

rvagg

comment created time in 6 days

issue comment nodejs/node-gyp

wrong include path generated for module node-addon-api

Odd, I guess the \ is being treated as escaping. What happens if you remove the @?

Jamol

comment created time in 6 days

Pull request review comment ipfs/go-hamt-ipld

Refactor cleanChild()

 func (n *Node) Set(ctx context.Context, k string, v interface{}) error {
 	return n.modifyValue(ctx, &hashBits{b: n.hash(kb)}, kb, d)
 }
 
-func (n *Node) cleanChild(chnd *Node, cindex byte) error {
-	l := len(chnd.Pointers)
-	switch {
-	case l == 0:
-		return fmt.Errorf("incorrectly formed HAMT")
-	case l == 1:
-		// TODO: only do this if its a value, cant do this for shards unless pairs requirements are met.
+// the number of links to child nodes this node contains
+func (n *Node) directChildCount() int {

I plan to use these two to help with load-time validation so their utility is beyond the clean algorithm, btw

rvagg

comment created time in 7 days

push event rvagg/go-hamt-ipld

Steven Allen

commit sha bc91f4a85307f43250c5027e39c4733a83dea097

Merge pull request #49 from ipfs/fix/custom-hash

feat: allow custom hash functions

view details

Steven Allen

commit sha 8aebad915d5f0d4f41e441df93dcef55f30a9966

chore: update go-cid

view details

Steven Allen

commit sha 21886a1edb600df1cc6cead007fcba408f47af91

Merge pull request #50 from ipfs/chore/update-cid

chore: update go-cid

view details

Rod Vagg

commit sha 9ba88a6c503b5c6f30de1b9c60d4486c02ae7939

chore: add printHamt() for debugging

view details

Rod Vagg

commit sha 01a1eaf1e681dc7086bef631d4fc2f35ca3b1c80

chore: refactor cleanChild() for clarity & closer match to CHAMP semantics

Closes: https://github.com/ipfs/go-hamt-ipld/issues/56

view details

push time in 7 days

PR opened ipfs/go-hamt-ipld

Refactor cleanChild()

Closes: https://github.com/ipfs/go-hamt-ipld/issues/56

I've tried to push cleanChild() around in the various ways I was suspecting it might be flawed, but couldn't make it misbehave, so I'm going to call it good. Added more testing for CHAMP compaction along with a printHamt() that can show you the kinds of shapes you have.

So in this test we do things like:

‣ bafy2bzaceaabidljpjqxkb3bwz2kymapl6w6zazcaehtwh55d3solznpudfji:                                                                                     
  ⇶ [ AAAAAA11, AAAAAA12, AAAAAA21 ]                                                                                                                  

overflows when we add another colliding element to:

‣ bafy2bzacec2e7uxlmpn5bzfjfhrjbvgkgyfbi76k43epdijp626i3baaao436:                                                                                     
  ‣ bafy2bzacebgcvdzkq4glgp2kl65oyz65w2qb6ptbzjwxgbvnw2hwzs25bgeaw:                                                                                   
    ‣ bafy2bzacect7ahd4hdy24w7jfrttzgx5mh264lzobxdq5dyisqeelol6dia5o:                                                                                 
      ‣ bafy2bzacebxdmdcvhpnpsiia632eiov45yynuijl3faw7zzfzbupg4tombhnu:                                                                               
        ‣ bafy2bzacecrptawydwevr4rt3w4mbvuvh6wteh2w3kplwohkzu64zip5lslc2:                                                                             
          ‣ bafy2bzacedxu2dpomrsnqgj45pbkgyxrc4nujwfgjrwlpm3nkjvfomhkmqang:                                                                           
            ‣ bafy2bzaceadtlk6fe4pishrnyakhlz32iiakebraxudz34r4o5um3sgof3azs:                                                                         
              ⇶ [ AAAAAA11, AAAAAA12 ]                                                                                                                
              ⇶ [ AAAAAA21, AAAAAA22 ]                                                                                                                

and then back again.

It also tests a mid-way state where collisions happen at half that tree height with a collection of elements that collide in another way:

‣ bafy2bzaceacaykqvidugqydd2lskm7ajgdionqzgh22odxcmtfo6jkj6legqe:
  ‣ bafy2bzacear4ozwsojhwrjc3yroecudxdchelx74k2h6xlgzwtjsrpgo7sz3o:
    ‣ bafy2bzacec4owi2jruh53fve4vn5tyzg4yw7lfmrsuvshdrlsk4rdaa3nnbga:
      ‣ bafy2bzacedvt2qjfegib2k7bzfj4oxe4nxy46amtfwmybhhqjv4vs56xlv6gq:
        ‣ bafy2bzacecknjra3gxqdsaydwjwhgkh3pvun4kphn3xgpm6ek36mg3kzf3zrc:
          ⇶ [ AAA11AA ]
          ⇶ [ AAA12AA ]
          ⇶ [ AAA13AA ]
          ⇶ [ AAA14AA ]
        ‣ bafy2bzacecrptawydwevr4rt3w4mbvuvh6wteh2w3kplwohkzu64zip5lslc2:
          ‣ bafy2bzacedxu2dpomrsnqgj45pbkgyxrc4nujwfgjrwlpm3nkjvfomhkmqang:
            ‣ bafy2bzaceadtlk6fe4pishrnyakhlz32iiakebraxudz34r4o5um3sgof3azs:
              ⇶ [ AAAAAA11, AAAAAA12 ]
              ⇶ [ AAAAAA21, AAAAAA22 ]

and rolls that back out to see if it behaves properly. And it does.

I've refactored cleanChild() to a state that I think has more clarity about the various states it can be in and reasons it has to take action or not. I think this more closely matches the CHAMP semantics if you want to read the paper and see those semantics in this code. Maybe that's subjective, but I think I've reduced the number of branches here which should in itself clear up some confusion.

+278 -30

0 comment

4 changed files

pr created time in 7 days

push event rvagg/go-hamt-ipld

Rod Vagg

commit sha 98f92f8858c6d156522b130f9cc73aac9c8fc383

chore: add printHamt() for debugging

view details

Rod Vagg

commit sha 13083626a446e12ebc09e5d82247622c468529fd

chore: refactor cleanChild() for clarity & closer match to CHAMP semantics

Closes: https://github.com/ipfs/go-hamt-ipld/issues/56

view details

push time in 7 days

push event rvagg/go-hamt-ipld

Steven Allen

commit sha b61527af811e6598d799bd08f544d6cf485cf71c

chore: update go-cid

view details

Rod Vagg

commit sha 15c859aa4e4d30ed0b7922c38ad024efb41a010d

chore: add printHamt() for debugging

view details

Rod Vagg

commit sha 72b159abe349140af930740aef25c92eef8a0039

chore: refactor cleanChild() for clarity & closer match to CHAMP semantics

Closes: https://github.com/ipfs/go-hamt-ipld/issues/56

view details

push time in 7 days

create branch rvagg/go-hamt-ipld

branch: rvagg/cleanChild-refactor

created branch time in 7 days

Pull request review comment nodejs/build

ansible: metrics server (WIP) & removal of Cloudflare cache bypass

+#!/usr/bin/env node
+
+const { pipeline, Transform } = require('stream')
+const split2 = require('split2')
+const strftime = require('strftime').timezone(0)
+const {Storage} = require('@google-cloud/storage');
+
+const storage = new Storage({keyFilename: "metrics-processor-service-key.json"});
+
+const jsonStream = new Transform({
+  readableObjectMode: true,
+  transform (chunk, encoding, callback) {
+    try {
+      this.push(JSON.parse(chunk.toString()))
+      callback()
+    } catch (e) {
+      callback(e)
+    }
+  }
+})
+
+const extensionRe = /\.(tar\.gz|tar\.xz|pkg|msi|exe|zip|7z)$/
+const uriRe = /(\/+(dist|download\/+release)\/+(node-latest\.tar\.gz|([^/]+)\/+((win-x64|win-x86|x64)?\/+?node\.exe|(x64\/)?node-+(v[0-9.]+)[.-]([^? ]+))))/
+const versionRe = /^v[0-9.]+$/
+
+function determineOS (path, file, fileType) {
+  if (/node\.exe$/.test(file)) {
+    return 'win'
+  } else if (/\/node-latest\.tar\.gz$/.test(path)) {
+    return 'src'
+  } else if (fileType == null) {
+    return ''
+  } else if (/msi$/.test(fileType) || /^win-/.test(fileType)) {
+    return 'win'
+  } else if (/^tar\..z$/.test(fileType)) {
+    return 'src'
+  } else if (/^headers\.tar\..z$/.test(fileType)) {
+    return 'headers'
+  } else if (/^linux-/.test(fileType)) {
+    return 'linux'
+  } else if (fileType === 'pkg' || /^darwin-/.test(fileType)) {
+    return 'osx'
+  } else if (/^sunos-/.test(fileType)) {
+    return 'sunos'
+  } else if (/^aix-/.test(fileType)) {
+    return 'aix'
+  } else {
+    return ''
+  }
+}
+
+function determineArch (fileType, winArch, os) {
+  if (fileType != null) {
+    if (fileType.indexOf('x64') >= 0 || fileType === 'pkg') {
+      // .pkg for Node.js <= 0.12 were universal so may be used for either x64 or x86
+      return 'x64'
+    } else if (fileType.indexOf('x86') >= 0) {
+      return 'x86'
+    } else if (fileType.indexOf('armv6') >= 0) {
+      return 'armv6l'
+    } else if (fileType.indexOf('armv7') >= 0) { // 4.1.0 had a misnamed binary, no 'l' in 'armv7l'
+      return 'armv7l'
+    } else if (fileType.indexOf('arm64') >= 0) {
+      return 'arm64'
+    } else if (fileType.indexOf('ppc64le') >= 0) {
+      return 'ppc64le'
+    } else if (fileType.indexOf('ppc64') >= 0) {
+      return 'ppc64'
+    } else if (fileType.indexOf('s390x') >= 0) {
+      return 's390x'
+    }
+  }
+
+  if (os === 'win') {
+    // we get here for older .msi files and node.exe files
+    if (winArch && winArch.indexOf('x64') >= 0) {
+      // could be 'x64' or 'win-x64'
+      return 'x64'
+    } else {
+      // could be 'win-x86' or ''
+      return 'x86'
+    }
+  }
+
+  return ''
+}
+
+const logTransformStream = new Transform({
+  writableObjectMode: true,
+  transform (chunk, encoding, callback) {
+    if (chunk.ClientRequestMethod !== 'GET' ||
+        chunk.EdgeResponseStatus < 200 ||
+        chunk.EdgeResponseStatus >= 300) {
+      return callback()
+    }
+
+    if (chunk.EdgeResponseBytes < 1024) { // unreasonably small for something we want to measure
+      return callback()
+    }
+
+    if (!extensionRe.test(chunk.ClientRequestPath)) { // not a file we care about
+      return callback()
+    }
+
+    const requestPath = chunk.ClientRequestPath.replace(/\/\/+/g, '/')
+    const uriMatch = requestPath.match(uriRe)
+    if (!uriMatch) { // what is this then?
+      return callback()
+    }
+
+    const path = uriMatch[1]
+    const pathVersion = uriMatch[4]
+    const file = uriMatch[5]
+    const winArch = uriMatch[6]
+    const fileVersion = uriMatch[8]
+    const fileType = uriMatch[9]
+
+    let version = ''
+    // version can come from the filename or the path, filename is best
+    // but it may not be there (e.g. node.exe) so fall back to path version
+    if (versionRe.test(fileVersion)) {
+      version = fileVersion
+    } else if (versionRe.test(pathVersion)) {
+      version = pathVersion
+    }
+
+    const os = determineOS(path, file, fileType)
+    const arch = determineArch(fileType, winArch, os)
+
+    const line = []
+    line.push(strftime('%Y-%m-%d', new Date(chunk.EdgeStartTimestamp / 1000 / 1000))) // date
+    line.push(chunk.ClientCountry.toUpperCase()) // country
+    line.push('') // state/province, derived from chunk.EdgeColoCode probably
+    line.push(chunk.ClientRequestPath) // URI
+    line.push(version) // version
+    line.push(os) // os
+    line.push(arch) // arch
+    line.push(chunk.EdgeResponseBytes)
+
+    this.push(`${line.join(',')}\n`)
+
+    callback()
+  }
+})
+
+
+exports.processLogs = (data, context, callback) => {
+  console.log('Node version is: ' + process.version);
+  const file = data;
+  bucketName = file.bucket;
+  fileName = file.name;
+  console.log("DATA " + data);
+  console.log("BUCKET " + bucketName);
+  console.log("FILENAME " + fileName);
+  processedFile = fileName.split(".")[0];
+  processedFile = processedFile.split("_")[0].concat("_", processedFile.split("_")[1]);
+  console.log("PROCESSEDFILENAME " + processedFile);
+
+  storage.bucket(bucketName).file(file.name).createReadStream()
+  .on('error', function(err) { console.error(err) })

Feel free to keep on pushing your changes to this branch; at least that way we can have a discussion around the same code.

One of the tricky things about custom streams is that you always have to be aware of calling the callback on every path through your code in transform() and flush(). Failure to do so will cause weird errors involving back-pressure with a delayed effect, so it can be hard to track down. So the jsonStream and logTransformStream are worth studying to find codepaths that may not properly trigger a callback(), possibly even in the case of an uncaught exception, which will just halt that execution path.

You do have one obvious instance of a failure to call callback though, at the bottom in the failure case; maybe it should be callback(err) after the console.log("PIPELINE HAS FAILED", err), just for completeness. But presumably you'd see that somewhere in the output (not necessarily right at the end though).

rvagg

comment created time in 8 days

Pull request review comment ipld/js-ipld-ethereum

fix: replace node buffers with uint8arrays

 const createUtil = (codec, deserialize) => {
     /**
      * Deserialize Ethereum block into the internal representation.
      *
-     * @param {Buffer} serialized - Binary representation of a Ethereum block.
+     * @param {Uint8Array|Array<Uint8Array>} serialized - Binary representation of a Ethereum block.
      * @returns {Object}
      */
-    deserialize,
+    deserialize: (serialized) => {

Is this change because upstream only supports Buffers and they also support arrays of Buffers?

Probably needs a const { Buffer } = require('buffer') too, right?

achingbrain

comment created time in 8 days

issue comment rvagg/ghauth

Deprecated OAuth Authorization API

🤦 what a mess; so we're going to need to get a user's browser involved now for this to work. Yeah, I'd look at a PR that dealt with that; there's a lot that depends on this library working, and even the brownouts they're talking about are going to be annoying. This must be why the new gh tool from GitHub has a terrible auth experience. Thanks for raising this, I wasn't aware of it.

bcomnes

comment created time in 8 days

pull request comment filecoin-project/specs

Piece description update

Hey, while we're doing this, if we end up with the longer description of how to get to CommP, how about we take the opportunity to insert a definition of "Fr32" into the specs because it's going to keep getting asked. @ribasushi was the latest one to ask just 2 days ago. Combining and rephrasing parts of @porcuquine's responses we could come up with something like:

The term Fr32 is derived from the name of a struct Filecoin uses to represent the elements of the arithmetic field of a pairing-friendly curve, specifically Bls12-381, which justifies the use of 32 bytes. F stands for "Field", while r is simply a mathematical letter-as-variable substitution used to denote the modulus of this particular field.

yiannisbot

comment created time in 8 days
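For concreteness, the standard BLS12-381 scalar field modulus behind that justification (this number comes from the curve's definition, not from the thread above) is:

r = 0x73eda753299d7d483339d80809a1d80553bda402fffe5bfeffffffff00000001
  = 52435875175126190479447740508185965837690552500527637822603658699938581184513

a 255-bit prime, so every element of the field F_r fits in 32 bytes.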

issue comment nodejs/build

libuv: stop using `-j` flag to `cmake --build .` with `make`, and/or use `ninja`

👍 thanks for catching that @vtjnash. We have a $JOBS on each of our CI machines and normally use that whenever we use -j or JOBS=, but that's been left off in this instance. I've inserted it back and you should see a -j 2 now on these runs, I think.

https://ci.nodejs.org/job/libuv-test-commit-osx-cmake/177/ is a repeat of the last green run, https://ci.nodejs.org/job/libuv-test-commit-osx-cmake/174/, to confirm this is working properly. Please close this issue if you agree it's fixed now.

vtjnash

comment created time in 8 days

push event rvagg/go-hamt-ipld

Rod Vagg

commit sha 3d5b3607bc9dfb5f2385b202c0845591397c2fa3

fixup! chore: docs inline

view details

push time in 8 days

Pull request review comment ipfs/go-hamt-ipld

Documentation

 func (n *Node) getValue(ctx context.Context, hv *hashBits, k string, cb func(*KV
 		return chnd.getValue(ctx, hv, k, cb)
 	}
 
+	// if not isShard, then the key/value pair is local and we need to retrieve
+	// it from the bucket. The bucket is sorted but only between 1 and
+	// `bucketSize` in length, so no need for fanciness.
 	for _, kv := range c.KVs {
 		if string(kv.Key) == k {
 			return cb(kv)
 		}
+		// TODO: getting here would indicate a malformed HAMT, return error of some

👍 good catch

rvagg

comment created time in 8 days

issue comment multiformats/js-cid

Provide non-generic constructor methods

I don't buy the arguments you have for retaining constructors; making this thing feel like an ArrayBuffer would probably mean we've failed to make it a nice abstraction. Just because it might be built as an abstraction over ArrayBuffer doesn't mean we need to expose that to users; there's not a good reason to leak that upward. If you put that aside, you run out of reasons to have both constructors and static factory functions. So I'd just ditch constructors entirely, or make a single-use constructor of some kind that's justifiable. I'd be more than happy with static factory methods that were explicit about what they do.

I'm a big +1 wherever we can reduce argument overloading. It's one of the biggest curses that jQuery gave JavaScript culture, one we still haven't shaken, and it keeps on leading to APIs that cause far more harm than the convenience they supposedly offer.

Would this work?

declare class CID {
   static from(string|ArrayBuffer|ArrayBufferView|CID):CID
   static from(buffer:ArrayBuffer, byteOffset:number=0, byteLength:number=buffer.byteLength):CID
   static create(version:number, code:number, hash:Uint8Array):CID
}

It's a big jump from js-cid, a simple one for js-multiformats in its early stage though.

CID.from(cid) could presumably serve as the formerly proposed CID.asCID(cid), wouldn't it? Just a return cid where it's deemed that the passed argument is what it needs to be, otherwise instantiate a new instance using what can be derived from it. Or would we need a form that lets us perform a full clone?

Gozala

comment created time in 8 days

pull request comment multiformats/multicodec

Add P-384

Anything above the single-byte range is wide open; I see two slots that might be nice (this is mostly a subjective call given the space available!). After x11, we could probably start a block in 0x12XX for common curve pubkeys. There's also space we could work on in 0xb5XX; the preceding poseidon entries are somewhat related (in that they involve a curve).

OR13

comment created time in 8 days

issue opened ipfs/go-hamt-ipld

Fix cleanChild() algorithm

Pulling this out of a TODO attached to cleanChild() in #52 because I think it might be the most important thing to address with the current impl:

// TODO(rvagg): I don't think the logic here is correct. A compaction should
// occur strictly when: there are no links to child nodes remaining (assuming
// we've cleaned them first and they haven't caused a cascading collapse to
// here) and the number of direct elements (actual k/v pairs) in this node is
// equal to bucketSize+1. Anything less than bucketSize+1 is invalid for a node
// other than the root node (which probably won't have cleanChild() called on
// it). e.g.
// https://github.com/rvagg/iamap/blob/fad95295b013c8b4f0faac6dd5d9be175f6e606c/iamap.js#L333
// If we perform this depth-first, then it's possible to see the collapse
// cascade upward such that we end up with some parent node with a bucket with
// only bucketSize elements. The canonical form of the HAMT requires that
// any node that could be collapsed into a parent bucket is collapsed.

This is more than just a perf improvement and needs to be addressed - either to prove that my concerns are wrong, or get the algorithm correct so it's not pushing beyond CHAMP invariants.

created time in 8 days

issue comment filecoin-project/go-amt-ipld

Collapse sparse AMTs

But I think that it's differentiating from structures like this that's the problem with the simple compaction:

     root
    /    \
a: v   b: o 
         /
      e: o
         \
       f: vvv

The whole index needs to contribute to locating the item; otherwise you can't be sure whether you addressed the right one.

     root 
    /    \
a: v   d: vvv (height +2)

An index 011000000111, where that initial root->d selection is the first part, 011, and the d->v selection is the last part, 111, would also be matched by a look-up for 011111111111, so you need a way to consider the missing middle bits, or at least check whether the end you've reached is the one you want. When it's fully spanning you can assert that, because you're never discarding pieces of data. With a HAMT you are discarding pieces of the hash because you can reach the end before exhausting the hash, which is why we need to store the key with the value and perform the key<>key check. We'd need a mechanism to do the same here.

Stebalien

comment created time in 8 days
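To make the missing-middle-bits problem concrete, here's a small Go sketch of slicing an index into the 3-bit per-level selectors a width=8 AMT uses; digits() is an illustrative helper, not go-amt-ipld code:

package main

import "fmt"

// digits splits an AMT index into 3-bit selectors, most significant first,
// one per level of a width=8 tree of the given height. Illustrative only.
func digits(index uint64, height int) []uint64 {
	out := make([]uint64, height)
	for i := height - 1; i >= 0; i-- {
		out[i] = index & 0x7 // the low 3 bits select the slot at the deepest level
		index >>= 3
	}
	return out
}

func main() {
	fmt.Println(digits(0b011000000111, 4)) // [3 0 0 7]: root->d is 011, d->v is 111
	fmt.Println(digits(0b011111111111, 4)) // [3 7 7 7]: same first and last selectors
	// a collapsed root->d link that keeps only the first and last selectors
	// cannot tell these two indexes apart without the middle digits
}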

issue comment filecoin-project/go-amt-ipld

Collapse sparse AMTs

Here's an algorithm which might work:

Any height>0 node that only addresses a single terminal element within its sub-tree gets that element inlined in its parent along with the index of that element.

So you'd end up with intermediate nodes that have both links to sub-trees as well as inlined values, (conceptually) like [CID, CID, [1820422,<value>], CID], where the only indexes we need to store are ones at height>0. And you'd get to deal with perverse cases like a single large-indexed element only needing a single node for storage (rather than all the blocks needed to form full height).

So as you navigate down through the structure and hit one of these things that's an array rather than a CID (so these become a kinded union of link and array), you just check the index against the one you're traversing to, and that tells you whether this is the entry you care about or whether that entry doesn't exist. The canonical form of the data structure would require compaction whenever a deletion created a situation where there was a leaf with only one element (accounting for that makes this a little awkward because you'd need to perform some traversal to discover the full count, but because all sub-trees are like this the traversal would be minimal). Then when you perform insertions and hit one of these compacted nodes, you'd need to push it maximally downward until it's on its own or is at height=0.

So there's some funky testing required to ensure that such an algorithm could remain stable and result in canonical form for any given set of data regardless of how you got there.

Stebalien

comment created time in 9 days

pull request comment multiformats/js-cid

feat: expose codec code and allow construction by code

squashed and rebased to master

rvagg

comment created time in 9 days

push event multiformats/js-cid

Alex Potsides

commit sha a7ae250761a4888b82abccc3d9455a140ff7bad6

fix: replace node buffers with uint8arrays (#117) Relaxes input from requiring node `Buffer`s to being `Uint8Array`s. This also means that the `.buffer` (now `.bytes`) and `.prefix` properties have changed to be `Uint8Array`s. BREAKING CHANGES: - node `Buffer`s have been replaced with `Uint8Array`s - the `.buffer` property has been renamed to `.bytes` and is now a `Uint8Array` - the `.prefix` property is now a `Uint8Array`

view details

Rod Vagg

commit sha 845b2dfe4ed16794b4c1a7fe7a026818b21df155

feat: expose codec code and allow construction by code Ref: https://github.com/multiformats/js-cid/pull/117#issuecomment-668131658

view details

push time in 9 days

push event multiformats/js-cid

Rod Vagg

commit sha bcae55581e1cbdf8ae7be33e7f1e858d1f7606d8

fixup! feat: expose codec code and allow construction by code

view details

push time in 9 days

Pull request review comment multiformats/js-cid

feat: expose codec code and allow construction by code

  export type Version = 0 | 1 export type Codec = string+export type CodecCode = number export type Multihash = Uint8Array export type BaseEncodedString = string export type MultibaseName = string  declare class CID<a> {   constructor(Version, Codec, Multihash, multibaseName?:MultibaseName): void;+  constructor(Version, CodecCode, Multihash, multibaseName?:MultibaseName): void;   constructor(BaseEncodedString): void;   constructor(Uint8Array): void;    +codec: Codec;+	+code: CodecCode;

:thumbsup: thanks

rvagg

comment created time in 9 days

issue comment filecoin-project/go-amt-ipld

Collapse sparse AMTs

Collapsing is a bit hard when we don't store the index with the elements. Without being able to slice an index into all of its components (3 bits per level for this width=8 impl), we can't know which particular entry we're at. When the height gets collapsed, we lose track of which pieces of an index we need to discard.

If we stored the index along with the value ([index,value] at each leaf) then we could do a check when we reach a point where the full depth is short-circuited, to see whether the index we care about is the index found (not the same? then the one you want doesn't exist). This would add an int+array for each terminal element being stored in the AMT, which might not be too much of a cost to pay if it means ditching a bunch of intermediates.

Another option is to inline-compact: maybe a rule where, if a branch only has a single element (or some other number), then don't link to it, inline it. You'd end up with nodes that look like (conceptually): [bmap, [CID, CID, [bmap, [entry]], CID]]. You retain the nested array structure, which gives you the depth information you can use to determine the final index. The algorithm would remain stable: you just skip the traversal, and you end up paying the cost of inlining the full structure but avoid the cost of a CID (or many, because you could nest a lot of these: [[[bmap, [entry]]]]).

Costs everywhere, so whether these are worth it would depend on the nature of the data being stored.
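For the inline-compact option, a node might (conceptually, not as a proposed encoding) take a shape like:

// a branch with a single entry is embedded rather than linked; the nesting
// depth preserves the index bits that a flat collapse would throw away
const node = [
  0b1101, // bmap
  [
    'CID-A',               // populated sub-tree, linked as usual
    [0b0010, [['entry']]], // single-entry child, inlined one level deep
    'CID-B'                // another linked sub-tree
  ]
]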

Stebalien

comment created time in 9 days

issue opened filecoin-project/go-amt-ipld

Configurable width

The width provides the primary tuneable parameter, allowing for a selection along the block-size, mutation-cost and traversal-cost spectrums. A fixed width of 8 gives a maximum arity of 8. It's also something that probably should be benchmarked to understand the true costs involved, but we can't do that without the ability to change it. Some Bmap code in here suggests that variability was a TODO, but a single byte is the easy case and that's taken advantage of here, hence the 8.
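The arithmetic a configurable width has to support is simple enough; a rough sketch, assuming power-of-two widths:

// width determines the bits consumed per level and the bitmap size; width=8
// is the easy case because the bitmap fits exactly in a single byte
for (const width of [8, 16, 32, 64]) {
  const bitsPerLevel = Math.log2(width)
  const bmapBytes = Math.ceil(width / 8)
  console.log(`width=${width}: ${bitsPerLevel} bits/level, ${bmapBytes}-byte bitmap`)
}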

created time in 9 days

pull request comment filecoin-project/go-amt-ipld

chore: docs

Added doc.go with algorithm summary. More meat in here than the HAMT since we don't yet have a proper spec doc.

I thought about trying to pull in some of the ASCII art used in https://github.com/ipld/specs/blob/master/data-structures/vector.md, adapted for sparse indexing, but ran out of steam. Maybe that can be saved for a spec doc. Those are useful visualisations for what's going on here; you just have to imagine missing bits in the middle of the tree structures.

Ready for review.

@anorth @momack2 @mikeal @Stebalien

rvagg

comment created time in 9 days

push event rvagg/go-amt-ipld

Rod Vagg

commit sha ddcc173a44ee9460ac67c9b7cabd2f7b1f3947a2

fixup! chore: docs

view details

push time in 9 days

push event rvagg/go-amt-ipld

Rod Vagg

commit sha 42c11adba99307388dfc77e01231585d25f39d78

fixup! chore: docs

view details

Rod Vagg

commit sha 08c5ea5458f053563f32003bf1ec5a1e55116295

fixup! chore: docs

view details

push time in 9 days

issue comment nodejs/build

Offboard inactive members

done

AshCripps

comment created time in 9 days

Pull request review comment nodejs/build

ansible: metrics server (WIP) & removal of Cloudflare cache bypass

+#!/usr/bin/env node++const { pipeline, Transform } = require('stream')+const split2 = require('split2')+const strftime = require('strftime').timezone(0)+const {Storage} = require('@google-cloud/storage');++const storage = new Storage({keyFilename: "metrics-processor-service-key.json"});++const jsonStream = new Transform({+  readableObjectMode: true,+  transform (chunk, encoding, callback) {+    try {+      this.push(JSON.parse(chunk.toString()))+      callback()+    } catch (e) {+      callback(e)+    }+  }+})++const extensionRe = /\.(tar\.gz|tar\.xz|pkg|msi|exe|zip|7z)$/+const uriRe = /(\/+(dist|download\/+release)\/+(node-latest\.tar\.gz|([^/]+)\/+((win-x64|win-x86|x64)?\/+?node\.exe|(x64\/)?node-+(v[0-9.]+)[.-]([^? ]+))))/+const versionRe = /^v[0-9.]+$/++function determineOS (path, file, fileType) {+  if (/node\.exe$/.test(file)) {+    return 'win'+  } else if (/\/node-latest\.tar\.gz$/.test(path)) {+    return 'src'+  } else if (fileType == null) {+    return ''+  } else if (/msi$/.test(fileType) || /^win-/.test(fileType)) {+    return 'win'+  } else if (/^tar\..z$/.test(fileType)) {+    return 'src'+  } else if (/^headers\.tar\..z$/.test(fileType)) {+    return 'headers'+  } else if (/^linux-/.test(fileType)) {+    return 'linux'+  } else if (fileType === 'pkg' || /^darwin-/.test(fileType)) {+    return 'osx'+  } else if (/^sunos-/.test(fileType)) {+    return 'sunos'+  } else if (/^aix-/.test(fileType)) {+    return 'aix'+  } else {+    return ''+  }+}++function determineArch (fileType, winArch, os) {+  if (fileType != null) {+    if (fileType.indexOf('x64') >= 0 || fileType === 'pkg') {+      // .pkg for Node.js <= 0.12 were universal so may be used for either x64 or x86+      return 'x64'+    } else if (fileType.indexOf('x86') >= 0) {+      return 'x86'+    } else if (fileType.indexOf('armv6') >= 0) {+      return 'armv6l'+    } else if (fileType.indexOf('armv7') >= 0) { // 4.1.0 had a misnamed binary, no 'l' in 'armv7l'+      return 'armv7l'+    } else if (fileType.indexOf('arm64') >= 0) {+      return 'arm64'+    } else if (fileType.indexOf('ppc64le') >= 0) {+      return 'ppc64le'+    } else if (fileType.indexOf('ppc64') >= 0) {+      return 'ppc64'+    } else if (fileType.indexOf('s390x') >= 0) {+      return 's390x'+    }+  }++  if (os === 'win') {+    // we get here for older .msi files and node.exe files+    if (winArch && winArch.indexOf('x64') >= 0) {+      // could be 'x64' or 'win-x64'+      return 'x64'+    } else {+      // could be 'win-x86' or ''+      return 'x86'+    }+  }++  return ''+}++const logTransformStream = new Transform({+  writableObjectMode: true,+  transform (chunk, encoding, callback) {+    if (chunk.ClientRequestMethod !== 'GET' ||+        chunk.EdgeResponseStatus < 200 ||+        chunk.EdgeResponseStatus >= 300) {+      return callback()+    }++    if (chunk.EdgeResponseBytes < 1024) { // unreasonably small for something we want to measure+      return callback()+    }++    if (!extensionRe.test(chunk.ClientRequestPath)) { // not a file we care about+      return callback()+    }++    const requestPath = chunk.ClientRequestPath.replace(/\/\/+/g, '/')+    const uriMatch = requestPath.match(uriRe)+    if (!uriMatch) { // what is this then?+      return callback()+    }++    const path = uriMatch[1]+    const pathVersion = uriMatch[4]+    const file = uriMatch[5]+    const winArch = uriMatch[6]+    const fileVersion = uriMatch[8]+    const fileType = uriMatch[9]++    let version = ''+    // version can come from the filename or the path, filename is 
best+    // but it may not be there (e.g. node.exe) so fall back to path version+    if (versionRe.test(fileVersion)) {+      version = fileVersion+    } else if (versionRe.test(pathVersion)) {+      version = pathVersion+    }++    const os = determineOS(path, file, fileType)+    const arch = determineArch(fileType, winArch, os)++    const line = []+    line.push(strftime('%Y-%m-%d', new Date(chunk.EdgeStartTimestamp / 1000 / 1000))) // date+    line.push(chunk.ClientCountry.toUpperCase()) // country+    line.push('') // state/province, derived from chunk.EdgeColoCode probably+    line.push(chunk.ClientRequestPath) // URI+    line.push(version) // version+    line.push(os) // os+    line.push(arch) // arch+    line.push(chunk.EdgeResponseBytes)++    this.push(`${line.join(',')}\n`)++    callback()+  }+})+++exports.processLogs = (data, context, callback) => {+  console.log('Node version is: ' + process.version);+  const file = data;+  bucketName = file.bucket;+  fileName = file.name;+  console.log("DATA " + data);+  console.log("BUCKET " + bucketName);+  console.log("FILENAME " + fileName);+  processedFile = fileName.split(".")[0];+  processedFile = processedFile.split("_")[0].concat("_", processedFile.split("_")[1]); +  console.log("PROCESSEDFILENAME " + processedFile);++  storage.bucket(bucketName).file(file.name).createReadStream()+  .on('error', function(err) { console.error(err) })

Did you try removing the e from the jsonStream error callback? Perhaps we're getting a line somewhere that we can't parse properly for some reason but should be able to ignore - might be worth trying. It could be that an error in either jsonStream or logTransformStream is closing the downstream streams so that split2 can't push any more in.

stream.pipeline() is really worth trying here, not just for the error handling but also for the nice cleanup it does in managing these complicated events.
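Something like this shape, reusing the names from the diff above (a sketch only):

const { pipeline } = require('stream')

pipeline(
  storage.bucket(bucketName).file(fileName).createReadStream(),
  split2(),
  jsonStream,
  logTransformStream,
  storage.bucket(bucketName).file(processedFile).createWriteStream(),
  (err) => {
    // pipeline() destroys all of the streams and cleans up listeners on
    // failure, so a single error handler here covers the whole chain
    if (err) console.error('pipeline failed', err)
    else console.log('pipeline succeeded')
  }
)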

rvagg

comment created time in 9 days

Pull request review comment ipld/js-ipld-dag-cbor

fix: replace node buffers with uint8arrays

  ## Table of Contents -- [Install](#install)-  - [npm](#npm)-  - [Use in Node.js](#use-in-nodejs)-  - [Use in a browser with browserify, webpack or any other bundler](#use-in-a-browser-with-browserify-webpack-or-any-other-bundler)-  - [Use in a browser Using a script tag](#use-in-a-browser-using-a-script-tag)-- [Usage](#usage)-- [API](#api)-- [Contribute](#contribute)-- [License](#license)+- [js-ipld-dag-cbor](#js-ipld-dag-cbor)

can we avoid this as the top-level container of the TOC?

achingbrain

comment created time in 9 days

Pull request review comment ipld/interface-ipld-format

docs: replace node buffers with uint8arrays

 IPLD Format APIs are restricted to a single IPLD Node, they never access any lin  `IpldNode` is a previously deserialized binary blob. -Returns an `Buffer` with the serialized version of the given IPLD Node.+Returns an [Uint8Array] with the serialized version of the given IPLD Node.

s/an/a

achingbrain

comment created time in 9 days

pull request comment multiformats/js-cid

feat: expose codec code and allow construction by code

@Gozala would appreciate some focused eyes on my Flow and TS changes; I think you might have more of a clue than me on both of those

rvagg

comment created time in 9 days

PR opened multiformats/js-cid

feat: expose codec code and allow construction by code

(this PR assumes #117, review should focus only on the HEAD commit please, c043bc5)

Ref: https://github.com/multiformats/js-cid/pull/117#issuecomment-668131658

The idea here is that js-multiformats is going to require a switch to using the multicodec integer code rather than the string, which does away with the need to bundle the entire multicodec table (if you want the string form and you're happy to carry the table then that's fine, but you get to choose that). Experience shows that this is the most painful compatibility problem when switching between this library and multiformats.CID (and vice versa). Having a constructor that can (optionally) take an integer code rather than a string makes it the same constructor as multiformats.CID, and exposing the code property makes it the same interface. So we can write new code that works against both CID and multiformats.CID without having to worry too much. The asCID() will be icing on the cake too.
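To make that concrete, this is the kind of thing it enables (a sketch assuming this PR's constructor overload; 0x71 is dag-cbor's multicodec code):

const crypto = require('crypto')
const multihashes = require('multihashes')
const CID = require('cids')

const mh = multihashes.encode(crypto.createHash('sha256').update('hello').digest(), 'sha2-256')
const byName = new CID(1, 'dag-cbor', mh) // existing string-codec form
const byCode = new CID(1, 0x71, mh)       // new integer-code form, as in multiformats.CID
console.log(byName.equals(byCode)) // true, and byCode.code === 0x71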

As long as we're getting an awkward breaking change into the code, we may as well make sure that anyone who has that change also has this, so there's no in-between state where the breaking change is out in the wild and this feature is separate and you may have one but not the other. Basically it'd be nice to have this released ASAP to ease the future transition, and the current breaking-change release seems like a good mechanism to force that.

+212 -135

0 comment

10 changed files

pr created time in 9 days

create branch multiformats/js-cid

branch : rvagg/codec-code

created branch time in 9 days

pull request comment multiformats/multicodec

Add P-384

This is fine by me, but we're now out of space in 0xeX. I'm wondering if we should just start populating a new space for curve public keys, since there are potentially a lot of them, leaving the current ones in place. There are four more prime curves in the NIST set that would go along with just this one, and there's no space to nicely order them if we lock this one in.

Thoughts from anyone else?

@OR13 just out of interest, do you have a current use-case for a P-384 public key?

OR13

comment created time in 10 days

delete branch nodejs/email

delete branch : update-mmarchini-emails

delete time in 10 days

push event nodejs/email

mary marchini

commit sha 387e60432bad4084dc1fac1a0139207f7c470f67

aliases: update my email + add to github-bot (#161)

view details

push time in 10 days

Pull request review comment filecoin-project/specs

Piece description update

 It is important to highlight that data submitted to the Filecoin network go thro  1. When a piece of data, or file is submitted to Filecoin (in some raw system format) it is transformed into a _UnixFS DAG style data representation_ (in case it is not in this format already, e.g., from IPFS-based applications). The hash that represents the root of the IPLD DAG of the UnixFS file is the _Payload CID_, which is used in the Retrieval Market. The Payload CID is identical to an IPFS CID. 2. In order to make a _Filecoin Piece_ the UnixFS IPLD DAG is serialised into a ["Content-Addressable aRchive" (.car)](https://github.com/ipld/specs/blob/master/block-layer/content-addressable-archives.md#summary) file, which is in raw bytes format.-3. The resulting .car file is _padded_ with extra bits.+3. The resulting .car file is _padded_ with extra bits in order to get it to "power of 2" size. This is done in order for the file to make a binary Merkle tree. This means that two zero (0) bits need to be added to every 254 bits (to make the 256 bits). In case more padding is needed in order to reach the 254 bit threshold, then these bits are also filled with zeros. 4. The next step is to calculate the Merkle root out of the hashes of the Piece. The resulting root of the Merkle tree is the **Piece CID**. This is also referred to as _CommP_ or _Piece Commitment_.

I think you're a better judge of the audience here, so you should decide what level of detail to include. "in order for the file to make a binary Merkle tree" is only correct for one stage of padding; the fr32 padding is something else altogether. I'm not sure how much it matters for this audience that the input is padded both internally and to a specific length, such that it doesn't quite resemble your input before it is merklized. For now maybe it's fine as is; implementers and anyone else wanting to generate CommP separately from a full client (like we've had to do for these offline deals with Filecoin Discover data) will have to come to a more sophisticated understanding of the process, and this might not be the place for them to get that detail! Your call @yiannisbot, I think you have most of the details now.
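For reference, the size arithmetic alone looks roughly like this (my reading of the 254-bits-per-256 description above; a sketch of the sizes only, not the actual fr32 bit packing):

// raw input is zero-padded up to 127 * 2^n bytes, then fr32 padding (2 zero
// bits per 254 bits) expands it by 128/127 to a power-of-two piece size
function pieceSizes (rawBytes) {
  let unpadded = 127
  while (unpadded < rawBytes) unpadded *= 2
  return { unpadded, padded: unpadded / 127 * 128 }
}
console.log(pieceSizes(1000)) // { unpadded: 2032, padded: 2048 }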

yiannisbot

comment created time in 13 days

PR opened ipld/go-ipld-adl-hamt

switch keyed union keys

links are "0", buckets are "1"

+2 -2

0 comment

1 changed file

pr created time in 13 days

create branch ipld/go-ipld-adl-hamt

branch : rvagg/keyed-union-switch

created branch time in 13 days

Pull request review comment ipld/go-ipld-prime

Struct tuple representation codegen

+package gengo++import (+	"io"+	"strconv"++	"github.com/ipld/go-ipld-prime/schema"+	"github.com/ipld/go-ipld-prime/schema/gen/go/mixins"+)++var _ TypeGenerator = &structReprTupleGenerator{}++// Optional fields for tuple representation are only allowed at the end, and contiguously.+// Present fields are matched greedily: if the struct has five fields,+//  and the last two are optional, and there's four values, then they will be mapped onto the first four fields, period.+// In theory, it would be possible to support a variety of fancier modes, configurably;+//  in practice, let's not: the ROI would be atrocious:+//   few people seem to want this;+//   the implementation complexity would rise dramatically;+//   and the next nearest substitutes for such behavior are already available, and cheap (and also sturdier).+// It would make about as much sense to support implicits as it does trailing optionals,+//  which means we probably should consider that someday,+//   but it's not implemented today.++func NewStructReprTupleGenerator(pkgName string, typ *schema.TypeStruct, adjCfg *AdjunctCfg) TypeGenerator {+	return structReprTupleGenerator{+		structGenerator{+			adjCfg,+			mixins.MapTraits{+				pkgName,+				string(typ.Name()),+				adjCfg.TypeSymbol(typ),+			},+			pkgName,+			typ,+		},+	}+}++type structReprTupleGenerator struct {+	structGenerator+}++func (g structReprTupleGenerator) GetRepresentationNodeGen() NodeGenerator {+	return structReprTupleReprGenerator{+		g.AdjCfg,+		mixins.ListTraits{+			g.PkgName,+			string(g.Type.Name()) + ".Repr",+			"_" + g.AdjCfg.TypeSymbol(g.Type) + "__Repr",+		},+		g.PkgName,+		g.Type,+	}+}++type structReprTupleReprGenerator struct {+	AdjCfg *AdjunctCfg+	mixins.ListTraits+	PkgName string+	Type    *schema.TypeStruct+}++func (structReprTupleReprGenerator) IsRepr() bool { return true } // hint used in some generalized templates.++func (g structReprTupleReprGenerator) EmitNodeType(w io.Writer) {+	// The type is structurally the same, but will have a different set of methods.+	doTemplate(`+		type _{{ .Type | TypeSymbol }}__Repr _{{ .Type | TypeSymbol }}+	`, w, g.AdjCfg, g)+}++func (g structReprTupleReprGenerator) EmitNodeTypeAssertions(w io.Writer) {+	doTemplate(`+		var _ ipld.Node = &_{{ .Type | TypeSymbol }}__Repr{}+	`, w, g.AdjCfg, g)+}++func (g structReprTupleReprGenerator) EmitNodeMethodLookupByIndex(w io.Writer) {+	doTemplate(`+		func (n *_{{ .Type | TypeSymbol }}__Repr) LookupByIndex(idx int) (ipld.Node, error) {+			switch idx {+			{{- range $i, $field := .Type.Fields }}+			case {{ $i }}:+				{{- if $field.IsOptional }}+				if n.{{ $field | FieldSymbolLower }}.m == schema.Maybe_Absent {+					return ipld.Absent, ipld.ErrNotExists{ipld.PathSegmentOfInt(idx)}+				}+				{{- end}}+				{{- if $field.IsNullable }}+				if n.{{ $field | FieldSymbolLower }}.m == schema.Maybe_Null {+					return ipld.Null, nil+				}+				{{- end}}+				{{- if $field.IsMaybe }}+				return n.{{ $field | FieldSymbolLower }}.v.Representation(), nil+				{{- else}}+				return n.{{ $field | FieldSymbolLower }}.Representation(), nil+				{{- end}}+			{{- end}}+			default:+				return nil, schema.ErrNoSuchField{Type: nil /*TODO*/, Field: ipld.PathSegmentOfInt(idx)}+			}+		}+	`, w, g.AdjCfg, g)+}++func (g structReprTupleReprGenerator) EmitNodeMethodLookupByNode(w io.Writer) {+	doTemplate(`+		func (n *_{{ .Type | TypeSymbol }}__Repr) LookupByNode(key ipld.Node) (ipld.Node, error) {+			ki, err := key.AsInt()+			if err != nil {+				return nil, err+			}+			return n.LookupByIndex(ki)+		}+	`, w, g.AdjCfg, 
g)+}++func (g structReprTupleReprGenerator) EmitNodeMethodListIterator(w io.Writer) {+	// DRY: much of this precalcuation about doneness is common with the map representation.+	//  (or at least: it is for now: the addition of support for implicits in the map representation may bamboozle that.)+	//  Some of the templating also experiences the `.HaveTrailingOptionals` branching,+	//   but not quite as much as the map representation: since we always know those come at the end+	//    (and in particular, once we hit one absent, we're done!), some simplifications can be made.++	// The 'idx' int is what field we'll yield next.+	// Note that this iterator doesn't mention fields that are absent.+	//  This makes things a bit trickier -- especially the 'Done' predicate,+	//   since it may have to do lookahead if there's any optionals at the end of the structure!++	// Count how many trailing fields are optional.+	//  The 'Done' predicate gets more complex when in the trailing optionals.+	fields := g.Type.Fields()+	fieldCount := len(fields)+	beginTrailingOptionalField := fieldCount+	for i := fieldCount - 1; i >= 0; i-- {+		if !fields[i].IsOptional() {+			break+		}+		beginTrailingOptionalField = i+	}+	haveTrailingOptionals := beginTrailingOptionalField < fieldCount++	// Now: finally we can get on with the actual templating.+	doTemplate(`+		func (n *_{{ .Type | TypeSymbol }}__Repr) ListIterator() ipld.ListIterator {+			{{- if .HaveTrailingOptionals }}+			end := {{ len .Type.Fields }}`++		func() string { // this next part was too silly in templates due to lack of reverse ranging.+			v := "\n"+			for i := fieldCount - 1; i >= beginTrailingOptionalField; i-- {+				v += "\t\t\tif n." + g.AdjCfg.FieldSymbolLower(fields[i]) + ".m == schema.Maybe_Absent {\n"+				v += "\t\t\t\tend = " + strconv.Itoa(i) + "\n"+				v += "\t\t\t} else {\n"+				v += "\t\t\t\tgoto done\n"+				v += "\t\t\t}\n"+			}+			return v+		}()+`done:+			return &_{{ .Type | TypeSymbol }}__ReprListItr{n, 0, end}+			{{- else}}+			return &_{{ .Type | TypeSymbol }}__ReprListItr{n, 0}+			{{- end}}+		}++		type _{{ .Type | TypeSymbol }}__ReprListItr struct {+			n   *_{{ .Type | TypeSymbol }}__Repr+			idx int+			{{if .HaveTrailingOptionals }}end int{{end}}+		}++		func (itr *_{{ .Type | TypeSymbol }}__ReprListItr) Next() (idx int, v ipld.Node, err error) {+			if itr.idx >= {{ len .Type.Fields }} {+				return -1, nil, ipld.ErrIteratorOverread{}+			}+			switch itr.idx {+			{{- range $i, $field := .Type.Fields }}+			case {{ $i }}:+				idx = itr.idx+				{{- if $field.IsOptional }}+				if itr.n.{{ $field | FieldSymbolLower }}.m == schema.Maybe_Absent {+					return -1, nil, ipld.ErrIteratorOverread{}+				}+				{{- end}}+				{{- if $field.IsNullable }}+				if itr.n.{{ $field | FieldSymbolLower }}.m == schema.Maybe_Null {+					v = ipld.Null+					break+				}+				{{- end}}+				{{- if $field.IsMaybe }}+				v = itr.n.{{ $field | FieldSymbolLower}}.v.Representation()+				{{- else}}+				v = itr.n.{{ $field | FieldSymbolLower}}.Representation()+				{{- end}}+			{{- end}}+			default:+				panic("unreachable")+			}+			itr.idx+++			return+		}+		{{- if .HaveTrailingOptionals }}+		func (itr *_{{ .Type | TypeSymbol }}__ReprListItr) Done() bool {+			return itr.idx >= itr.end+		}+		{{- else}}+		func (itr *_{{ .Type | TypeSymbol }}__ReprListItr) Done() bool {+			return itr.idx >= {{ len .Type.Fields }}+		}+		{{- end}}++	`, w, g.AdjCfg, struct {+		Type                  *schema.TypeStruct+		HaveTrailingOptionals bool+	}{+		g.Type,+		haveTrailingOptionals,+	})+}++func (g 
structReprTupleReprGenerator) EmitNodeMethodLength(w io.Writer) {+	// This is fun: it has to count down for any unset optional fields.

don't we have special rules for optional fields in tuple representation? like, only the last one can be optional for it to make much sense?

warpfork

comment created time in 13 days

Pull request review comment filecoin-project/go-amt-ipld

fix: reset leaf node bitmap on flush

 func (n *Node) Flush(ctx context.Context, bs cbor.IpldStore, depth int) error { 		if len(n.expVals) == 0 { 			return nil 		}+	  n.Bmap = [...]byte{0}

Yes, and in doing that there's an option to do away with expLinks and expVals entirely and just keep more careful track of index translation through Bmap. They do add a lot of convenience for some operations, but it wouldn't be hard to add convenience functions to help with the indexing. I don't think they add much in the way of performance; they could even hinder performance because they're used so liberally (e.g. a get() should really just map the index through Bmap and fetch directly out of Values or Links, with no need to expand just to read). And you have these extra structures hanging around in memory that aren't necessary, so they'll add a little to memory bloat.
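The direct mapping is the usual popcount trick; a sketch (assuming the Values/Links naming in this repo and a single-byte bitmap):

// count the set bits below pos in the bitmap to find the compacted index
// into Values or Links; no expansion needed just to read
function compactedIndex (bmap, pos) {
  if (!(bmap & (1 << pos))) return -1 // bit not set: no entry at pos
  let idx = 0
  for (let i = 0; i < pos; i++) {
    if (bmap & (1 << i)) idx++
  }
  return idx
}
console.log(compactedIndex(0b10110101, 5)) // 3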

rvagg

comment created time in 13 days

issue comment nodejs/build

Apple cert expiring in 30 days

oh, and its partner came in a separate email:

Your Developer ID Application Certificate will no longer be valid in 30 days. To generate a new certificate, sign in and visit Certificates, Identifiers & Profiles https://developer.apple.com/account/.

Certificate: Developer ID Application
Team ID: HX7739G8FX

rvagg

comment created time in 13 days

issue opened nodejs/build

Apple cert expiring in 30 days

We got this email today:

Your Developer ID Installer Certificate will no longer be valid in 30 days. To generate a new certificate, sign in and visit Certificates, Identifiers & Profiles https://developer.apple.com/account/.

Certificate: Developer ID Installer
Team ID: HX7739G8FX

We need to check whether that cert is still in use or if we've moved to a newer one and this is just an unused one expiring. I think I might have made new ones when I did the notarization work, so I'd be surprised if they're expiring this soon. I'm hoping this is a no-op for us but it needs checking.

created time in 13 days

delete branch ipld/specs

delete branch : fix/typo

delete time in 13 days

push event ipld/specs

Alan Shaw

commit sha d74ce9221a0ff64b79b6fad87aa6196499b45a4e

fix: typo

view details

push time in 13 days

PR merged ipld/specs

fix: typo

CUD -> CID

+1 -1

0 comment

1 changed file

alanshaw

pr closed time in 13 days
