Rod Vagg (rvagg) - require.io - NSW, Australia - http://r.va.gg - Awk Ninja; Yak Shaving Rock Star

microsoft/ChakraCore 8110

ChakraCore is the core part of the Chakra JavaScript engine that powers Microsoft Edge

ded/reqwest 2842

browser asynchronous http requests

nodejs/nan 2706

Native Abstractions for Node.js

fat/bean 1380

an events api for javascript

ded/bonzo 1294

library agnostic, extensible DOM utility

ded/qwery 1103

a query selector engine

microsoft/node-v0.12 800

Enable Node.js to use Chakra as its JavaScript engine.

ded/morpheus 504

A Brilliant Animator

justmoon/node-bignum 403

Big integers for Node.js using OpenSSL

isaacs/st 371

A node module for serving static files. Does etags, caching, etc.

pull request comment nodejs/node-gyp

configure.js: escape '<'

#1582 went out with v5.0.0, so it should be in npm already; maybe update npm: npm install npm -g

yeerkkiller1

comment created time in a day

issue comment nodejs/node-gyp

Corporate proxy and name resolution

EHOSTUNREACH 104.20.23.46:443 looks to me like it's not even trying to use your corporate proxy. Do you have one of the HTTP_PROXY environment variables set in your terminal so it knows which proxy to talk to? Without that it'll just blindly try to connect directly.

See https://github.com/nodejs/node-gyp/blob/master/lib/proxy.js#L70-L84 for some of the rules for detecting a proxy. Use the HTTPS_PROXY or HTTP_PROXY environment variables, or run npm config set proxy http..... to get this done.

cstffx

comment created time in a day

issue comment nodejs/build

Introduction: I'm John. How can I help?

Welcome @jkleinsc & @ckerr!

First, I'll be honest that we still haven't figured out how best to onboard people smoothly. The challenge of balancing the access & trust required for most of our resources creates a bit of a speed bump in getting people up to speed, so I hope you'll be patient with us as we work on this.

We also have @davidstanke, from Google, who is keen to contribute too (#2171). We're all very keen to find a way for you all to become productive contributors because we have a large load that we'd like to share! It is considerably easier to establish trust relationships with people who work for large companies that already contribute to Node (the contractual employment arrangement gives us a legal basis for trust), so I think we should be able to get you all up to our basic access level fairly quickly.

I think the first thing is to establish your presence around here so that there is a sense among the @nodejs/build team that there's a basic level of commitment. Some suggestions for that:

  • Watch this repo, it's not high traffic
  • Participate in our meetings (every 3 weeks-ish, it's on the Node.js gcal, which I think you can add from here: https://calendar.google.com/calendar/embed?src=nodejs.org_nr77ama8p7d7f9ajrpnu506c98%40group.calendar.google.com, there's an issue opened on this repo prior to each meeting as a reminder and we collect agenda items -- questions and requests for deep-dives are welcome).
  • Comment, review PRs, put up your hand if you think you can contribute something
  • Scan nodejs/node issues and PRs tagged with build: https://github.com/nodejs/node/issues?q=is%3Aissue+is%3Aopen+label%3Abuild (the ones where @nodejs/build are mentioned are particularly interesting to us, some others just touch toolchain items).
  • There's a list of open items being discussed in #2171, feel free to pick some things off and ask if you can contribute (in here, that issue, or a new issue).
  • Open new issues here if you want to make suggestions or want to dig in to things that are not clear

In terms of the scope of potential work (/cc @davidstanke, @AshCripps and anyone else newish too), here are my thoughts from a historical perspective:

Aside from the Node.js codebase itself, the Build Working Group is the oldest formal part of what is now the OpenJS Foundation. We started it while we were still negotiating with Joyent for more open governance of Node.js (under the "node-forward" banner). That process eventually escalated with the fork (io.js), but even by that time we already had our Jenkins infrastructure, which was far more sophisticated than what Joyent had, and we were accumulating donor infrastructure from companies that were keen to contribute to Node -- we were the only ones offering a way to do that.

After the "merge" and the formation of the Foundation, our infrastructure and the complexity of what we do ramped up, in accordance with the skill and time of the people we had contributing and the amount of donated infrastructure we were able to accumulate. A lot of that was ad-hoc, as these things often are, but most of it has persisted. We maintain most of what can be called "infrastructure" that the project relies on, not just Jenkins.

We're now 6 years removed from that. We've gone through a period of growth, and the last few years have been more characterised by decline, mainly in terms of people-time. Thankfully IBM has stepped up in recent years as our main people-contributor, and this has kept us going fairly strong.

We have a relatively sophisticated setup, a significant amount of resources (all still gratis!), but it constantly shows its age. We've talked many times about rearchitecting it to properly iron out all the wrinkles (flaws) that we see but it's hard to do that while still continuing to serve nodejs/node, libuv and some other associated projects. Most of our time is spent on fixing things and patching over older things, but we still have the basic architecture we've always had. Access keeps on being a problem we have to find ways around. Some of the tasks can only be done by a small core group who have access to key resources--it would be nice if our architecture allowed for more granularity than it currently does.

So in summary, there are two parallel ways to think about contributions here: (1) help us maintain and slowly improve what we have to serve Node.js (and associated projects), (2) help us build some next-generation pieces of the puzzle to replace what we're currently doing (CI, www asset serving, release pipeline, test/release/infra server maintenance, access granularity, relationships with donor companies, relationship with client projects, management of this repo, management of the team, etc. etc.). There's a lot of scope for doing things in entirely new ways, you'll just have to apply some skill in navigating such things in a well-established open source project with a team of opinionated (and sometimes grumpy) people.

jkleinsc

comment created time in a day

issue comment nodejs/build

OS X CI strategy roadmap

Current, more granular progress on where we're at right now can probably be followed in https://github.com/nodejs/node/pull/31459

sam-github

comment created time in a day

issue comment nodejs/node

CI is unusable

Browsing through some of these I'm not seeing a pattern that pertains to our build infra.

Unless you can home in on a particular infra issue, it's on you all to give our infra better code so we don't have so many flaky failures.

tniessen

comment created time in a day

push event rvagg/rust-fil-commp-generate

Rod Vagg

commit sha edb2a8a837d344bf2ae0eff4384e0fb82c3b665e

properly handle files smaller than DISK_MAX

view details

push time in a day

Pull request review comment nodejs/build

[WIP] ansible: add 10.15 macs

     - java.rc > 0 and not os|startswith("macos")
   package: name="{{ java_package_name }}" state=present
+- name: install java tap (macOS)
+  become_user: administrator
+  when: java.rc > 0 and os|startswith("macos")
+  homebrew_tap:
+        name: AdoptOpenJDK/openjdk
+        state: present

So partials/brew.yml ran, and you saw it running homebrew_tap for the AdoptOpenJDK/openjdk cask, and the next task, which installs the package, failed until you yanked the homebrew_tap into this position? It's not an ordering problem, is it? partials/brew.yml ran before this, didn't it?

AshCripps

comment created time in 2 days

pull request comment nodejs/build

workflow: add stale action

I can't muster a strong opinion on this tbh. The open issues don't bother me so much on this repo since we're not so much building code (where open issues and PRs bother me more) as managing infra (where status and information is more useful). If you think this will add value then I'm fine with working around it where it gets in my way; being able to label and/or bump is easy enough.

sam-github

comment created time in 2 days

delete branch nodejs/build

delete branch : rvagg/foundation-redirects

delete time in 2 days

push event nodejs/build

Rod Vagg

commit sha 53838da3d0eb7b9bc2f7a923502a1bdffbdec179

www: redirects for renamed openjsf content

Fixes: https://github.com/nodejs/build/issues/2194
PR-URL: https://github.com/nodejs/build/pull/2195
Reviewed-By: Brian Warner <brian@bdwarner.com>

view details

push time in 2 days

PR merged nodejs/build

www: redirects for renamed openjsf content

Fixes: https://github.com/nodejs/build/issues/2194

would appreciate nginx config sanity checking before I try putting this on the server.

+7 -1

1 comment

1 changed file

rvagg

pr closed time in 2 days

issue closed nodejs/build

Request URL redirect: Node.js case studies

As expected when decommissioning foundation.nodejs.org, we ran into a few circumstances where links to content should be preserved after putting the redirects in place.

Would it be possible to add redirects on the foundation.nodejs.org subdomain for the following docs? Thanks!

https://foundation.nodejs.org/wp-content/uploads/sites/50/2017/09/Node_CaseStudy_Nasa_FNL.pdf -> https://openjsf.org/wp-content/uploads/sites/84/2020/02/Case_Study-Node.js-NASA.pdf

https://foundation.nodejs.org/wp-content/uploads/sites/50/2017/09/Node_CaseStudy_Fusion_Final.pdf -> https://openjsf.org/wp-content/uploads/sites/84/2020/02/Case_Study-Node.js-Fusion.pdf

https://foundation.nodejs.org/wp-content/uploads/sites/50/2017/09/Node_CaseStudy_Walmart_final-1.pdf -> https://openjsf.org/wp-content/uploads/sites/84/2020/02/Case_Study-Node.js-Walmart.pdf

https://foundation.nodejs.org/wp-content/uploads/sites/50/2017/09/Node_CaseStudy_HomeAway.pdf -> https://openjsf.org/wp-content/uploads/sites/84/2020/02/Case_Study-Node.js-HomeAway.pdf

https://foundation.nodejs.org/wp-content/uploads/sites/50/2017/09/Node_CapitalOne_FINAL_casestudy.pdf -> https://openjsf.org/wp-content/uploads/sites/84/2020/02/Case_Study-Node.js-CapitalOne.pdf

closed time in 2 days

brianwarner

pull request comment nodejs/build

www: redirects for renamed openjsf content

good thing I got you to check! whoops. deployed

rvagg

comment created time in 2 days

push event nodejs/build

Rod Vagg

commit sha d94b1d1f5c1434a544ed7774424cf8a8fd36d640

www: redirects for renamed openjsf content

Fixes: https://github.com/nodejs/build/issues/2194
PR-URL: https://github.com/nodejs/build/pull/2195
Reviewed-By: Brian Warner <brian@bdwarner.com>

view details

push time in 2 days

issue comment nodejs/node-gyp

request has gone into maintenance mode. Maybe replace it.

The main one I can think of is proxy detection, but we've done the heavy lifting now (a recent addition) in proxy.js, which is mostly copied out of request, so we should be able to leverage that for free! I can't think of anything else off the top of my head; I'm just a little afraid that there are some, because of the wide variety of ways people are using node-gyp (behind weird firewalls, proxies, redirecting to alternate source locations - like for electron...).

Tchiller

comment created time in 2 days

issue comment nodejs/node-gyp

request has gone into maintenance mode. Maybe replace it.

hyperquest is still "supported", it has many users who rely on it. It's not recently updated because it just works and is simple enough that it doesn't need constant curation. substack would consider it "finished" although he's handed over maintenance to others (Julian Gruber among others I believe). request is a beast of a codebase with a lot of dependencies, hyperquest is much more focused.

also, npm install node-gyp won't give a deprecation notice if we switch to hyperquest! That's the top-level concern here: there are no real problems with request (yet), but the deprecation notice is going to get severely annoying and I expect this issue to get busier as people encounter it more and more.

Tchiller

comment created time in 2 days

issue comment nodejs/node-gyp

request has gone into maintenance mode. Maybe replace it.

hyperquest seems dated, though that means support for older Node versions would be a given.

I'm thinking more about the amount of change required on our end to adopt something new. We're still working on a thoroughly old-style Node codebase. Modernising is an option, but a costly one that I don't see happening in the short term. Hence, hyperquest might be a good short-term fix; in many cases it's a drop-in replacement for request.

Tchiller

comment created time in 2 days

pull request comment nodejs/node-gyp

configure.js: escape '<'

@bzoz, @joaocgreis, please confirm

yeerkkiller1

comment created time in 2 days

issue comment nodejs/node-gyp

request has gone into maintenance mode. Maybe replace it.

What version of Node we support is partly informed by npm. I don't have much insight into their current strategy but recent years suggest that they're being more aggressive with dropping older versions of Node from support as those versions get dropped upstream.

I don't think there's any reason to consider supporting Node 6, it's roughly 1 year out of support. Node 8 was dropped at the beginning of this year. There's an argument to be made that we should consider continuing to make node-gyp compatible with Node 8. I wouldn't want npm to decide to ship an older node-gyp because they wanted to be more compatible than us. But, at this point, the kinds of features we want to start adopting are in Node 8 anyway - async/await being the main one, so it probably wouldn't be hard.

After my last post here I thought of a possible interim solution: switching to hyperquest. I still use it in many of my Node libraries and it still works great. It's much simpler than request but has a very similar API, so it doesn't suffer from many of the costs that request had with its bloated feature set and dependency tree. The challenge we'll have is that some of those missing features are required by our users, such as proper proxy detection. But we've already adopted a big chunk of request's proxy code here to do some custom work with it, so that should be fairly simple to insert into hyperquest to bring parity. We're going to have this feature problem with whatever we switch to, I think; there'll be some weird edges where request has certain behaviour that our users are relying on that we will no longer support when we switch HTTP clients.

Tchiller

comment created time in 2 days

pull request comment nodejs/build

ansible: add container for UBI 8.1

I think once you have inventory.yml set up properly in the secrets repo and new entries in jenkins, you should just be able to run ansible (playbooks/jenkins/docker-host.yaml) on the docker hosts and it should set this new one up and add it in. I think the existing containers should stay running even if new images are created, iirc. Try it on one of them and see how it goes!

richardlau

comment created time in 3 days

issue comment nodejs/build

Request URL redirect: Node.js case studies

@brianwarner please review https://github.com/nodejs/build/pull/2195

brianwarner

comment created time in 3 days

PR opened nodejs/build

www: redirects for renamed openjsf content

Fixes: https://github.com/nodejs/build/issues/2194

would appreciate nginx config sanity checking before I try putting this on the server.

+7 -1

0 comment

1 changed file

pr created time in 3 days

create branch nodejs/build

branch : rvagg/foundation-redirects

created branch time in 3 days

pull request comment nodejs/docker-node

Update node.js v13.x to v13.9.0 with Yarn v1.22.0

Oh, of course, it's only where we need to compile from source - Alpine - where this is a problem. My advice is to write this release off for Alpine. But if someone wanted to put in the effort to special-case it, it's possible to hack in a manual download of the missing piece just for this one.

PeterDaveHello

comment created time in 3 days

pull request comment nodejs/docker-node

Update node.js v13.x to v13.9.0 with Yarn v1.22.0

Are you getting successful builds on the other platforms? Or are you not using the tarballs to build? I thought the build error was universal for all tarballs, and unofficial-builds is basically doing the same compile dance as you're doing in your Dockerfiles?

PeterDaveHello

comment created time in 6 days

issue comment nodejs/node-gyp

Remove python 2.7 as requirement from the README.md

Yeah, it's still on 5.0.5 and unfortunately people are still directed here for details on what to install for native addons. Maybe once we get npm out with ^5.1.0 we can start being more aggressive with our README?

Pomax

comment created time in 6 days

issue closed nodejs/unofficial-builds

13.9.0 folder only contains headers

Not sure if the CI got stuck somewhere, but the Alpine and ARM builds look like they didn't get triggered

closed time in 7 days

nschonni

issue comment nodejs/unofficial-builds

13.9.0 folder only contains headers

It's the zlib source tarball problem (https://unofficial-builds.nodejs.org/logs/202002182057-v13.9.0/x86.log, https://github.com/nodejs/node/issues/31858); not much we can do about it, we'll have to wait for the next release.

nschonni

comment created time in 7 days

push event rvagg/rust-fil-commp-generate

Rod Vagg

commit sha 95b6f010fa8b0777e69d8a155cf3c746e8ae834a

output tweaks, proper piece size calculation, update deps

view details

push time in 7 days

push event rvagg/js-fil-utils

Rod Vagg

commit sha 52255d1603ac49d912d8a1ede66bab5c0baa228b

doc tweaks

view details

push time in 7 days

create branch rvagg/js-fil-utils

branch : master

created branch time in 7 days

created repository rvagg/js-fil-utils

Miscellaneous JavaScript Filecoin proofs utilities

created time in 7 days

push event rvagg/rust-fil-commp-generate

Rod Vagg

commit sha 94103c4c3866a161b9dc5131f992aa097b923ee7

update deps and dep pinning

view details

push time in 7 days

push event rvagg/rust-fil-commp-generate

Volker Mische

commit sha a430270f16e47397d5228ce6bdebb5b8e2fb833c

chore: upgrade to merkle_light 0.16

view details

Volker Mische

commit sha f9d0c551a2068bbeee204650ccab4605cd03cfaf

feat: also monitor maximum allocated memory

view details

push time in 7 days

delete branch rvagg/rust-fil-commp-generate

delete branch : monitor-peak-vsz

delete time in 7 days

push event rvagg/rust-fil-commp-generate

Volker Mische

commit sha 5243428115de18fb87cca6acdd172b89b1f68d7b

feat: also monitor maximum allocated memory

view details

Rod Vagg

commit sha eb93cb5e4a6ab687cb513cc15b90d0511b104b7b

Merge pull request #6 from rvagg/monitor-peak-vsz

feat: also monitor maximum allocated memory

view details

push time in 7 days

pull request comment rvagg/rust-fil-commp-generate

feat: also monitor maximum allocated memory

@ribasushi yeah, probably, but we're only after approximations here because the precise numbers that it uses in lambda are going to be different anyway.

vmx

comment created time in 7 days

delete branch rvagg/rust-fil-commp-generate

delete branch : merkle16

delete time in 7 days

push event rvagg/rust-fil-commp-generate

Volker Mische

commit sha a757a0d2f46d65839e627c435127cefdd56313b9

chore: upgrade to merkle_light 0.16

view details

Rod Vagg

commit sha 9b05f20c8b7db9467460fdbfa56244e2f7f735f3

Merge pull request #5 from rvagg/merkle16

chore: upgrade to merkle_light 0.16

view details

push time in 7 days

Pull request review comment rvagg/rust-fil-commp-generate

feat: move binaries source into the `src/bin` directory

 log = "^0.4"
 flexi_logger = "0.14.8"
 rusoto_core = "0.42.0"
 rusoto_s3 = "0.42.0"
-
-[[bin]]
-name = "commp"
-path = "src/main.rs"
-
-[[bin]]
-name = "bootstrap"

I could, but I wanted it to be more obvious; bootstrap is pretty generic.

vmx

comment created time in 8 days

PR closed rvagg/rust-fil-commp-generate

Add Lambda function & build + MerkleTree variant that splits memory between disk and memory

caveat emptor, this is some cowboy stuff from a not very experienced Rust programmer, but it seems to get the right results with my test data and I can squeeze it into Lambda's memory.

+1194 -259

4 comments

11 changed files

rvagg

pr closed time in 9 days

pull request comment rvagg/rust-fil-commp-generate

Add Lambda function & build + MerkleTree variant that splits memory between disk and memory

merged @ 98a1c7ff7036261095a82fe7a8ec7645e31c86c3

rvagg

comment created time in 9 days

delete branch rvagg/rust-fil-commp-generate

delete branch : use-bins

delete time in 9 days

PR closed rvagg/rust-fil-commp-generate

feat: move binaries source into the `src/bin` directory

This project is now a library which is used by two binaries named commp and lambda.

+0 -14

1 comment

4 changed files

vmx

pr closed time in 9 days

pull request comment rvagg/rust-fil-commp-generate

feat: move binaries source into the `src/bin` directory

k, restoring that [[bin]] with the new bin source gives me back bootstrap. merged @ 2d38e9a5eccc799ba380f6aa4360c19429aac1c9

vmx

comment created time in 9 days

push event rvagg/rust-fil-commp-generate

Rod Vagg

commit sha d43898c5e1b78c35740e38d991f579c108813f75

added lambda compile option incl docker and Makefile

view details

Rod Vagg

commit sha 98a1c7ff7036261095a82fe7a8ec7645e31c86c3

implement MerkleTree caching store that splits disk and memory

view details

Volker Mische

commit sha 2d38e9a5eccc799ba380f6aa4360c19429aac1c9

feat: move binaries source into the `src/bin` directory

This project is now a library which is used by two binaries named `commp` and `lambda`.

view details

push time in 9 days

push event rvagg/rust-fil-commp-generate

Rod Vagg

commit sha c54e967ec2bf6f7b7071f319a39a22309f50f4df

fixup! feat: move binaries source into the `src/bin` directory

view details

Rod Vagg

commit sha e3aebec7aec1589849bade8f261892817f22e425

fixup! implement MerkleTree caching store that splits disk and memory

view details

push time in 9 days

push event rvagg/rust-fil-commp-generate

Rod Vagg

commit sha c540d34ec2a7a4d077ef6a41a83608eb8cf7ada6

fixup! implement MerkleTree caching store that splits disk and memory

view details

push time in 9 days

push event rvagg/rust-fil-commp-generate

Volker Mische

commit sha f1b6dd9e4166f069abbdba3b36dee77005cf2eaa

feat: move binaries source into the `src/bin` directory

This project is now a library which is used by two binaries named `commp` and `lambda`.

view details

Rod Vagg

commit sha 14a680076e83a139066e2ce5e110b2d3311179aa

fixup! implement MerkleTree caching store that splits disk and memory

view details

push time in 9 days

Pull request review comment rvagg/rust-fil-commp-generate

Add Lambda function & build + MerkleTree variant that splits memory between disk and memory

 impl<E: Element> Store<E> for MultiStore<E> {
     fn read_range(&self, r: Range<usize>) -> Result<Vec<E>> {
         if r.start > DISK_MAX {
+            // entire range is in mem
             let nr = Range {
                 start: r.start - DISK_MAX,
                 end: r.end - DISK_MAX,
             };
-            self.1.read_range(nr)
+            self.mem.read_range(nr)
+        } else if r.end > DISK_MAX {
+            // split across disk and mem
+            let nrdisk = Range {
+                start: r.start,
+                end: DISK_MAX,
+            };
+            let nrmem = Range {
+                start: 0,
+                end: r.end - DISK_MAX,
+            };
+            let rdisk = self.mem.read_range(nrdisk).unwrap();

I think I've got it now ... I was getting the same CommP result regardless of whether this was switched around. My initial version of this had a panic when you queried a split range because <too hard>, but when I ran it, it panicked, so I had to implement it! Those two facts combined had me confused and worried, because it seemed like it didn't matter what data was being fed in.

However, I've gone over the code again and I think I had > and >= wrongly used. I also implemented the split copy in copy_from_slice, which is essentially the write version of this read_range (see 6ae7072). But now I have logging in both of those split blocks and it's no longer logging. I can't recall where I originally got the size of DISK_MAX from, 262144 * 48, but I think 262144 came from inspecting how it worked originally and it seeming to work in 262144-element chunks. Multiplied by 48 gave me roughly how much disk I wanted to use up. So 🤞 I think I'm hitting right on aligned slices and not needing these split read/write operations anymore. Hopefully my code now would survive if it did, though, but it's another reason we need to do a lot of testing of this stuff.
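For what it's worth, the boundary arithmetic can be sanity-checked in isolation. This is only a sketch of the index rule, not the MultiStore code itself: with zero-based element indices and a split point of DISK_MAX, index DISK_MAX is the first in-memory element, so the comparisons want to be >= rather than >, and a half-open range splits like this:

const DISK_MAX: usize = 262144 * 48;

// true if a single element index lives in the in-memory store
fn belongs_to_mem(index: usize) -> bool {
    index >= DISK_MAX // DISK_MAX itself is the first in-memory element
}

// split a half-open range [start, end) at the DISK_MAX boundary;
// either half may come back empty
fn split_range(start: usize, end: usize) -> ((usize, usize), (usize, usize)) {
    let disk = (start.min(DISK_MAX), end.min(DISK_MAX));
    let mem = (start.max(DISK_MAX) - DISK_MAX, end.max(DISK_MAX) - DISK_MAX);
    (disk, mem)
}

fn main() {
    assert!(!belongs_to_mem(DISK_MAX - 1) && belongs_to_mem(DISK_MAX));
    // a range that straddles the boundary needs one read from each store
    assert_eq!(split_range(DISK_MAX - 2, DISK_MAX + 3), ((DISK_MAX - 2, DISK_MAX), (0, 3)));
    // a range entirely below the boundary only touches the disk store
    assert_eq!(split_range(10, 20), ((10, 20), (0, 0)));
}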

rvagg

comment created time in 9 days

push event rvagg/rust-fil-commp-generate

Rod Vagg

commit sha 6ae7072609f738434a24279d0c642a6f0e2744f7

fixup! implement MerkleTree caching store that splits disk and memory

view details

push time in 9 days

Pull request review comment rvagg/rust-fil-commp-generate

feat: move binaries source into the `src/bin` directory

 log = "^0.4"
 flexi_logger = "0.14.8"
 rusoto_core = "0.42.0"
 rusoto_s3 = "0.42.0"
-
-[[bin]]
-name = "commp"
-path = "src/main.rs"
-
-[[bin]]
-name = "bootstrap"

this is still necessary though, it has to be bootstrap for Lambda; see also the Makefile, which bundles it into a ZIP.

vmx

comment created time in 9 days

issue comment nodejs/build

Tracking Jenkins changes

@mhdawson that job doesn't have it! It could be a newer feature and that job hasn't been touched since that feature was added.

See https://github.com/nodejs/jenkins-config-test/blob/c9a2f917b204e76735419bdeb0ef6873c22d1c28/jobs/node-test-commit-linux.xml#L91, it's apparently a plugin, <job-metadata plugin="metadata@1.1.0b">. job-info -> last-saved -> user -> {display-name, full-name}.

richardlau

comment created time in 9 days

pull request comment nodejs/node

net: autoDestroy Socket

@ronag just go to https://ci.nodejs.org/job/node-test-commit-windows-fanned/build and enter the details of the repo (yours) and git ref, leave the rebase bit blank. You should be able to do this for most jobs, but with these fanned jobs the parent jobs control the flow so you can't just run the child jobs.

ronag

comment created time in 9 days

pull request comment nodejs/node

http2: make compat finished match http/1

Thanks @ronag, don't treat this as a blocker for your code, it's a Jenkins problem on our end.

@nodejs/build we're having a commit-ref propagation problem, in https://ci.nodejs.org/job/node-test-pull-request/29176/:

  • https://ci.nodejs.org/job/node-cross-compile/27729/nodes=cross-compiler-armv7-gcc-6/console creates jenkins-node-test-commit-arm-fanned-e6a54b67290f8bab3de5b88504e531c20271f1aa-binary-pi1p/cc-armv7
  • https://ci.nodejs.org/job/git-nodesource-update-reference/23898/console syncs jenkins-node-test-commit-arm-fanned-e6a54b67290f8bab3de5b88504e531c20271f1aa-binary-pi1p/cc-armv7 👍
  • https://ci.nodejs.org/job/node-test-binary-arm-12+/4402/RUN_SUBSET=0,label=pi3-docker/console tries to pull jenkins-node-test-commit-arm-fanned-db24c2588175371c1ec2ec767541583ddf7fa0b1-binary-pi1p/cc-armv7 👎

anyone got time to look at this one? @joaocgreis I bet you'd have the most insight and could see the problem quickest if you're available.

ronag

comment created time in 10 days

issue comment multiformats/js-cid

Follow the spec for decoding a string CID

> new CID('QmV88khHDJEXi7wo6o972MZWY661R9PhrZW6dvpFP6jnMn').multihash.join(',')
'18,32,100,204,247,215,142,116,181,80,84,227,13,252,171,193,40,208,244,156,33,84,136,182,185,131,50,55,21,110,224,183,153,169'
> new CID('QmV88khHDJEXi7wo6o972MZWY661R9PhrZW6dvpFP6jnMn').multihash.length
34
> new CID('QmV88khHDJEXi7wo6o972MZWY661R9PhrZW6dvpFP6jnMn').toString()
'QmV88khHDJEXi7wo6o972MZWY661R9PhrZW6dvpFP6jnMn'
> new CID('QmV88khHDJEXi7wo6o972MZWY661R9PhrZW6dvpFP6jnMn').toString().length
46

v0 is encoded in byte form as a pure multihash, hence that rule #2. v1 adds two varints at the front of the byte form.

I'm doing byte-decoding of CIDs for my JS CAR file format library, which handles this: https://github.com/rvagg/js-datastore-car/blob/9eb82a9c88ac4051f8c93901da366f82f910e49c/lib/coding-browser.js#L66-L71 Some Go code doing the same thing: https://github.com/ipfs/go-car/blob/f188c0e24291401335b90626aad4ba562948b525/util/util.go#L23-L26

I don't think this issue is calling for this second case; it's just about being more strict in checking the first two characters rather than jumping straight into a base58 decode.

Although, it would have been great to be able to call into js-cid for the decode I did in js-datastore-car; go-car has the same problem where it's doing work that should be exposed by go-cid. "Here's a byte array, it could be v0 or v1, figure it out and give me a CID". The problem we have with the CAR format (and likely other forms of on-the-wire data) is that we don't have a neat bundle of bytes to check. You have a stream that you have to chew at and test as you go: it could be 32 bytes, or it could be a lot longer with some crazy-long hash, but you'll only find out once you figure out whether it's not a v0 and so has some varints at the front that will tell you about the length.
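For illustration only (this isn't the js-cid API, and the varint reader is simplified), the byte-level rule being described looks roughly like this in Rust; the CIDv0 check relies on the fixed sha2-256 multihash prefix (0x12, 0x20):

// decode an unsigned LEB128 varint; returns (value, bytes consumed)
fn read_varint(buf: &[u8]) -> Option<(u64, usize)> {
    let mut value: u64 = 0;
    for (i, b) in buf.iter().enumerate().take(9) {
        value |= u64::from(b & 0x7f) << (7 * i);
        if b & 0x80 == 0 {
            return Some((value, i + 1));
        }
    }
    None // ran out of bytes, or too long for this sketch
}

// returns (version, codec, multihash bytes) for a binary CID
fn decode_cid(bytes: &[u8]) -> Option<(u64, u64, &[u8])> {
    // CIDv0: a bare sha2-256 multihash, 0x12 0x20 plus a 32-byte digest = 34 bytes
    if bytes.len() == 34 && bytes[0] == 0x12 && bytes[1] == 0x20 {
        return Some((0, 0x70 /* dag-pb is implicit for v0 */, bytes));
    }
    // CIDv1: <version varint><codec varint><multihash>
    let (version, n1) = read_varint(bytes)?;
    let (codec, n2) = read_varint(&bytes[n1..])?;
    Some((version, codec, &bytes[n1 + n2..]))
}

fn main() {
    let mut v0 = vec![0x12, 0x20];
    v0.extend([0u8; 32]);
    assert_eq!(decode_cid(&v0).map(|(v, c, _)| (v, c)), Some((0, 0x70)));
}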

alanshaw

comment created time in 10 days

push event rvagg/rust-fil-commp-generate

Rod Vagg

commit sha 7f71cefc16e9ad0863b74ead080c78c82793bdfe

fixup! implement MerkleTree caching store that splits disk and memory

view details

push time in 10 days

Pull request review comment rvagg/rust-fil-commp-generate

Add Lambda function & build + MerkleTree variant that splits memory between disk and memory

[quoted review diff: src/main.rs -- the old PadReader, padded_size and local_generate_piece_commitment_bytes_from_source code is removed in favour of mod commp plus the log and simple_logger imports]

nevermind, figured it out:

use lambda_runtime::error::HandlerError;
use lambda_runtime::{lambda, Context};
use serde_derive::{Deserialize, Serialize};
rvagg

comment created time in 10 days

push event rvagg/rust-fil-commp-generate

Rod Vagg

commit sha 355c027f1da462bd1eb4ad276f72005dbc9a051e

fixup! implement MerkleTree caching store that splits disk and memory

view details

push time in 10 days

Pull request review comment rvagg/rust-fil-commp-generate

Add Lambda function & build + MerkleTree variant that splits memory between disk and memory

[quoted review diff: src/commp.rs -- adds PadReader, padded_size, the local and multistore merkle commitment generators, the padded() helper and an #[allow(dead_code)] attribute]

There are 2 bins in here, commp (main.rs) and bootstrap (lambda.rs). Both use commp.rs, but lambda.rs only uses one of the exposed pub fns, and without these dead_code attributes I get compiler warnings when compiling bootstrap. Is there a better way of handling this? It does seem a bit silly.
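One possible alternative, sketched under the assumption that the shared code becomes a library target that both binaries depend on (which the "this project is now a library" direction suggests anyway): the dead_code lint doesn't fire for pub items of a library crate, so the per-function attributes could go away. The crate name below is hypothetical:

// src/lib.rs -- declare the shared modules once, at the crate root
pub mod commp;
pub mod multistore;

// src/bin/commp.rs and src/bin/lambda.rs then import only what they need, e.g.
// use commp_generate::commp::generate_commp_storage_proofs_mem;
// and pub functions that a given binary doesn't use no longer warn.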

rvagg

comment created time in 10 days

Pull request review comment rvagg/rust-fil-commp-generate

Add Lambda function & build + MerkleTree variant that splits memory between disk and memory

[quoted review diff: src/multistore.rs -- the MultiStore wrapper around DiskStore and VecStore with DISK_MAX = 262144 * 48, where the disk-side half of a split read_range is read from self.1 (the VecStore) rather than the DiskStore]

Yes, you're right, and this is concerning since I was getting the right numbers out of it! Now I have to figure out how, because I'm pretty sure split ranges were involved in my test data.

rvagg

comment created time in 10 days

Pull request review comment rvagg/rust-fil-commp-generate

Add Lambda function & build + MerkleTree variant that splits memory between disk and memory

+// mostly a copy of https://github.com/filecoin-project/merkle_light/blob/master/src/store/vec.rs
+
+use std::ops::Range;
+
+use anyhow::Result;
+
+use merkletree::merkle::Element;
+use merkletree::store::{DiskStore, Store, StoreConfig, VecStore};
+
+#[derive(Debug)]
+pub struct MultiStore<E: Element>(DiskStore<E>, VecStore<E>);

nice!

rvagg

comment created time in 10 days

Pull request review comment rvagg/rust-fil-commp-generate

Add Lambda function & build + MerkleTree variant that splits memory between disk and memory

[quoted review diff: src/main.rs -- the old PadReader, padded_size and local_generate_piece_commitment_bytes_from_source code is removed in favour of mod commp plus the log and simple_logger imports]

why do I still need these bits:

#[macro_use]
extern crate lambda_runtime as lambda;
#[macro_use]
extern crate serde_derive;

? Is log set up differently, such that info!() and friends are exported in some special way and the serde and lambda macros aren't?
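If it helps: on the 2018 edition the log macros can be use-imported the same way as the serde/lambda ones shown above, so #[macro_use] isn't required for any of them. A minimal sketch, assuming the same log and simple_logger dependencies:

use log::info; // instead of #[macro_use] extern crate log

fn main() {
    simple_logger::init_with_level(log::Level::Info).unwrap();
    info!("info! works as a normal use-imported macro");
}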

rvagg

comment created time in 10 days

Pull request review comment rvagg/rust-fil-commp-generate

Add Lambda function & build + MerkleTree variant that splits memory between disk and memory

[quoted review diff: src/lambda.rs -- the Lambda handler that deserialises a CommPRequest, fetches the object from S3 with rusoto and builds a CommPResponse, ending at the .expect("Couldn't GET object") call]

Easier said than done! Error handling and propagation is one thing I can't get a grip on yet, and this use of anyhow seems to complicate things (I'm only pulling it in because some of rust-fil-proofs dabbles with it). expect, unwrap and ? are still fairly opaque to me as to where they should, can and can't be used.
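As a rough illustration of the trade-off (not the actual handler code, and the file name is made up): unwrap/expect panic on failure, while ? propagates the error to the caller and needs the error types to line up, or an explicit conversion:

use std::fs;
use std::io;
use std::num::ParseIntError;

fn read_threshold(path: &str) -> Result<u64, io::Error> {
    let text = fs::read_to_string(path)?; // `?` is fine here: the error is already io::Error
    text.trim()
        .parse()
        // the parse error isn't an io::Error, so convert it before returning
        .map_err(|e: ParseIntError| io::Error::new(io::ErrorKind::InvalidData, e))
}

fn main() {
    // expect() turns any of the failures above into a panic with a message
    let threshold = read_threshold("threshold.txt").expect("couldn't read threshold");
    println!("threshold = {}", threshold);
}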

rvagg

comment created time in 10 days

Pull request review comment rvagg/rust-fil-commp-generate

Add Lambda function & build + MerkleTree variant that splits memory between disk and memory

[quoted review diff: src/commp.rs -- the generate_commp_filecoin_proofs, generate_commp_storage_proofs and generate_commp_storage_proofs_mem functions, including the local_multistore_generate_piece_commitment_bytes_from_source(...).unwrap() call discussed below]

Same thing as the previous Result garbage: the trait std::convert::From<anyhow::Error> is not implemented for std::io::Error. I have no idea.

rvagg

comment created time in 10 days

Pull request review comment rvagg/rust-fil-commp-generate

Add Lambda function & build + MerkleTree variant that splits memory between disk and memory

[quoted review diff: src/commp.rs -- imports and local_generate_piece_commitment_bytes_from_source, ending at the comm_p.write_bytes(&mut comm_p_bytes).expect("borked at extracting commp bytes") call]

I can't work out how to use this in my generate_commp_filecoin_proofs() implementation, which returns a plain Result that comes out of generate_piece_commitment(). Calling context() on it gives me "the trait std::convert::From<anyhow::Error> is not implemented for std::io::Error", which I suppose means I need to wrap it somehow, but I can't figure that out. So I still have that expect() in place. Let me know if you have a better idea for it.
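For what it's worth, a minimal sketch (my own, not code from this PR) of one way to bridge the two error types: std::io::Error::new accepts anything convertible into a boxed error, and anyhow::Error converts into one, so an anyhow::Result can be mapped into an io::Result without keeping the expect(). The helper and the fallible() function named below are hypothetical.

use std::io;

// Hypothetical helper: wrap an anyhow::Error in an io::Error so that `?`
// works inside a function returning io::Result.
fn to_io_error(err: anyhow::Error) -> io::Error {
    io::Error::new(io::ErrorKind::Other, err)
}

// Hypothetical usage, assuming some `fallible() -> anyhow::Result<T>`:
// let value = fallible().map_err(to_io_error)?;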

rvagg

comment created time in 10 days

Pull request review comment rvagg/rust-fil-commp-generate

Add Lambda function & build + MerkleTree variant that splits memory between disk and memory

+use std::cmp;
+use std::convert::TryFrom;
+use std::io;
+use std::io::{BufReader, Cursor, Read, Seek, SeekFrom};
+
+use filecoin_proofs::constants::DefaultPieceHasher;
+use filecoin_proofs::fr32::write_padded;
+use filecoin_proofs::{
+    generate_piece_commitment, PaddedBytesAmount, SectorSize, UnpaddedBytesAmount,
+};
+use storage_proofs::fr32::Fr32Ary;
+use storage_proofs::hasher::{Domain, Hasher};
+use storage_proofs::pieces::generate_piece_commitment_bytes_from_source;
+use storage_proofs::util::NODE_SIZE;
+
+#[path = "multistore.rs"]

If I remove #[path = "multistore.rs"] it won't compile. I can't figure out how to make this module live in its own file without that attribute, or without doing the whole mod-directory layout (which I still don't understand either).
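In case it helps, a sketch of why the attribute is needed, assuming the layout below (the file names are my guess, not taken from this repo): module paths are resolved relative to the declaring file, so a non-root module that declares mod multistore; looks for the file in a subdirectory named after itself.

// Assumed layout (hypothetical):
//
//   src/
//     main.rs        // declares `mod commp;`
//     commp.rs       // this file; a bare `mod multistore;` here makes rustc
//                    // look for src/commp/multistore.rs
//     multistore.rs  // where the code actually lives
//
// Hence the override, with the path taken relative to the declaring file:
#[path = "multistore.rs"]
mod multistore;

// Moving the file to src/commp/multistore.rs (the "mod directory" layout)
// would let a plain `mod multistore;` work without the attribute.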

rvagg

comment created time in 10 days

issue closed nodejs/build

Centos 7 socket hang up multiple times on 12.16.1 release

My gut says this is an infra issue, but it only appears to be happening with the 12.16.1 release right now :(

https://ci.nodejs.org/job/node-test-commit-linux/nodes=centos7-64-gcc6/32867/console https://ci.nodejs.org/job/node-test-commit-linux/nodes=centos7-64-gcc6/32868/console https://ci.nodejs.org/job/node-test-commit-linux/nodes=centos7-64-gcc6/32869/console

10:43:47 Error: socket hang up
10:43:47     at connResetException (internal/errors.js:604:14)
10:43:47     at TLSSocket.socketCloseListener (_http_client.js:400:25)
10:43:47     at TLSSocket.emit (events.js:333:22)
10:43:47     at net.js:668:12
10:43:47     at TCP.done (_tls_wrap.js:556:7) {
10:43:47   code: 'ECONNRESET'
10:43:47 }

closed time in 10 days

MylesBorins

issue comment nodejs/build

Centos 7 socket hang up multiple times on 12.16.1 release

Those are all on the same machine, test-softlayer-centos7-x64-1, so there's at least a chance it's an infra problem. The build history of the two machines of this type makes it look like this one might be flakier than the other, but we can't isolate 12.x runs unfortunately:

https://ci.nodejs.org/computer/test-rackspace-centos7-x64-1/builds https://ci.nodejs.org/computer/test-softlayer-centos7-x64-1/builds

I've updated the machine, cleared the workspace and rebooted, 🤞

MylesBorins

comment created time in 10 days

pull request comment nodejs/build

ansible: metrics server (WIP) & removal of Cloudflare cache bypass

No movement since last time; it still needs me to allocate time for it.

rvagg

comment created time in 10 days

issue comment nodejs/node-gyp

request has gone into maintenance mode. Maybe replace it.

Yeah, maybe not: npm uses its own library, a wrapper around a fork of node-fetch, which isn't an ideal tangle to get involved in: https://github.com/npm/make-fetch-happen

We could consider bent but we have yet to embrace async/await in node-gyp. We're either going to have to do an awkward piece-wise migration or go whole-hog at some point.

Too hard to think about right now! Proposals are welcome!

Tchiller

comment created time in 10 days

issue comment nodejs/node-gyp

request has gone into maintenance mode. Maybe replace it.

Yeah, thanks @Tchiller, it got deprecated last week after a long warning period (mind you, I didn't pick up on the fact that it would be properly deprecated in npm so this was a bit of a surprise).

We don't have a plan to deal with this; maybe we'll follow what npm does (or is doing, I don't know the status). This will take someone doing the work and making sure it's maximally compatible back to at least Node 10.x (yes, 10 is going away soon, but node-gyp can't be as aggressive as a lot of software since it's a piece of core infra).

Tchiller

comment created time in 10 days

issue closed nodejs/build

Why did download metrics drop significantly around October 2019?

The download counts for Node.js dropped significantly around October 2019, as shown in https://nodejs.org/metrics/. What happened? Did Node.js lose half of its users in one day?

[chart: total downloads]

closed time in 10 days

golopot

push event rvagg/rust-fil-commp-generate

Rod Vagg

commit sha 844cb04e323db144968beae6bf98daf88cf7155f

fixup! implement MerkleTree caching store that splits disk and memory

view details

push time in 13 days

pull request comment rvagg/rust-fil-commp-generate

Add Lambda function & build + MerkleTree variant that splits memory between disk and memory

The storage code it's implementing against is in https://github.com/filecoin-project/merkle_light/tree/v0.15.2/src/store FYI

rvagg

comment created time in 13 days

pull request comment rvagg/rust-fil-commp-generate

Add Lambda function & build + MerkleTree variant that splits memory between disk and memory

With a plain VecStore, calculating commP for a 1G piece takes just under 3,100MB of memory, between the padded copy of the piece and the Merkle tree cached in the VecStore.

Lambda has a limit of 3,008MB of memory and 500MB of disk, and that has to cover whatever else is needed to run the executable and the dynamic libraries it pulls in. So we're too big in memory.

This PR includes a MultiStore that puts around 375MB in a DiskStore and the rest in a VecStore, so we can use that scratch disk space to scrape under the limit.
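As a rough sanity check (my own arithmetic, assuming the whole tree plus roughly the padded input sit in memory at once), the numbers line up with the ~3,100MB peak:

// Back-of-envelope sketch, not code from this PR.
fn main() {
    let padded_mb = 1065.4_f64; // padded piece size reported for the 1G test file
    let node_size = 32.0_f64; // bytes per merkle node
    let leaves = padded_mb * 1024.0 * 1024.0 / node_size;
    // a full binary tree holds roughly 2x the leaf count in nodes overall
    let tree_mb = 2.0 * leaves * node_size / 1024.0 / 1024.0;
    println!("tree: ~{:.0}MB", tree_mb); // ~2,130MB
    println!("tree + padded input: ~{:.0}MB", tree_mb + padded_mb); // ~3,196MB
    // close enough to the observed peak to explain why a plain VecStore
    // blows past Lambda's 3,008MB memory limit
}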

rvagg

comment created time in 13 days

PR opened rvagg/rust-fil-commp-generate

Add Lambda function & build + MerkleTree variant that splits memory between disk and memory

Caveat emptor: this is some cowboy stuff from a not-very-experienced Rust programmer, but it seems to get the right results with my test data and I can squeeze it into Lambda's memory.

+1188 -221

0 comment

10 changed files

pr created time in 13 days

create branch rvagg/rust-fil-commp-generate

branch : rvagg/lambda+memory-management

created branch time in 13 days

push event ipld/roadmap

Rod Vagg

commit sha 2322ac22dac31c239925c9e1e91ac0ba39be50eb

Added Mid-q scores for 2020/Q1

view details

push time in 13 days

issue comment nodejs/build

Add Universal Base Image (UBI) build to CI system

Nice! Do you have time to set up a preliminary version of a Dockerfile for this? See the Dockerfiles in here: https://github.com/nodejs/build/tree/master/ansible/roles/docker/templates; the one it's probably closest to is ubuntu1804_sharedlibs.Dockerfile.j2, where we previously had FIPS tests running. Basically the image needs to be set up so that a Node repo can be mounted and compiled, with the environment variables telling it where to link. That might not be needed for this image though, if it's using the default OpenSSL, I suppose.

Do you know if you can FROM registry.access.redhat.com/ubi8:8.1 in a Dockerfile? If not then we'll have to do some extra Ansible work to pull from a custom registry.

danbev

comment created time in 14 days

issue comment nodejs/build

provision mac minis at nearform as osx machines in jenkins CI

@AshCripps can we have an update on this? It would be good to get a new release machine up to at least start getting 13.x on newer macOS & Xcode with notarization. Ideally we could get two new release machines, one in nearForm and one in Macstadium and run all release lines through the same setup.

mhdawson

comment created time in 14 days

pull request comment nodejs/node

build: macOS package notarization

@AndrewTrapani this PR should be doing that already, it's using gon and the config in here has staple = true.

@MajuMadhusudanan no ETA, we're waiting on some infra changes.

rvagg

comment created time in 14 days

issue comment nodejs/build

node-www out of space?

^ that would be a good basis for a PR for a new policy doc so we can nail it down and agree on what we're going to do (if anything).

For now I've finalised an upgrade that doubles the size of that disk, so we have 2TB of headroom before needing to take more action. The old disk is still mounted for reference but I'll discard it at some point in the near future.

BethGriggs

comment created time in 15 days

pull request comment ipld/ipld

Bring entry point docs inline with reality

can + worms

How committed are you to this right now? I think the best thing you could do is invest in some quality introductory text, something we can reuse at all our main entry points.

That "Codec Implementations" could be dropped, doing it across language isn't useful when you think of the audience (i.e. they're mostly going to be language-specific). So the per-language sections are a good idea but they're going to need to have a short explanation of the flux that's going on in each of them. "Existing JS stuff start here ... have a look at newer stuff here ..." which could be very similar for Go. But you have to explain why we're not evolving older stuff and replacing it all with newer things and give a good explanation of the state of that work .. and then we need to keep it updated! But I suppose we need to do that somewhere anyway.

mikeal

comment created time in 15 days

issue comment nodejs/build

Node.js Foundation Build WorkGroup Meeting 2020-02-11

doh! so sorry, this one fell off my radar.

mhdawson

comment created time in 15 days

PR closed rvagg/rust-fil-commp-generate

storage-proofs local / reimplemented version with VecStore

This adds a third method of generating commP by locally reimplementing the storage-proofs code and forcing the use of a memory cache rather than a disk cache.

With this method, the only significant thing it uses in rust-fil-proofs is fr32, which does the padding of the piece (adding 2 bits per 254 bits). It reimplements generate_piece_commitment_bytes_from_source (complete copypasta), but in a form that can call a version of MerkleTree (from the merkletree library) that uses VecStore rather than the DiskStore version that's hardwired into rust-fil-proofs (in merkle.rs).

So now, the three methods give us the same result for the 1G file, monitoring disk and memory usage:

filecoin-proofs (calling in to filecoin_proofs::generate_piece_commitment() to do padding plus commp):

	Size: 1007.9 MB
	Padded Size: 1065.4 MB
	Piece Size: 1065.4 MB
	CommP 1232be3a006dfb6e978a5aa4c3da433e78a660c204ea40bbd3fe31ba67c03d33
Took ~13 seconds
Peak disk 3069 MB
Peak mem 4 MB

storage-proofs (calling in to storage_proofs::generate_piece_commitment_bytes_from_source() to generate commp after generating the padded version in an array locally, as per #1):

	Size: 1007.9 MB
	Padded Size: 1065.4 MB
	Piece Size: 1065.4 MB
	CommP 1232be3a006dfb6e978a5aa4c3da433e78a660c204ea40bbd3fe31ba67c03d33
Took ~7 seconds
Peak disk 2031 MB
Peak mem 1029 MB

storage-proofs local (reimplementing storage_proofs::generate_piece_commitment_bytes_from_source() but calling MerkleTree with a VecStore cache):

	Size: 1007.9 MB
	Padded Size: 1065.4 MB
	Piece Size: 1065.4 MB
	CommP 1232be3a006dfb6e978a5aa4c3da433e78a660c204ea40bbd3fe31ba67c03d33
Took ~7 seconds
Peak disk 0 MB
Peak mem 2995 MB

Keep in mind that this is just for these tasks; any additional functionality to talk to S3 and manage AWS credentials would be on top of that if this were to go into Lambda. So we're edging right up to the 3,008MB limit if it were run there, and this will change with input file size too, of course.

+90 -16

2 comments

4 changed files

rvagg

pr closed time in 16 days

delete branch rvagg/rust-fil-commp-generate

delete branch : rvagg/storage-proofs-local

delete time in 16 days

push event rvagg/rust-fil-commp-generate

Rod Vagg

commit sha 21715d5573866dbee2a403f69d79e9e60a2e2a02

storage-proofs local / reimplemented version with VecStore

view details

push time in 16 days

issue comment filecoin-project/specs

Increase detail about serialization/encoding in the spec

@anorth you might be using uint64s in Go but your encoder is doing smallest-possible encoding at the block level: https://github.com/polydawn/refmt/blob/3d65705ee9f12dc0dfcc0dc6cf9666e97b93f339/cbor/cborEncoderTerminals.go#L12-L33, as per the suggested canonical CBOR spec. So it's only a true uint64 in the block if it can't be represented in fewer than 64 bits. See https://github.com/ipld/specs/pull/236 for more discussion around IPLD representation strictness in CBOR.

In untyped or loosely typed languages this starts to get blurry. In JavaScript a Number is a Number; it just behaves a little differently above 32 bits and gets weird beyond Number.MAX_SAFE_INTEGER.

In most cases this becomes a language level concern. On the block it's a different matter.

If an int is always going to be represented as 64 bits, then that becomes a documentation-level concern I think: explaining why that is the case and why any implementation needs to make room for very large numbers that may require fancier abstractions.
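To make the "smallest possible" rule concrete, here's a sketch of my own (not code from refmt or any of the projects above) of CBOR major type 0 encoding, where the same logical uint64 value takes a different number of bytes on the block depending on its magnitude:

// Encode an unsigned integer as canonical (smallest-form) CBOR major type 0.
fn encode_cbor_uint(value: u64) -> Vec<u8> {
    match value {
        0..=23 => vec![value as u8], // value embedded in the initial byte
        24..=0xff => vec![0x18, value as u8], // 1-byte argument
        0x100..=0xffff => {
            let mut v = vec![0x19]; // 2-byte argument
            v.extend_from_slice(&(value as u16).to_be_bytes());
            v
        }
        0x1_0000..=0xffff_ffff => {
            let mut v = vec![0x1a]; // 4-byte argument
            v.extend_from_slice(&(value as u32).to_be_bytes());
            v
        }
        _ => {
            let mut v = vec![0x1b]; // 8-byte argument: a "true" uint64 on the wire
            v.extend_from_slice(&value.to_be_bytes());
            v
        }
    }
}

fn main() {
    for &v in [10u64, 1_000, 1_000_000, 10_000_000_000].iter() {
        println!("{} -> {} bytes", v, encode_cbor_uint(v).len());
    }
    // 10 -> 1 byte, 1000 -> 3 bytes, 1000000 -> 5 bytes, 10000000000 -> 9 bytes:
    // only the last one occupies a full 64-bit argument in the block.
}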

pooja

comment created time in 16 days

issue comment ipld/js-block

Work with react native

Sorry, I don't think we have anyone with any experience with React Native. My guess is that it's a conflict between being treated as a browser vs Node and there's a mixup in that process somewhere.

I'd say your first problem is arising out of https://github.com/ipld/js-get-codec which has different bundling rules for the browser vs Node.js. You may need to manually insert the codec you want rather than letting it choose for you.

The second problem is probably https://github.com/multiformats/js-multihashing-async, which also has browser-specific rules that may need to be overridden (see package.json#browser, which diverts sha to browser code that uses crypto.subtle, which isn't available in Node.js).

IanPhilips

comment created time in 16 days

issue comment nodejs/Release

Trim active releasers list

Sounds fine to me. It'd just mean keeping the secrets directory gpg keys in sync with the GitHub team, and someone has to be on the hook for the yearly checkup, team pruning, rotation and gpg secrets key syncing. Someone has to be on the hook for something, pick your poison!

rvagg

comment created time in 16 days

issue comment nodejs/node

enabling eslint's prefer-const rule

bleh, I'm just a slave to whatever standard tells me these days

misterdjules

comment created time in 16 days

issue comment nodejs/node

Node v10.19.0-linux-armv6l cannot be run on Debian Jessie

From the 10.19.0 build logs (here but not publicly accessible):

10:31:09 ++ echo 'Using compiler at:     /opt/raspberrypi/rpi-newer-crosstools/x64-gcc-4.9.4-binutils-2.28/arm-rpi-linux-gnueabihf/bin/arm-rpi-linux-gnueabihf-gcc'
10:31:09 Using compiler at:     /opt/raspberrypi/rpi-newer-crosstools/x64-gcc-4.9.4-binutils-2.28/arm-rpi-linux-gnueabihf/bin/arm-rpi-linux-gnueabihf-gcc
10:31:09 ++ echo 'Using commpiler flags: -march=armv6zk'
10:31:09 Using commpiler flags: -march=armv6zk
...
10:31:09 Compiling with GCC 4.9.4
10:31:09 + ccache /opt/raspberrypi/rpi-newer-crosstools/x64-gcc-4.9.4-binutils-2.28/arm-rpi-linux-gnueabihf/bin/arm-rpi-linux-gnueabihf-gcc -march=armv6zk --version
10:31:09 arm-rpi-linux-gnueabihf-gcc (crosstool-NG crosstool-ng-1.23.0) 4.9.4

The crosstools binaries and config used for this are at https://github.com/rvagg/rpi-newer-crosstools

I think these might be pulling in a newer libstdc++ than you get by default on Jessie, which is probably the problem here.

Maybe try apt-get install libstdc++6. See https://github.com/nodejs/build/blob/master/ansible/roles/jenkins-worker/templates/rpi_jessie.Dockerfile.j2 for our test config for these binaries. It's based on Raspbian, and pulling in g++-4.9 will be pulling in that libstdc++6.

bertlea

comment created time in 16 days

issue comment nodejs/build

Proposal that release team have more access to release resources

Sufficient access to solve #2162 to remove old rcs.

That's not how it was resolved though: we haven't removed anything in the release directory, and I'm still hesitant to give them more ability to mess with live files unless we're clear on policies about what needs to be kept and what doesn't.

In fact, the main way I initially freed up space was to remove things that were stale in staging. The release team has access to do this, but the setup on the server is not simple and I doubt anyone would be brave enough to go and do such a thing.

If we can find an intersection between release team and sysadmin expertise and knowledge of how these things are set up then we should expand in that direction. Currently this role is being served by non-release team members (well non-active): myself, Michael and Joao (Johan has access too and could probably figure things out in an emergency). We're roughly distributed around the globe so mostly have good coverage when things go bad.

We don't really have well-established channels for raising problems like this (I only discovered it when I went through my GitHub notifications late in the day, a few hours after I was active and could have taken care of it!). Maybe that's something we could work on too?

Again, I perhaps misunderstand the release team's powers, but if they have the ability to push binaries to node-www, shouldn't the team be trimmed to ones that actually need to do that?

Well, this gets to an important point for this discussion: that team is not a very organised team, it's a loose collection of people who have been given access at some point by the TSC. It merged with the LTS WG (IIRC) but I'm not sure you could properly call it an active working group these days. The team itself doesn't put much effort into trimming its list; that's arguably not even part of its role. If we raise the level of access of this team then we'd probably want it to be more organised and strict in its delegation of responsibility. Maybe this is a TSC role though? I'm not super keen for Build to take on a curation role for Release membership.

  • See https://github.com/nodejs/release#lts-team-members for the list of members of that team (is that even an accurate representation of the GitHub team?). Compare to the people you know who have released in the past 12 months. Even compare to people active in this org!
  • See https://github.com/nodejs/Release/issues/499 for my recent attempt to clean this up. I don't have a lot of clarity in those responses and haven't yet taken action on it. But action based on that is going to be me taking a strong line with people who haven't responded.

Also, a bunch of SSH keys were put on the server without identifiers, further complicating the trimming process.

(This would have been a nice place for KeyBox to be inserted).

sam-github

comment created time in 16 days

issue comment filecoin-project/specs

Increase detail about serialization/encoding in the spec

A set of PRs I opened against the old specs tried to do this, and some new special types in IPLD Schemas (like byteprefix) came out of them:

  • https://github.com/filecoin-project/specs/pull/514
  • https://github.com/filecoin-project/specs/pull/515
  • https://github.com/filecoin-project/specs/pull/516
  • https://github.com/filecoin-project/specs/pull/517

I wouldn't mind revisiting this soon; if someone else gets to it before me, feel free to rope me in, I'd love to help.

One thing we lack is any kind of language-specific annotation, so if an Int is meant to be a UInt64 then that detail needs to live adjacent to the Schema language. We're getting toward this kind of detail with the Go codegen work, and I think we still suspect there will be some kind of annotation add-on to IPLD Schemas to hint at types that are more specific than the IPLD data model. Part of the intention of IPLD Schemas is that they serve as a documentation tool and can be surrounded by additional details about higher-level concerns.

pooja

comment created time in 16 days

issue comment nodejs/build

Platform requirements for Node.js 14

General question: are we aware of any new compiler (C++) constraints that may be heading our way via V8? Are we going to struggle at any point in 14's lifetime with GCC 6? @targos you seem to be on the ball with this kind of stuff, any ideas?

Ubuntu

Ubuntu tends to be on the easier end of the Linux support spectrum for us, mainly because of ubuntu-toolchain-r. But 16.04 is only publicly supported until early next year; the 2024 timeframe is for ESM (Extended Security Maintenance), which is a paid extra. I tried once to get Canonical to sponsor the project with some free ESM licenses but that never eventuated. So we probably shouldn't lock ourselves into supporting an OS that isn't getting security updates. +1 to dropping it for simplification.

Re the 18.04 machines you listed @sam-github, the ones with "docker" in their name are docker hosts and don't run tests bare. So we only have 2 x64 machines in rotation and even the armv7l ones are running Debian images! 18.04 is a good default for infra-type machines for now. That should switch to 20.04 soon.

macOS

  • Notarization places strict requirements on our release machines as noted
  • Supporting older versions ties up resources
  • We experience very few (I can remember one) issues where older versions have needed special treatment. Xcode seems to do a good job with compatibility.

I'd be in favour of raising our minimum tested to 10.13 (High Sierra) and attempting to get all our releases onto 10.15 (Catalina).

There's also -mmacosx-version-min that we need to consider, in common.gypi:

'MACOSX_DEPLOYMENT_TARGET': '10.10',      # -mmacosx-version-min=10.10

Let's bump that again. I'd be fine with taking it to 10.13 as well.

Linux (x64)

I'm in favour of bumping our release machines to devtoolset-8 for 14+. This is probably complicated for non-x64 but we're not raising compiler minimums so they should be fine on devtoolset-6 as they are now. But if we can take advantage of a newer compiler toolchain while maintaining our current libc compatibility then we should probably take it.

richardlau

comment created time in 16 days

issue comment nodejs/build

ci,libuv: http://ci.nodejs.org/downloads/zos/gyp.tar.gz?

Re macOS + z/OS: 🤷‍♂ another Jenkins mystery we'll never solve I suspect. Let's get that switched, perhaps a genuine mistake on someone's part.

Re Windows, what's the problem there? Don't you need to add in a GYP to make it compile with GYP? IIRC we set up a bunch of the libuv build machines with static GYP installs that we copied into place.

Re GYP version:

  • https://github.com/nodejs/node-gyp/pull/1975
  • https://github.com/nodejs/node/pull/30563

Hopefully we'll have one true GYP sooner rather than later. I want to cut a node-gyp@7 with that PR merged soon.

bnoordhuis

comment created time in 16 days

Pull request review comment ipld/team-mgmt

Add weekly sync meeting notes 2020-02-10

+# 🖧 IPLD Weekly Sync 🙌🏽 2020-02-10
+
+- **Lead:** @vmx
+- **Notetaker:**
+- **Attendees:**
+  - Tim Caswell

s/Tim Caswell/@creationix

vmx

comment created time in 16 days
