Mikeal Rogers (mikeal) · Protocol Labs · San Francisco · http://mikealrogers.com · Creator of NodeConf and request.

apache/nano 1127

Nano is now part of Apache CouchDB. Repo moved to https://GitHub.com/apache/couchdb-nano

indutny/caine 141

Friendly butler

ipfs/blog 91

Source for the IPFS Blog

dscape/spell 87

spell is a javascript dictionary module for node.js, and the browser (including amd)

compretend/compretend-img 24

Image element that understands what is in the image, powered by ML.

jhs/couchdb 15

Mirror of Apache CouchDB

ipfs/metrics 11

Regularly collect and publish metrics about the IPFS ecosystem

ipld/roadmap 10

IPLD Project Roadmap

isaacs/http-duplex-client 10

Duplex API for making an HTTP request (write the req, read the response)

ipld/js-ipld-stack 7

EXPERIMENTAL: Next JS IPLD authoring stack

push event ProtoSchool/protoschool.github.io

Deployment Bot (from Travis CI)

commit sha f6f1610c70e686ab377c21b0d83b457fe13f7a17

Deploy proto.school to github.com/ProtoSchool/protoschool.github.io.git:master

push time in 2 hours

push event ipld/docs

Mikeal Rogers

commit sha 20fd1993cf895942e0aa1178c15609526a8351cf

fix: better, almost done except for multi-lang samples

push time in 20 hours

issue comment multiformats/multihash

Size limit of identity hash

How is this different than my proposal?

I’m recommending that we add a column to the table to capture recommended max sizes. What I think you’re recommending is adding new entries for each hash+length.

vmx

comment created time in a day

issue comment multiformats/multihash

Size limit of identity hash

Every table entry has a cost: it represents another barrier to compatibility between implementations, since each implementation must be configured with a hashing function that matches the identifier. If a hashing function has a variable length, we can maximize compatibility between implementations by leaving an affordance for the variability in length associated with only that one table entry and hashing function, making far more implementations compatible with it.

We cannot expect every multiformats implementation to support every hash function. We also cannot arbitrarily limit the number of new hash function entries to try to reduce proliferation, because people who are using obscure hash functions have grounded reasons to do so and will likely sacrifice multihash compatibility if necessary. The only thing we can do is allow for optionality where we can, which is in the length. As far as I can tell that’s the only place we can, so we should.

vmx

comment created time in a day


push event multiformats/js-multiformats

Mikeal Rogers

commit sha 49fb5ad2b2593776403cb7518176cc1702034377

fix: much better

push time in a day

PR closed multiformats/js-multiformats

feat: asCID property self reference

This came up in a conversation I had with @gozala

This feature is nice for a few reasons.

  1. It’s a relatively unique property you can check for that doesn’t rely on symbols.
  2. It makes the toJSON() representation serializable through a Worker and, since it includes a circular reference, the result can still be understood as a CID instead of just a regular object.
  3. It breaks naive CID serialization, causing an exception when you try to encode a CID as a plain object.

However, if we take this change it will require some updates to our codecs. In both dag-cbor and dag-json we do an isCircular check before encoding an object, and that runs through a relatively naive third-party library that will now throw. But getting rid of is-circular is already in my mental backlog, so it’s probably fine.
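For illustration, a minimal sketch of the pattern under discussion (not the full CID class; only the relevant parts are shown):

// Sketch of the asCID self-reference (the rest of the class is elided).
class CID {
  constructor (version, code, multihash) {
    this.version = version
    this.code = code
    this.multihash = multihash
    // Self reference: a cheap identity check that doesn't rely on symbols
    // and survives structured clone (e.g. posting to a Worker).
    this.asCID = this
  }
}

// True for a CID instance or a structured-clone copy of one.
const isCID = (value) => Boolean(value && value.asCID === value)

// The circular reference makes naive JSON encoding throw, per point 3:
// JSON.stringify(new CID(1, 113, new Uint8Array())) -> TypeError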

+0 -0

9 comments

0 changed files

mikeal

pr closed time in a day

push event multiformats/js-multiformats

Mikeal Rogers

commit sha 47ce14f309ba2fb792992699de956a3e7ef4c655

feat: cjs require (#20) * feat: cjs require * fix: test improvements * doc: add require docs

Mikeal Rogers

commit sha a44c49dfc1f59b575ed34a656f1a5332830d3ebf

fix: export fixes for cjs/import

Mikeal Rogers

commit sha 1d28bfef9a0b22ebfb28b583ec3b2935f9bfc342

doc: remove special note for require

Mikeal Rogers

commit sha 9da3655d7c7ef99c9d72ec81cd92e6ba8c38b885

fix: add npmignore to get dist ignore out of pack

Mikeal Rogers

commit sha de4d3014e655e10b05cc75723f1d518d7f9c2220

fix: empty ignore

Mikeal Rogers

commit sha fe9e051fdb71fc5d4bf8d4946a7e818a4cb20bea

fix: no need to publish .github directory

Mikeal Rogers

commit sha bcdcd1f97795fc75fe18db8c8b2380c2a227810c

test: compile all test to cjs and run (#21)

Mikeal Rogers

commit sha 3de98bc370d4f2b88c20965d96b413771f2e9fda

debug: migrating to new action

Mikeal Rogers

commit sha af325b87229a6b625f2fa1d68a6a9da9aaa14530

build: removing old action

Mikeal Rogers

commit sha cb715976b6edc988a2e88961cede785064ab390f

fix: browser export mapping this is to make sure nothing breaks when compilers support export maps

Mikeal Rogers

commit sha 102497058ec03f250c8caf390319e5d4875c5174

feat: improved cjs compile

Mikeal Rogers

commit sha 4004f7bbcd23a729a33f72ee9cfadbee53d1ff74

fix: upgrade file from globals repo

Mikeal Rogers

commit sha f21129348ff25a166657fa825206ac89f03bb6e7

fix: upgrade file from globals repo

Mikeal Rogers

commit sha bc4c2189e2acab813a7776abece87f6086320358

fix: test should work in CJS

push time in a day

push event multiformats/js-multiformats

Mikeal Rogers

commit sha d2df017d2813a171240765aec1197bc7e0771ca4

fix: don't break toJSON

push time in a day

issue comment multiformats/multihash

Size limit of identity hash

However, I think it's a big problem to have something that is effectively a spec (i.e. the IPFS blake3 multihash is at most 1KiB) but not have it expressed explicitly via the codec.

A surprising number of hash functions are of variable size so it’s not really practical to do this.

Something @ribasushi brought up a few months back is that we need to put a size limit in our code for pretty much every hash function. Since these recommendations are variable, we may want to capture this in another table, or expand the multicodec table, because the expected/recommended max size for most hashing functions doesn’t vary by programming language.

We haven’t implemented this yet, but we intend to.
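As a sketch of what capturing those recommendations in code might look like (the names and numbers here are hypothetical, not actual table entries):

// Hypothetical table of recommended max digest sizes, in bytes.
const recommendedMax = { 'sha2-256': 32, 'blake2b-512': 64, identity: 128 }

const checkDigest = (name, digest) => {
  const max = recommendedMax[name]
  if (max !== undefined && digest.length > max) {
    throw new RangeError(`${name} digest is ${digest.length} bytes, above the recommended max of ${max}`)
  }
}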

vmx

comment created time in a day

issue comment ipld/team-mgmt

JavaScript patterns and practices

another one might be to integrate modern BTC, Zcash and ETH work that I'm doing which is entirely based around the new multiformats stack

This is what I wrote the legacy interface for. We can take these new-style codecs and get the old-style interface, with everything converted back to Buffer and the old CID, so js-ipfs can use them without taking any of our breaking changes.

mikeal

comment created time in a day

issue comment ipld/team-mgmt

JavaScript patterns and practices

@Gozala I have some code running in Deno now, and as a result I’m a bit more skeptical about what the timeline might be for our stack to work on it OOTB.

Sure, we’ve mostly shed our dependence on Node.js stdlib and are on ESM, but it’s not very practical right now to run on Deno unless you have no dependencies whatsoever. That includes having dependencies between our own libraries. The solution to this problem just isn’t entirely worked out yet in Deno. I’m confident this will get better over time, but I wouldn’t hold my breath for our stack to work in the near term.

I could probably get js-multiformats w/o the legacy API running given a little time, because it has almost no dependencies, but the libraries up the stack that depend on that module don’t have a great way to depend on it and import it reliably in a form that can be published cross-platform.

mikeal

comment created time in a day

push event ipld/docs

Mikeal Rogers

commit sha 8c52d854fea8296b3e180895267d4232d8b572a6

fix: use relative url while in debug

push time in a day

push event ipld/docs

Mikeal Rogers

commit sha c9c0ca4b6fc69f72b7df4dc4b418925c5bbd7597

debug: difference in latest action version

push time in a day

push event ipld/docs

Mikeal Rogers

commit sha c5b911c0d31acd89f903e1b5f052af67aacdca5d

fix: try deploy with just ghtoken

push time in a day

create branch ipld/docs

branch: gh-pages

created branch time in a day

delete branch ipld/docs

branch: gh-pages

delete time in a day

create branch ipld/docs

branch: gh-pages

created branch time in a day

push event ipld/docs

Mikeal Rogers

commit sha 99932c54d588b94c701079c86680c0e8fe6cdcb4

fix: ignore dist

Mikeal Rogers

commit sha 236ae888771a08617bf92ffc8784ec0912c0d5b0

build: automated deploys

push time in a day

push event ProtoSchool/protoschool.github.io

Deployment Bot (from Travis CI)

commit sha ce9420620478c30aba2f0b4b2da6c421b08c737a

Deploy proto.school to github.com/ProtoSchool/protoschool.github.io.git:master

push time in a day

push event ProtoSchool/protoschool.github.io

Deployment Bot (from Travis CI)

commit sha 7f6f32f90db0e8e3131bd74bfa7c5efcf78865a4

Deploy proto.school to github.com/ProtoSchool/protoschool.github.io.git:master

push time in a day

push event ProtoSchool/protoschool.github.io

Deployment Bot (from Travis CI)

commit sha 6a2fc6c5da9ae99b045560ef231a21f7693236f3

Deploy proto.school to github.com/ProtoSchool/protoschool.github.io.git:master

push time in a day

push event mikeal/estest

Mikeal Rogers

commit sha eeaa1a638104edf495df6f68ac11d0d8a861bb11

feat: after()

push time in 2 days

issue opened denoland/deno

feat: stdout.cursorTo()

I’ve been working on a test system for native ESM tests and just got the runner working on Deno, but I did have to use a version of the runner’s display code that is less than awesome because there’s currently no way in Deno to move the stdout cursor.

I don’t have a strong opinion about where the API lives or how it should work, only that the functionality is there. In Node.js it’s at process.stdout.cursorTo(), but I don’t know if you want to copy that pattern. Adding properties to the stream seems a little off to me, and if you aren’t going to support the entire API surface that Node.js has here for tty it’s probably best to diverge completely.
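For reference, this is the Node.js behavior being described (Node’s existing TTY API, not a proposal for what Deno’s should look like):

// Node.js: redraw a single status line in place during a test run.
const update = (text) => {
  process.stdout.cursorTo(0)   // move the cursor back to column 0
  process.stdout.clearLine(1)  // clear from the cursor to the end of the line
  process.stdout.write(text)
}
update('passing 12/40')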

created time in 2 days

push event mikeal/estest

Mikeal Rogers

commit sha 30a3dad79f74a24b029ffcb06d6fda8d88687537

doc: simpler example

push time in 2 days

push event mikeal/estest

Mikeal Rogers

commit sha 5c55157ec5feccf66563912bc8810fdbd5615a5c

build: fix workflow

push time in 2 days

push event mikeal/estest

Mikeal Rogers

commit sha 37eb2c42da7a6a0f367ba13a3561f25d3b04f4c5

doc: styling

push time in 2 days

push event mikeal/estest

Mikeal Rogers

commit sha a5be35026a764bd47dc27d99857c6e363efd6da6

doc: minimal docs

push time in 2 days

push event mikeal/estest

Mikeal Rogers

commit sha aaff427d1c0d70a1ee4d24318993fd29e2486bc3

build: setting up automated releases

push time in 2 days

push event mikeal/estest

Mikeal Rogers

commit sha c08663e12727aefbf48b9d8dfe2c3db0f886b578

fix: deno fixes

push time in 2 days

push event mikeal/estest

Mikeal Rogers

commit sha caed2cb580a998a5980fdfa27715f009e3ab7df0

fix: moving things around for deno

Mikeal Rogers

commit sha 2f1314b5d402fe5cadb5484febc45f276bd5050e

feat: deno support!

push time in 2 days

push event mikeal/estest

Mikeal Rogers

commit sha e3d80268368943640f90ccd228fef1a7fc169fff

fix: working concurrent runner

push time in 2 days

push event mikeal/mikeal

Mikeal Rogers

commit sha 21d369463492a23e255ae720412ad48b2861b790

fix: nope, equals

push time in 2 days

push event mikeal/mikeal

Mikeal Rogers

commit sha 4226d1326f87df2fc572062d2968ebb2e7e049f6

fix: “half a brain” is not half a brain or larger

push time in 2 days

issue comment multiformats/multihash

Size limit of identity hash

Your application would work on one implementation, but not by default on some other.

This is true of pretty much every protocol. With TCP, UDP, HTTP, etc., there’s no size limit in the protocol specification for the total data transferred, but every service provider and implementation sets one. These limits don’t seem to negate the benefits of agreeing on common protocols, and clients learn to live within the reasonable limits the ecosystem of service providers has set.

As an example: I have a script that does GraphQL queries to GitHub’s service, and if the request takes too long the gateway kills the connection even though the query was well within the rate limits GraphQL and even their HTTP service set. Service limitations are application and provider specific, and they apply across all the protocols you touch; we can’t enforce them for everyone or even hypothesize about what all the use cases are. For a lot of people, setting a block size limit of 1MB solves any concerns they might have about large CIDs as a side effect. For others, maybe not.

I’m very interested in recommending a reasonable size limit and would expect many consumers to adopt it (similar to how we handled block size limits), but setting a hard limit in the standard is too much for me.

vmx

comment created time in 2 days

pull request comment filecoin-project/go-fil-commcid

Implement final FilMultihash/FilCodec

@whyrusleeping does there need to be some kind of plan to roll this out to lotus? It’s sort of a breaking change, but you’d know better than me what scale of breakage it could potentially cause.

ribasushi

comment created time in 2 days

push event mikeal/estest

Mikeal Rogers

commit sha 0ce18e30809f0d660736a99e9f01065b8a937534

fix: nesting respects concurrency

push time in 3 days

push event mikeal/estest

Mikeal Rogers

commit sha 2075089622a6fa21f41dc0418b43590609908894

fix: working display and runner test

push time in 3 days

issue comment denoland/deno

"Too many open files" error on macOS when benchmarking deno

This looks like the typical error you’d encounter when you go past the file descriptor limit. These settings vary by OS and configuration; try increasing the limits set for the system and the user you’re running as.

trivikr

comment created time in 3 days

push event mikeal/estest

Mikeal Rogers

commit sha d13b76b6934c1087262f3daa1540ec98b169a352

fix: much better

push time in 5 days

push event mikeal/estest

Mikeal Rogers

commit sha 01006cd2d78bdd3fe0718247b25e77d48efd4ec9

wip: test passes

push time in 5 days

push event mikeal/estest

Mikeal Rogers

commit sha abd881f8413706ba999f03a82045de579f13cc55

wip: coming along now

push time in 5 days

push event ipld/docs

Mikeal Rogers

commit sha e5531eb95e72575bb0909242c06a66b5c0739ed7

fix: getting into linking now

push time in 5 days

create branch ipld/docs

branch: master

created branch time in 5 days

created repository ipld/docs

[WIP] All you need to know about IPLD

created time in 5 days

issue comment multiformats/multihash

Size limit of identity hash

I think this is where the disconnect is. Identity CIDs operate on a layer below where a "kinded union" would exist, they are strictly in "codec-land".

Once you recognize that links in a node graph are transparently traversed (the link is resolved to a node that replaces the link representation, becoming the node representation for that property in the parent node), they are functionally equivalent. Both put the node data in the same place, stored in the same block.

Conceptually, this never exists in the decoded node graph:

ParentNode -> Link -> ChildNode

Instead, this is what happens:

// before resolution
ParentNode -> Link
// after resolution
ParentNode -> ChildNode

You can observe this in our pathing, where named properties that are links get resolved to their decoded node value. There’s actually no way to return a link from a fully resolved path.

let value = 1234
let link = Link( value )
{ property: link }

If you resolve the path /property of this block you’ll get 1234; there’s no pathable reference to the link itself.

vmx

comment created time in 5 days

issue comment ipfs/notes

Implications of the Filecoin launch for IPFS

Something to consider.

Could IPFS just have a default, but configurable, max limit on the number of CIDs it tries to broadcast? A reasonable default here would greatly reduce the risk profile. I can’t think of a case in which a regular user would need to broadcast more than 10K CIDs, and the user experience when someone does this would be so poor that it’s hard to imagine anyone wanting to.

Breaching the max limit would cause an import error that points users towards the setting for publishing only the CIDs of their pins, which is probably what they want to do with a graph this size.

This doesn’t solve every concern, but when someone does the wrong thing it would greatly reduce the potential harm, and it would take many, many more users doing the wrong thing all at once to produce the load that, right now, only a few would need to generate.
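A sketch of what that guard might look like (the option name and default are hypothetical):

// Hypothetical configurable cap on the number of CIDs broadcast on import.
const config = { maxBroadcastCids: 10000 }

const checkBroadcast = (cids) => {
  if (cids.length > config.maxBroadcastCids) {
    throw new Error(`import would broadcast ${cids.length} CIDs, above the limit of ${config.maxBroadcastCids}; consider publishing only the CIDs of your pins`)
  }
}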

mikeal

comment created time in 5 days

issue comment multiformats/multihash

Size limit of identity hash

link loader

That’s specific to Go, where the link loader is an abstraction between the storage layer and the decoded node layer. We don’t have that in every language; often the node layer talks directly to the storage layer, which means either every storage API or every call site that asks for data by CID needs to handle this.

Also, @warpfork will need to weigh in, but I recall him mentioning that there are plenty of things in go-ipld-prime that won’t work well when inlining data this way.

It is much much more elegant

The solution we spent considerable time working through to this problem is unions (mostly kinded unions using Link as a kind). It’s a core feature of IPLD Schemas and translates nicely into every programming language and all the abstractions we’ve built.

It’s problematic to have multiple approaches to inlining data, and a kinded union provides a much cleaner approach that keeps the type differences clear to everyone. It sounds like you actually want to blur the line a bit on the type differences, so I can see how that approach would be more attractive, but as we build out generic libraries it’s rather difficult to have a single type mean very different things.

That said, we’re not going to break or disallow anything that is valid CID/multihash, we just may not have a very nice interface for you to use when you inline data this way, which you probably don’t care about since you have your own libraries ;) And as I’ve already stated, I’m rather opposed to setting a hard limit on multihash size in the specs or core implementations. Some libraries and consumers may set limits you’ll have to contend with and I suspect languages or libraries that want to optimize memory allocations will set a configurable limit, none of which are an issue if you were to take the kinded union approach instead.

vmx

comment created time in 5 days

issue comment multiformats/multihash

Size limit of identity hash

Now the question is: could we come up with a solution for the Peergos use case, which Peergos could upgrade to, while limiting the identity hash to a small size as some libraries already do (and as I'd also be in favour of)?

I struggle to see how these inline use cases would exist in dag-cbor and newer codecs. From what I can tell, this looks like a workaround for some limits in dag-pb or perhaps in unixfsv1 (I haven’t gone deep enough to know for sure).

You can “inline” node data into the block using any codec that supports the full IPLD data model, without hacking it into the CID. I can’t see the utility here other than “we forgot to make this part of the data structure a union,” and the right thing to do there is to fix the data structure to support that, because it’s a lot more complex to deal with data that has been inlined into the multihash. I understand that in the case of dag-pb we may not be able to change the data structures, but that doesn’t mean we should port this practice over to users that have access to the complete IPLD Data Model.

Inlining data is a common and necessary feature; that’s why we fully considered it in dag-cbor and in the IPLD Data Model, and have a compelling feature set for it. If this pattern is common enough we could even consider adding syntax to IPLD Schemas to make kinded unions on links easier, similar to the syntactic affordances we have for making links in general easier.

But, across a lot of our code, data inlined into the multihash throws a wrench in our layer model and is difficult to support across different Block, Link, and storage interfaces. Most code treats a CID as a key, with its data living somewhere it can be retrieved by that key. If you put the data in the key, the representational pairing of [ key, value ] is lost, and there’s no clean way to maintain the interfaces without pushing this to users (which is what happens currently: if you put data in the multihash you’re going to be pulling it out and working with it very manually).
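To make the contrast concrete, here’s a rough sketch of reading a field under a kinded union over the link and map kinds (isLink and load stand in for whatever the codec and storage layers actually provide):

// The same logical field, inlined or linked, under a kinded union.
const resolvePayload = async (node, load, isLink) => {
  if (isLink(node.payload)) {
    // Link kind: fetch the child block and use its decoded node.
    return load(node.payload)
  }
  // Any other kind: the data was inlined directly in the parent block.
  return node.payload
}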

vmx

comment created time in 5 days

issue comment multiformats/multihash

Size limit of identity hash

The nice thing about having a limit in multihash is, that you can highly optimize it. You would always know the upper bound of the supported hashes, hence e.g. do things with stack allocations only. This is not possible if you want to support the Identity Hash with maximum compatibility, which would mean 8EiB.

Then don’t “support the Identity Hash with maximum compatibility” ;)

Users and implementations are free to make domain-specific decisions about these limits; the right decision for one user will not be the same for another. It’s not the job of the underlying primitive to make these decisions on your behalf, because we don’t know what each user’s requirements are.

Look at the block limit: to my knowledge only one transport has a real block limit, and yet pretty much every user imposes block limits at half the current transport limit because we called it out as a good practice. It’s not a hard requirement in the spec and it’s not enforced by our codec libraries, but it’s a functioning limit everywhere that it matters.

I’m not saying we shouldn’t define good practices, or even document what we think is a reasonable target limit for multihash, but we shouldn’t impose that limit in these libraries at that layer or call it out in the specification as a hard requirement.

If @ianopolous wants to have big multihashes in his CIDs, he shouldn’t have an issue at the multihash layer, even if he will have issues at the IPFS layer. In the same way, I can create 5MB blocks for a one-off use case knowing that if they ever need to be used in Bitswap they’re going to break.

vmx

comment created time in 5 days

pull request comment multiformats/multicodec

Feat/draft v standard

Why don’t we just use a markdown table and write a simple script that builds a .csv and .json file on every check-in using GitHub Actions?
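A sketch of such a script (the file names and column handling are hypothetical):

// Parse the markdown table, then write .csv and .json next to it.
import { readFile, writeFile } from 'fs/promises'

const md = await readFile('table.md', 'utf8')
const rows = md.split('\n')
  .filter(line => line.trim().startsWith('|'))
  .slice(2) // drop the header row and the |---| separator row
  .map(line => line.split('|').slice(1, -1).map(cell => cell.trim()))

await writeFile('table.csv', rows.map(row => row.join(',')).join('\n'))
await writeFile('table.json', JSON.stringify(rows, null, 2))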

Stebalien

comment created time in 5 days

issue comment ipld/team-mgmt

JavaScript patterns and practices

Oh ya, one thing I should mention, we already wrote a compatibility layer so that new codec work doesn’t fork away from what js-ipfs can consume.

https://github.com/multiformats/js-multiformats/blob/master/legacy.js

We can take any new-style codec and get an interface in the old ipld-format style; it swaps Uint8Array for Buffer and switches the CID implementations. I wrote this because I figured it would be a while before js-ipfs would be able to migrate, and I didn’t want us to have to manage two implementations of every codec during that time.

All that’s left to do is to build and publish CJS-only packages of the old style interfaces as part of our automated releases. In the meantime we have a separate repo for both dag-cbor implementations until we write the build tooling and sufficiently test it.
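The core of the adapter, sketched (this shows the shape of the idea, not the exact legacy.js API; newCodec stands in for a new-style codec):

// Wrap a new-style codec (Uint8Array in/out) in the old ipld-format shape
// (Buffer in/out). The real layer also swaps the CID implementations.
const toLegacy = (newCodec) => ({
  util: {
    serialize: (node) => Buffer.from(newCodec.encode(node)),
    deserialize: (buffer) => newCodec.decode(new Uint8Array(buffer))
  }
})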

mikeal

comment created time in 5 days

issue comment ipld/team-mgmt

JavaScript patterns and practices

I’m working through some of the implications, but it looks like this isn’t a “wait and it’ll get better” situation. Webpack has been defending this bug for years and is still shipping it, so this is a break in the ecosystem that everyone migrating to ESM will have to contend with.

That said, there’s ways to work around it and to mitigate it and we’re still building tooling that will improve that.

It’s premature to talk about js-ipfs adopting our new stack; we’re not even done updating our own code to it, and I would expect smaller/newer users like js-ipfs-lite to adopt it before js-ipfs does. We’ll learn a lot in that process and should end up with acceptable solutions to these compatibility concerns.

Honestly, this is probably not where you’ll spend the most time in the migration. Switching to Uint8Array and the new CID interface is much harder, because those are value types that get passed around all over the place; swapping import and export statements is relatively easy by comparison.

I’m going to be in Monday’s IPFS Core Implementations call to discuss in more detail.

mikeal

comment created time in 5 days

pull request comment multiformats/js-multiformats

feat: asCID property self reference

I'm strongly against breaking toJSON().

Ya, we won’t be; if this lands it’ll land without the toJSON() break.

mikeal

comment created time in 5 days

issue comment multiformats/js-cid

Proposal: Let's introduce `asCID` static method

@Gozala these answers are a bit complicated; I’m going to join the Core Implementations meeting on Monday and can discuss them in more detail there.

Gozala

comment created time in 5 days

issue comment multiformats/multihash

Size limit of identity hash

multihash is a general-purpose standard with general-purpose libraries. Given how young the project is, we should assume the use cases we currently understand are not a complete set, and we need to stay open and accessible to future uses we haven’t even thought of yet.

With that in mind, I don’t think that we should:

  • Create and enforce limits based on our own opinions about what is “good practice.”
  • Adopt and enforce limits on behalf of specific use cases.

If there’s a universal reason to have a limit on the size of a multihash that we’re confident is always going to be true, then we should adopt it, but that’s not at all what I’m seeing.

If, as we already know, IPFS wants to use CIDs for subdomains and therefore needs to enforce a size limit on CIDs, which in effect limits the size of a multihash, that’s fine, but that’s IPFS’s decision to make and their limit to enforce. It doesn’t belong in the core of multihash because it’s not universally representative of all the use cases someone might build on multihash.

vmx

comment created time in 5 days

push event ProtoSchool/protoschool.github.io

Deployment Bot (from Travis CI)

commit sha 1a1ad0d81cfc9be129755fc33619dfb4d8e4bf73

Deploy proto.school to github.com/ProtoSchool/protoschool.github.io.git:master

push time in 5 days

push event mikeal/mikeal

Mikeal Rogers

commit sha a326d74760aa43a43274025c5a2d4ea6dd2e7c10

fix: well that was embarrassing wasn’t it

push time in 6 days

push event mikeal/mikeal

Mikeal Rogers

commit sha aaa81cd6ea8b23c2beede5262eb82845ef6a0746

fix: no need for this anymore since it has to return an escape

push time in 6 days

push event mikeal/mikeal

Mikeal Rogers

commit sha 9ffb8af273deed0fbbfb71c7e866c6b0ca51b128

fix: out of scope issue

push time in 6 days

push event mikeal/mikeal

Mikeal Rogers

commit sha c85962561d206157a4e7c242044343ee142a2f83

feat: much better

push time in 6 days

push event mikeal/mikeal

Mikeal Rogers

commit sha 67166c6a42e30beed4e9a8ea631af29f8af12e8b

feat: so much better

push time in 6 days

push event mikeal/mikeal

Mikeal Rogers

commit sha fd045f6acec82d61990c7de69f9bdfa15ee417f7

fix: wrong int

push time in 6 days

push event mikeal/mikeal

Mikeal Rogers

commit sha 3bd9067a29d2a3f832ce0ee021af559e6a9deb13

fix

push time in 6 days

create branch mikeal/mikeal

branch: master

created branch time in 6 days

created repository mikeal/mikeal

Programmer

created time in 6 days

issue comment multiformats/js-cid

Proposal: Let's introduce `asCID` static method

Now that you’re considering a breaking change (deprecating isCID) I should note that there’s a future upgrade to multiformats which is also a breaking change to CID, among other things. If you do go down that path you should probably combine it with a migration to the new interface so that you don’t suffer 2 big refactors.

Gozala

comment created time in 6 days

issue comment multiformats/multihash

Size limit of identity hash

Where/how would we enforce the limit? We’ve had very similar threads about block size limits and ended up punting any hard limits to the network and storage layer, which in this case isn’t really an option since they rarely decode the block data.

Also, just as a matter of fact, many of our existing libraries don’t handle identity multihashes in CIDs correctly and will hand the Link/CID type off to the storage layer to fetch as if it were any other CID. Similarly, our Block interfaces do not all support data encoded to/from the identity multihash. The strategy for representing and handling “inline blocks” is still in the experimental phase, when it’s handled at all.

This is a good time for this discussion given that inline block support is still under development, but I want to make sure that we’re setting the right expectations about where this might land.

vmx

comment created time in 6 days

issue comment multiformats/multibase

What about Base62?

@vmx I think this (base encodings) is a little different from codecs. There isn’t a gigantic list of prior formats to consider, nor do we expect the list of base encodings to grow much over time, and the multibase table is separate (do we have a proper table yet?) from the multicodec table.

Pitometsu

comment created time in 6 days

pull request comment multiformats/js-multiformats

feat: asCID property self reference

Should this be merged or do we need resolution on toJSON? It's been approved and sitting for a while.

I’ll be returning to it eventually; other priorities have been getting in the way. I started this PR to fully consider the interface, and I don’t feel like I’ve really had the time to do that yet.

Also, is the plan to replace the cids module everywhere? If so, is there a rollout plan for that?

Yes, some day, but not today :)

For js-ipfs this shouldn’t get adopted any time soon, and it should be part of a much larger change migrating to all the new Block and multiformats interfaces. But it’s premature to consider that, given that we haven’t even updated our own stack to these changes.

I’ll join the IPFS implementations call on Monday to discuss these changes in more detail.

mikeal

comment created time in 6 days

push event mikeal/merge-release

Josejulio Martínez

commit sha 69d0bc4750a97ce4e1c81d4a7c86088f79bd051d

Update workflow name to match the currently used (#30) - The `push.yml` file no longer exists in the project, updating it to use `mikeals-workflow.yml`

push time in 7 days

PR merged mikeal/merge-release

Update workflow name to match the currently used
  • The push.yml file no longer exists in the project, updating it to use mikeals-workflow.yml
+1 -1

1 comment

1 changed file

josejulio

pr closed time in 7 days

pull request comment mikeal/merge-release

Update workflow name to match the currently used

Thanks!

I think we may want to have a simpler sample workflow somewhere because my defaults have gotten pretty complicated, and I’m about to push an even more complicated version soon.

josejulio

comment created time in 7 days

create branch mikeal/estest

branch: master

created branch time in 8 days

created repository mikeal/estest

ESM native testing across all JS platforms.

created time in 8 days

push event mikeal/import-cartographer

Mikeal Rogers

commit sha ae0b43674d467a635db5ee355935a08a99c8e854

fix: build script

push time in 8 days

push event mikeal/import-cartographer

Mikeal Rogers

commit sha 2fec030ad92e818fdada09b814748d56dff0aa9f

debug: yaml

push time in 8 days

push event mikeal/import-cartographer

Mikeal Rogers

commit sha 8ef6471e8616eab85fa7c44ca6aca8b8cc311617

debug: subshell

push time in 8 days

push event mikeal/import-cartographer

Mikeal Rogers

commit sha 556e06e5693c88421beb10d5a6e6d71092a5c453

fix: test should run on machines that aren't mine

push time in 8 days

push event mikeal/import-cartographer

Mikeal Rogers

commit sha dd7d182058987e8109401e0ae671b4dcc35bd430

build: automated releases

push time in 8 days

push event mikeal/import-cartographer

Mikeal Rogers

commit sha 24fce17299cfb15355e194ad9b74b12394a700af

fix: full test coverage

push time in 8 days

issue comment ipfs/pinning-services-api-spec

Defining API access controls

There are a lot of good ideas here, but what I think needs to be considered is: how many problems are you willing to take on?

Signing is not a solved problem. There is no standardized and widely adopted signing spec for this sort of thing; the closest would be OAuth1, and I don’t think anyone wants to do that.

Authorization: Bearer <key> has the benefit of limiting the problems you’re taking on. If you accept it via header and querystring you’d have a fast path to adoption in pretty much any client. This is something trivial to add, and since you’re not defining how tokens are acquired people can write whatever flow they like around it.

Since it’s an opaque token, you can use JWT if you want, or not, we wouldn’t be locking anyone in to any particular decision which is probably the right approach.
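A sketch of how little this requires on each side (the endpoint shape and querystring param name are hypothetical):

// Server: accept the token via header, with a querystring fallback.
// The token stays opaque; it can be a JWT or anything else.
const getToken = (req, url) =>
  (req.headers.authorization || '').replace(/^Bearer\s+/i, '') ||
  url.searchParams.get('access_token')

// Client: any HTTP library can send it.
// fetch(url, { headers: { Authorization: `Bearer ${token}` } })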

lidel

comment created time in 8 days

pull request comment ipfs/js-ipfs

feat: store pins in datastore instead of a DAG

One more note in favor of base32: we tend to default to base32 in all of the new IPLD stack, and we also tend to cache the representation, so sticking with base32 whenever possible will result in more cache hits.

achingbrain

comment created time in 8 days

issue comment mikeal/bent

How to pipe stream

bent returns a stream by default, but unlike request it doesn’t instrument pipe() to set the method and headers on a subsequent HTTP request, so you’ll need to do that yourself.
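For example, something like this (host names are hypothetical; when no response encoding is specified, bent resolves to the response stream):

const bent = require('bent')
const http = require('http')

const get = bent('http://upstream.example')

const main = async () => {
  const res = await get('/file') // readable response stream
  // Unlike request, nothing is carried over: set method and headers yourself.
  const req = http.request({
    host: 'downstream.example',
    method: 'POST',
    path: '/upload',
    headers: { 'content-type': res.headers['content-type'] }
  })
  res.pipe(req)
}

main()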

robertsLando

comment created time in 8 days

create branch mikeal/import-cartographer

branch: master

created branch time in 9 days

created repository mikeal/import-cartographer

Map your imports

created time in 9 days

started devsnek/esvu

started time in 9 days

push event mikeal/brrp

Mikeal Rogers

commit sha b937fcff56715202314ff577c93840786746c37b

build: run linter on test

push time in 9 days

push event mikeal/brrp

Mikeal Rogers

commit sha 8917bf66fc74a86eda998bdf6770ae827c70c041

fix: make the linter happy

push time in 9 days

issue opened bcoe/c8

bug: file missing from coverage

  • Version: 7.2.0
  • Platform: Linux, Node.js v14.5.0

I’m seeing this error:

file: /root/brrp/src/rollup-iter.js error: TypeError: Cannot read property 'sources' of null
    at V8ToIstanbul.load (/root/brrp/node_modules/v8-to-istanbul/lib/v8-to-istanbul.js:46:34)
    at Report.getCoverageMapFromAllCoverageFiles (/root/brrp/node_modules/c8/lib/report.js:88:25)
    at async Report.run (/root/brrp/node_modules/c8/lib/report.js:59:20)
    at async exports.outputReport (/root/brrp/node_modules/c8/lib/commands/report.js:27:3)
    at async /root/brrp/node_modules/c8/bin/c8.js:39:9

As a result the file src/rollup-iter.js is missing from coverage and c8 passes the full coverage check :(

You can reproduce by running the tests in https://github.com/mikeal/brrp

Sorry for this being a little terse; I plan to return to this and debug a little more. It looks like the problem is in v8-to-istanbul, so I can at least track it down further, maybe even send a fix, but I wanted to get this logged before I forgot about it and moved on :)

created time in 9 days

push event mikeal/brrp

Mikeal Rogers

commit sha 2d7ee02c7f2bd1515b1eb2ef72d0958fbd63bdba

fix: remove bad warnings

push time in 9 days


push event mikeal/brrp

Mikeal Rogers

commit sha 52b7cee5e43d4fba7e41c40670e222b4f3cd75e8

feat: minification

push time in 9 days

push event mikeal/toulon

Mikeal Rogers

commit sha e928296c79b3669a7043fd643dacd7171c92807d

fix: accept the puppeteer module instead of depending on it

push time in 9 days

started naugtur/handsfreeyoutube

started time in 11 days

push event mikeal/brrp

Mikeal Rogers

commit sha a9d52904147579ad61e1f8c1997aa057cb386804

doc: s/ES/ESM

push time in 11 days

push event mikeal/brrp

Mikeal Rogers

commit sha 8d0ebf2a58d73e1cd9e343ada51b5c19d1934c4e

fix: consistent names, docs for polyfills

push time in 11 days

push event mikeal/brrp

Mikeal Rogers

commit sha 182394b2c6f82b64ba3428077dfebd3454fc37a8

fix: release from v14

push time in 11 days

push event mikeal/brrp

Mikeal Rogers

commit sha 880a3bf8899298486ae3254950af815f6be14e9c

fix: disable coverage on v12

push time in 11 days

push event mikeal/brrp

Mikeal Rogers

commit sha 317e7f0c3d687a0d7bc3a5c76d038d8912d555a9

fix: full coverage and automated releases

push time in 11 days
