Irakli Gozalishvili Gozala Portland, OR, United States https://gozala.io/ Curious tinkerer at Mozilla who fancies the functional paradigm

Gozala/ambiance 76

Ambiance code editor

Gozala/alias-quokka-plugin 18

Quokka plugin to provide module import aliases

Gozala/actor 16

Experimental library implementing scala like actors in javascript.

Gozala/antimutable-array 12

Immutable alternatives to built-in array operators

Gozala/about-downloads-addon 5

Download manager in the tab.

Gozala/ace 4

Ajax.org Code Editor

Gozala/ace-teleported 4

Demo ace package

Gozala/addon-sdk 4

The Add-on SDK repository.

gordonbrander/rocket-bar 3

Experimental live deep search

Gozala/.bash 3

my bash config

issue comment Evidlo/remarkable_printer

Failing to print from MacOS

Here's the output with the -debug flag on (I stopped the service and ran it manually as /home/root/printer.arm -debug)

print.log

Gozala

comment created time in 2 hours

issue opened Evidlo/remarkable_printer

Failing to print from MacOS

Here's the output of journalctl -f --unit printer, which suggests something is wrong with the PDF, although I'm not sure why

Jul 10 07:21:26 reMarkable printer.arm[770]: Listening on 0.0.0.0:9100
Jul 10 07:21:46 reMarkable printer.arm[770]: Saving PDF to /home/root/.local/share/remarkable/xochitl/e0d00f7b-0bca-4f65-87d2-5f0189269b3e.pdf
Jul 10 07:21:46 reMarkable printer.arm[770]: Invalid PDF
Jul 10 07:21:46 reMarkable systemd[1]: printer.service: Main process exited, code=exited, status=1/FAILURE
Jul 10 07:21:46 reMarkable systemd[1]: printer.service: Failed with result 'exit-code'.
Jul 10 07:21:47 reMarkable systemd[1]: printer.service: Service hold-off time over, scheduling restart.
Jul 10 07:21:47 reMarkable systemd[1]: printer.service: Scheduled restart job, restart counter is at 3.
Jul 10 07:21:47 reMarkable systemd[1]: Stopped Native printing to reMarkable.
Jul 10 07:21:47 reMarkable systemd[1]: Started Native printing to reMarkable.
Jul 10 07:21:47 reMarkable printer.arm[775]: Listening on 0.0.0.0:9100

created time in 2 hours

issue opened ipfs/notes

IPLD with Network & Persistence

Currently we have:

  1. libp2p - a networking layer
  2. IPLD - a data layer
  3. IPFS - a file system and a network (with all of the above and more)

For years I have been building things with IPFS, even though what I really used was 1. and 2. plus the network & persistence of IPFS. Recently I started applying some of my learnings to IPFS in browsers (https://github.com/ipfs/js-ipfs/issues/3022) and talking to the various teams in our community to inform this work. Through these conversations I have noticed that:

  • All prominent users of IPFS (in a browser context) that I had a chance to talk to use:
    • Primarily the ipfs.dag API
    • Bits of libp2p (like pubsub) for replication.
  • js-ipfs-lite from Textile is basically IPLD with Network & Persistence

When I think about it, it makes sense that people use js-ipfs (when they really want a DAG with network & persistence), because it is the only thing that puts all the pieces together. Which is to suggest that if we had a thing that was just IPLD with the persistence and network (of IPFS), that is what all these teams would use. In fact that is more or less what the shared IPFS node https://github.com/ipfs/js-ipfs/issues/3022 ended up being.

I think it is worth considering breaking out another layer from IPFS (just like libp2p and IPLD came to be) that is IPLD + Network + Persistence (let's call it a DAGService for now). I think the evidence of the demand is pretty clear, but besides that I see a few compelling reasons to do so:

  1. It creates an opportunity to reduce the scope of IPFS and focus on the FS piece.
  2. It enables us to iterate on an API that fits this demand.
  3. It creates a light, reusable component that actually addresses user needs.
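
To make the shape of this concrete, here is a hypothetical sketch of what the surface of such a DAGService could look like (all names and options below are illustrative only, not a proposal for the actual API):

  const dag = await DAGService.create({ libp2p, blockstore }) // network + persistence
  const cid = await dag.put({ hello: 'world' }, { format: 'dag-cbor' })
  const { value } = await dag.get(cid) // resolved from the local store or fetched from the network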

created time in 4 hours

issue comment multiformats/js-cid

Proposal: Let's introduce `asCID` static method

Now that you’re considering a breaking change (deprecating isCID) I should note that there’s a future upgrade to multiformats which is also a breaking change to CID, among other things. If you do go down that path you should probably combine it with a migration to the new interface so that you don’t suffer 2 big refactors.

@mikeal I have a couple of questions:

  1. Is there a path to changing CID such that it would be future compatible with the new multiformats, but does not require switching everything over to multiformats?
  2. Is multiformats (or some pieces of it) ready to be integrated into js-ipfs?
  3. How does the ArrayBufferView approach described in the notes above fit into multiformats?
Gozala

comment created time in 9 hours

pull request comment ipld/js-ipld-dag-pb

Update dependencies

@vmx I don't have privileges to do the github approval thing, but it looks good to me

vmx

comment created time in 15 hours

issue comment multiformats/js-cid

Proposal: Let's introduce `asCID` static method

@achingbrain, @hugomrdias, and I had a discussion about this on a call the other day; I will try to summarize it here.

  • There is a strong preference towards having either CID.asCID or CID.isCID, but not both.
    • My personal opinion is that there is value in having a deprecation phase with warnings for CID.isCID for a specific time frame, after which CID.isCID could be removed.
    • I think it would be useful to decouple the removal of CID.isCID from the adoption of CID.asCID, although I also understand this may be impractical.
  • It became clear that the problem CID.asCID was solving was not well understood, so I'll attempt to break it down in a list:
    • CID.isCID(cid) can return true even if cid is an older implementation that lacks some method that CID has, which could lead to errors like if (CID.isCID(cid)) { cid.nonExistingMethod() }
      • cid = CID.asCID(v) is more robust, because the returned CID will be instanceof CID and therefore have all the methods that the caller expects.
    • The CID.asCID(v) approach will work across multiple JS realms, because even though the prototype chain is lost, the returned cid will contain it.
      • In the https://github.com/ipfs/js-ipfs/issues/3022 context it means there will be no need to serialize / deserialize arbitrary structures when moving those across a message channel.
  • It was identified that CID.asCID(v) can't actually solve the incompatibility problem, because the binary representation can also change.
    • That is true, however those changes are much rarer.
    • We will still retain the option to either make two versions incompatible in the future or not
      • If cid_v2 = CID.asCID(cid_v1) is too costly (performance-wise or maintenance-wise) we can return null.
        • We would however have to take a bit more care when updating binary representations so they do actually appear different.
      • We could also choose to make cid_v2 = CID.asCID(cid_v1) work, if that would make sense when we do it.
    • We would also need to be careful to ensure that cid_v1 = CID.asCID(cid_v2) does the right thing, which again could be to downgrade or to return null.
  • There was strong support towards making CID a glorified ArrayBuffer view (as in interface ArrayBufferView { byteOffset: number, byteLength: number, buffer: ArrayBuffer }).
    • We've discussed sub-classing Uint8Array
      • It is tempting, but
        • 💔 It will no longer fix the multi-realm issue, which would be a shame.
        • 💔 Having (mutating) methods of Uint8Array on CID doesn't seem right
        • 💚 You could use cid instead of cid.buffer
    • I think we agreed that plain ArrayBufferView was the better approach.
    • I would like to decouple CID.asCID from CID being a glorified ArrayBufferView.
      • We can have either without the other
      • CID.asCID can be a backwards compatible change, while ArrayBufferView can not (because we already have a buffer property)
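
For concreteness, a minimal sketch of what CID.asCID could look like on top of the current new CID(version, codec, multihash) constructor (the structural check below is illustrative):

  static asCID (value) {
    // Same class & same realm: use as is
    if (value instanceof CID) {
      return value
    }
    // Older implementation or a different realm: rebuild, so that the result
    // is instanceof CID and has all the methods the caller expects
    if (value != null && value.version != null && value.codec && value.multihash) {
      return new CID(value.version, value.codec, value.multihash)
    }
    return null
  }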
Gozala

comment created time in 15 hours

pull request comment ipfs/js-ipfs

feat: non-bufferring multipart body encoder

All tests except the example one (which also fails on master) are passing now. I think this is ready for review.

Not sure what happened, did someone restart the job and somehow it's fixed?

Gozala

comment created time in 16 hours

pull request comment ipld/js-ipld-dag-pb

Backwards compatible pure data model API

@vmx that sounds good to me. Thanks

Gozala

comment created time in 20 hours

pull request comment ipfs/js-ipfs

feat: non-bufferring multipart body encoder

All tests except the example one (which also fails on master) are passing now. I think this is ready for review.

Gozala

comment created time in a day

push event Gozala/js-ipfs

Irakli Gozalishvili

commit sha 908d99e45c160877d1e1fe0cdebf796890b5d31f

fix: use native blobs in elector renderer

view details

Irakli Gozalishvili

commit sha bfe012f29c27cabc067597a4d32caa0c9d704d44

fix: prefer native File over polyfill (in elector)

view details

push time in a day

pull request comment ipfs/js-ipfs

feat: non-bufferring multipart body encoder

@hugomrdias there's one issue that I'm not sure how to resolve. It appears that electron-renderer chooses to load blob.js over blob.browser.js, which is causing problems. Is there a way to make it pick up browser overrides? Otherwise the only other thing I can think of is a runtime check, as sketched below.
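
The runtime check I have in mind would be something along these lines (a sketch; './blob.node' is a stand-in for the actual fallback module path):

  // prefer the native Blob when running in a browser-like context (e.g. electron-renderer)
  const Blob = typeof self !== 'undefined' && self.Blob
    ? self.Blob
    : require('./blob.node').Blob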

Gozala

comment created time in a day

push event Gozala/js-ipfs

Irakli Gozalishvili

commit sha 39464aaa5da95acf7751bba1ffe016b253d3e1ac

fix: incorrect header used for nsecs

view details

push time in a day

push event Gozala/js-ipfs

Irakli Gozalishvili

commit sha c9fc232b27316eba1edfd1c20c50eb1a5a1bfee5

chore: write blob tests

view details

push time in a day

push event Gozala/js-ipfs

Irakli Gozalishvili

commit sha 567b738b774159140498387e4ab833f6842e9de8

fix: add \r\n after each part of form-data

view details

push time in 2 days

pull request comment ipld/js-ipld-dag-pb

Backwards compatible pure data model API

@vmx any chance you could get to this sometime this week? I can't make any more progress on ipfs/js-ipfs#3081 without this.

Gozala

comment created time in 2 days

push event Gozala/js-ipfs

Irakli Gozalishvili

commit sha c1c05d00dffff7dbec1f177806246811f94ecfc8

fix: encode filename once

view details

push time in 2 days

push event Gozala/js-ipfs

dependabot-preview[bot]

commit sha 343bd451ce7318751aab9934981e3727c6025234

chore(deps): bump multihashing-async from 0.8.2 to 1.0.0 (#3122) Bumps [multihashing-async](https://github.com/multiformats/js-multihashing-async) from 0.8.2 to 1.0.0. - [Release notes](https://github.com/multiformats/js-multihashing-async/releases) - [Changelog](https://github.com/multiformats/js-multihashing-async/blob/master/CHANGELOG.md) - [Commits](https://github.com/multiformats/js-multihashing-async/compare/v0.8.2...v1.0.0) Signed-off-by: dependabot-preview[bot] <support@dependabot.com> Co-authored-by: dependabot-preview[bot] <27856297+dependabot-preview[bot]@users.noreply.github.com>

view details

Alex Potsides

commit sha a96e3bc9e3763004beafc24b98efa85ffa665622

fix: still load dag-pb, dag-cbor and raw when specifying custom formats (#3132) If we specify a `formats` array as part of the ipld options in in-proc nodes, it replaces the default list of dag-pb, dag-cbor and raw. This change allows the `loadFormat` function to still resolve those formats even if the user has passed a `format` array that does not contain them. Fixes #3129

view details

Vasco Santos

commit sha 0b64c3e3cda6c4c15b31cb5d911e208372675d65

chore: update example with public webrtc servers (#3126) Per #2779 , this PR adds the public webrtc servers to the browser example. This allows users to run easily the example, but also provides information for how users should use a signaling server for production.

view details

Alex Potsides

commit sha 03b17f5e2d290e84aa0cb541079b79e468e7d1bd

feat: store blocks by multihash instead of CID (#3124) Updates the `ipfs-repo` dep to a version that stores blocks by multihash instead of CID to support CIDv1 and CIDv0 access to the same block. New features: - Adds a `--multihash` argument to the cli command `ipfs refs local` which prints the base32 encoded multihash of each block BREAKING CHANGES: - `ipfs.refs.local` now returns a v1 CID with the raw codec for every block and not the original CID by which it was added to the blockstore Co-authored-by: Hugo Dias <hugomrdias@gmail.com>

view details

Alex Potsides

commit sha 65f8b23f550f939e94aaf6939894a513519e6d68

feat: add interface and http client versions to version output (#3125) Adds `interface-ipfs-core` and `ipfs-http-client` versions to the output of the `ipfs version` command, also the git commit id if it's available. Closes #2878

view details

Alex Potsides

commit sha 8cb8c73037e44894d756b70f344b3282463206f9

fix: optional arguments go in the options object (#3118) We have a few older APIs that take multiple optional arguments, which makes our code more complicated as it has to guess the users' intent, sometimes by inspecting properties on the passed args to see if they happen to correspond with properties on the actual options object. The options object was recently added to all API calls and is the right place for optional arguments to go, so the change here is to move all optional arguments into the options object, except where the presence of an optional argument dramatically changes the behaviour of the call (`ipfs.bootstrap` I'm mostly looking at you), in which case the methods are split out into multiple versions that do distinct things. Only the programatic API is affected, the CLI and HTTP APIs do not change. BREAKING CHANGES: - `ipfs.bitswap.wantlist([peer], [options])` is split into: - `ipfs.bitswap.wantlist([options])` - `ipfs.bitswap.wantlistForPeer(peer, [options])` - `ipfs.bootstrap.add([addr], [options])` is split into: - `ipfs.bootstrap.add(addr, [options])` - add a bootstrap node - `ipfs.bootstrap.reset()` - restore the default list of bootstrap nodes - `ipfs.bootstrap.rm([addr], [options])` is split into: - `ipfs.bootstrap.rm(addr, [options])` - remove a bootstrap node - `ipfs.bootstrap.clear([options])` - empty the bootstrap list - `ipfs.dag.get(cid, [path], [options])` becomes `ipfs.dag.get(cid, [options])` - `path` is moved into the `options` object - `ipfs.dag.tree(cid, [path], [options])` becomes `ipfs.dag.tree(cid, [options])` - `path` is moved into the `options` object - `ipfs.dag.resolve(cid, [path], [options])` becomes `ipfs.dag.resolve(cid, [options])` - `path` is moved into the `options` object - `ipfs.files.flush([path], [options])` becomes `ipfs.files.flush(path, [options])` - `ipfs.files.ls([path], [options])` becomes `ipfs.files.ls(path, [options])` - `ipfs.object.new([template], [options])` becomes `ipfs.object.new([options])` - `template` is moved into the `options` object - `ipfs.pin.ls([paths], [options])` becomes `ipfs.pin.ls([options])` - `paths` is moved into the `options` object Co-authored-by: Hugo Dias <hugomrdias@gmail.com>

view details

Marcin Rataj

commit sha 62c1422dec35108cf3f20b7ae9460f984a7101c2

docs(core-api): fix default value column (#3140)

view details

Alex Potsides

commit sha b4d3bf80e7cd5820e2561fc957a9f0f17235df05

feat: add size-only flag to cli repo stat command (#3143) Makes `stats repo` an alias for `repo stat` to make formatting the same across commmands in line with go-ipfs. Also adds `-s` flag to both commands to print just the size info.

view details

Alex Potsides

commit sha 77ecfefd6c6d7df8784cd522a288ea275208ce6c

docs: add docs for test strategy (#3144)

view details

Alex Potsides

commit sha ab3127fec367b15e319cb128804552cb67fff288

chore: upgrade go-ipfs-dep (#3135)

view details

Alex Potsides

commit sha 4309e1004bb77ee276b57228c35a921fb780a227

fix: error when no command specified (#3145)

view details

Alex Potsides

commit sha 4c0c67f023c75bbcb56b0520b31f1334480a5130

fix: unhandledpromiserejection in electron tests (#3146) When this method is async (without any actual async work) the 'should fail to publish if does not receive private key' test experiences an UnhandledPromiseRejection in electron, though the test still passes.

view details

dependabot-preview[bot]

commit sha 5fe9495b39af06819f5ad3fd8d50604fe7177ca8

chore(deps-dev): bump nock from 12.0.3 to 13.0.2 (#3136) Bumps [nock](https://github.com/nock/nock) from 12.0.3 to 13.0.2. - [Release notes](https://github.com/nock/nock/releases) - [Changelog](https://github.com/nock/nock/blob/main/CHANGELOG.md) - [Commits](https://github.com/nock/nock/compare/v12.0.3...v13.0.2) Signed-off-by: dependabot-preview[bot] <support@dependabot.com> Co-authored-by: dependabot-preview[bot] <27856297+dependabot-preview[bot]@users.noreply.github.com>

view details

Alex Potsides

commit sha c9700f78cefc523f6140361a90099c4991b427a7

fix: use post for preloading (#3149) As we don't accept get requests via the http api any more

view details

Alex Potsides

commit sha 335c13d529fc54e4610fc1aa03212126f43c63ec

fix: set error code correctly (#3150) Fixes a typo

view details

Irakli Gozalishvili

commit sha 58b8d2cd137d171468149ff3e3a626686e2c8e0c

Merge branch 'master' into blobity-blob

view details

push time in 2 days

issue comment ipfs/go-ipfs

localhost subdomains do not work on Firefox and Safari

Per @rafaelramalho19 it works in the default Windows browser

(screenshot)

Gozala

comment created time in 2 days

issue comment ipfs-shipyard/ipfs-webui

Safari Support

Do we know if this works in the default Windows browser? Can anyone with a Windows machine try this?

Gozala

comment created time in 2 days

issue opened ipfs-shipyard/ipfs-webui

Safari Support

https://webui.ipfs.io is unable to reach the IPFS HTTP API, because all requests are blocked as mixed content.

There is a known Safari bug report about allowing requests to loopback addresses: https://bugs.webkit.org/show_bug.cgi?id=171934

I'm not sure there is much we can do until that bug is resolved, but I want to have an issue so we can at least track this.

created time in 2 days

issue comment ipfs/go-ipfs

localhost subdomains do not work on Firefox and Safari

Found a corresponding Safari bug https://bugs.webkit.org/show_bug.cgi?id=160504 and posted a comment.

Gozala

comment created time in 2 days

issue comment ipfs/go-ipfs

localhost subdomains do not work on Firefox and Safari

Can anyone with a Windows machine check whether the default browser there handles this properly?

Gozala

comment created time in 2 days

push event Gozala/js-ipfs

Irakli Gozalishvili

commit sha e4020183cb2c66020cea687953f836cf98584ee5

fix: multipartRequest so body does not emit blobs

view details

push time in 2 days

issue opened ipfs/go-ipfs

localhost subdomains do not work on Firefox and Safari

Version information:

go-ipfs version: 0.6.0
Repo version: 10
System version: amd64/darwin
Golang version: go1.14.4

Description:

After the 0.6.0 release the local gateway started redirecting to localhost subdomains (guessing #651), e.g. going to the following address http://localhost:8080/ipfs/QmTQ1PTYhZNt9q7bJ1r6R18ctj6NoHTzvqdLgiq2UzRZHU/ redirects to http://bafybeicle2hoymo7rsqf7w5mjyssntevvwpylixlri63azrppm6lzv7vvm.ipfs.localhost:8080/ which fails in Firefox

(screenshot)

And in Safari

(screenshot)

Although it does work in Chrome.

As per @lidel

..this is known problem on some platforms with combo of strict DNS resolver and browser vendors who don't implement "let localhost be localhost" 6.3.3 @ https://tools.ietf.org/html/rfc6761#section-6.3 (upstream bug for Firefox: https://bugzilla.mozilla.org/show_bug.cgi?id=1220810) Until that is resolved, the localhost subdomain fix is to use go-ipfs' Gateway port as HTTP proxy for loading *.localhost websites – that way we avoid using OS DNS resolver that fails to resolve *.localhost hostnames (If you have IPFS Companion installed in Firefox, it will set up proper proxy automatically) (edited)

Despite the proxy solution, I think this is a major regression. People installing IPFS for the first time will not know about this, and even knowing it they may not want to use IPFS as a proxy (maybe they don't want to have IPFS running all the time, or have other reasons).

I think the only reasonable solution would be to not redirect until the known issues are resolved, or to only redirect requests coming from Chrome (until Firefox and Safari support this).

created time in 2 days

pull request comment ipfs/js-ipfs

feat: bufferring free multipart body encoder

Reminder to myself to include tests discussed in #3138 here

Gozala

comment created time in 2 days

push event Gozala/js-ipfs

Irakli Gozalishvili

commit sha 96866288211a83b37bd7915cd0659f08fc3bae93

fix: add support for String instances

view details

Irakli Gozalishvili

commit sha 15966093dc8107ccf852a1311d27773f56e591fe

fix: browser module paths overrides

view details

push time in 2 days

push event Gozala/js-ipfs

Irakli Gozalishvili

commit sha 3e7baf77995cebee59e19347519ef3482f54484e

feat: bufferring free multipart body encoder

view details

push time in 2 days

PR opened ipfs/js-ipfs

feat: bufferring free multipart body encoder

attempt to fix #3029

+1001 -302

0 comment

16 changed files

pr created time in 2 days

create branch Gozala/js-ipfs

branch : blobity-blob

created branch time in 2 days

issue comment ipfs/pinning-services-api-spec

Defining API access controls

I want to take a step back here. My primary motivation was the following:

  • Design a system where keys / tokens aren't shared across devices
    • So that metadata about pin / origin is preserved
    • So that access for a specific device can be revoked
  • Improve the UX of adding services
    • A user sharing a key with the service provides an (arguably) better flow than a user copying keys from the service to the client.
    • The user creates a pin space (key) where pins are added; that key can be shared with pinning services to improve availability
      • Going the other way round implies that a user wishing to pin to n services needs to actually perform the pinning action with n services.
        • IPFS could create a notion of pin service groups, but that feels like an afterthought
        • With client-issued keys, groups are just part of the design.

Most of our discussion ended up being about how to authenticate: secret tokens vs signing. I recognize that switching from token based authentication to a signing based one (as I have proposed) requires non-trivial changes to existing services and seems like a stretch.

Therefore I would like to decouple authentication from the original motivation: allowing the (service) client to generate the authentication token (or key) instead of making the (service) provider generate it, as that addresses some of my original motivation:

  • If the client generates the token / key, that leads to having different keys on different devices
    • Which leads to pin origin being captured
    • Access for a specific device can be revoked
    • Key / token rotation can be automated (e.g. the client could issue a request to switch to a new token / key)
  • Because the client generates the token / key, it can
    • Share that key with a service provider without manual steps (because the service provider has a public address)
    • The key becomes a "pin space" which can be bound to multiple services.
lidel

comment created time in 3 days

issue comment ipfs/pinning-services-api-spec

Defining API access controls

IIUC, there's nothing about the API_KEY that means it has to be copied from the service to the device and not the other way around.

If the service generates the API_KEY and not the client, then it has to be copied from the server to the client, otherwise how is the server going to let the client know? If the client generates the key, then it can pass the key to the server, because it knows the address for it.

lidel

comment created time in 3 days

issue comment ipfs/pinning-services-api-spec

Defining API access controls

  • Pinata's authentication method is currently based off of a "PUBLIC_KEY" / "PRIVATE_KEY" authentication. Essentially users just pass in these keys along with their pin requests and that's what authenticates them.

@obo20 Do PUBLIC_KEY, PRIVATE_KEY correspond to pinata_api_key and pinata_secret_api_key ?

  • Is it going to be tricky to allow users to add their own public key in https://pinata.cloud/account instead?
  • I am also guessing that the difference from the proposed model is that instead of passing the private key, requests would be signed with it, which implies:
    1. The server would have to verify the signature via the public key (included in the payload) instead of the current verification (is it just looking the key up in a DB and comparing now?)
    2. It would provide a better security model, as private keys would never leave the device.
  • Some services allow you to do "bucket" authentication with multiple API keys.

I am not sure if I'm overlooking something here, but it appears that the proposed method would work just as well here.

  • Other services utilize JWTs for authentication.

I don't see how JWTs are incompatible either. It's just that the signing key would be the private key the node generated, instead of one it got from the service.

Simply doing an API_KEY header does allow for a lot of flexibility, but it has the following trade-offs:

  1. The user has to copy & paste things from the service into the node config.
  2. It is too easy to end up sharing keys across devices.

Configuring services with keys from the client, on the other hand:

  1. Avoids copy & pasting things.
  2. Implies different keys on different devices, which
    • provides better security
    • keeps metadata of which device did what
  3. Provides a better UX metaphor
    • The user creates buckets (places) for pins
    • The user can associate multiple services (and/or devices) with the same bucket.
lidel

comment created time in 3 days

issue comment ipfs/js-ipfs-unixfs

Add support for Uint8Array's in place of node Buffer's to ipfs-unixfs-importer

@achingbrain is this the right type signature for the importer:

declare function importer(source:AsyncIterable<ImportEntry>, ipld:IPLDResolver, options?:ImporterOptions)

type ImportEntry =
  | ImportFile
  | ImportDir

type ImportFile = {
  path: string|void,
  content: Content,
  mtime?: MTime,
  mode?: Mode
}

type ImportDir = {
  path: string,
  content?: void,
  mtime?: MTime,
  mode?: Mode
}


type Content =
  | Iterable<string>
  | Iterable<ArrayBuffer>
  | Iterable<ArrayBufferView>
  | AsyncIterable<string>
  | AsyncIterable<ArrayBuffer>
  | AsyncIterable<ArrayBufferView>

type Mode = string | number
type MTime = number | Date | UnixFSTime | HRTime
type UnixFSTime = {
  secs: number,
  nsecs?: number 
}
type HRTime = [number, number]

type IPLDResolver = {
 // ...
}

type ImporterOptions = {
  // ...
}
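
Assuming the signature above is right, usage would look something along these lines (a sketch; an array stands in for the (async) iterable source):

  const source = [{ path: 'hello.txt', content: [new TextEncoder().encode('hello world')] }]
  for await (const { path, cid } of importer(source, ipld)) {
    console.log(path, cid.toString())
  }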
Gozala

comment created time in 3 days

issue comment ipfs/js-ipfs-unixfs

Add support for Uint8Array's in place of node Buffer's to ipfs-unixfs-importer

Actually I think I have overlooked the chunk validation phase, which seems to normalize each chunk:

https://github.com/ipfs/js-ipfs-unixfs/blob/8e8d83d757276be7e1cb2581abd4b562cb8209e2/packages/ipfs-unixfs-importer/src/dag-builder/validate-chunks.js#L8-L19

Although I'm also starting to suspect that ipfs.add([ new ArrayBuffer(1024) ]) would cause a "content was invalid" error, because ArrayBuffers seem to be passed through even though they do not have a .length property.
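
For illustration:

  const chunk = new ArrayBuffer(1024)
  chunk.byteLength // 1024, ArrayBuffers expose byteLength...
  chunk.length     // undefined, ...not length, so a .length check treats them as invalid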

Gozala

comment created time in 3 days

issue opened ipfs/js-ipfs-unixfs

Add support for Uint8Array's in place of node Buffer's to ipfs-unixfs-importer

I glanced through the implementation and it seems that the dependency is inherited from the use of the following libraries:

  • bl
  • https://github.com/multiformats/js-multihashing-async/issues/33

There are a few lines that use Buffer directly as well, but there a Uint8Array would have done just fine.

Actually removing Buffer here might be difficult; however, we could probably just remove it from the API surface and turn Uint8Arrays into Buffers internally where needed.
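
A minimal sketch of that internal conversion (a hypothetical helper, not actual library code):

  const asBuffer = (bytes) =>
    Buffer.isBuffer(bytes)
      ? bytes
      // zero-copy: create a Buffer view over the same underlying memory
      : Buffer.from(bytes.buffer, bytes.byteOffset, bytes.byteLength)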

created time in 3 days

issue comment tc39/proposal-record-tuple

Deeply immutable binary data

@littledan I think we might be talking past each other here, so I'll try to elaborate on the specific use case I had in mind. https://ipfs.io/ uses cryptographic content identifiers (CIDs) to address content in the network (spec, js implementation).

As the name implies, it is an identifier derived from the content (a hash), so by definition it is immutable. Being an identifier, it is used all over the place, and a handful of utility functions, e.g. toString(), are implemented with caching to avoid unnecessary computations. However, as I alluded to in the original comment, there is no good way to represent such CIDs in JS because:

  1. ArrayBuffers aren't immutable, even though CIDs make use of them as "immutable by agreement". That means they can get corrupted if the underlying data is mutated (intentionally or not).
  2. As we attempt to push things off to a worker thread, we're finding that "immutable by agreement" is really fragile, because if the ArrayBuffer is transferred, any "immutable by agreement" data structures (like CIDs) get corrupted as the buffer gets cleared on the main thread.
    • This can often happen by accident when a larger buffer is allocated and different "immutable by agreement" structures point to ranges within it. Some temporary structure may appear a good candidate for transfer, while the buffer it shares with other structures is overlooked.

Having some form of immutable shared array buffer could address the outlined problems in systems like ipfs, and in many other distributed / p2p systems for that matter.
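
A minimal illustration of the failure mode in point 2 (assuming worker is an existing Worker instance):

  const bytes = new Uint8Array([0x01, 0x70, 0x12, 0x20]) // pretend these bytes back a CID
  worker.postMessage(bytes.buffer, [bytes.buffer]) // transfer the underlying ArrayBuffer
  bytes.byteLength // 0, the buffer is detached and the "immutable" structure is corrupt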

Gozala

comment created time in 3 days

issue comment ipfs/pinning-services-api-spec

Consider GraphQL over REST API

Would be good to run this idea by (a) Pinning Services (b) go-ipfs/js-ipfs

I think there are some relevant questions asked here https://github.com/ipfs/pinning-services-api-spec/issues/2#issuecomment-652597003

Quick take: In this specific domain (simple CRUD for managing pins) I don't believe there will be use case for mixing operations (any examples?), but I agree that batching status checks would be useful (#1). Also, pagination is missing from v0.0.1 of the spec – filed #12 for tracking that separately.

Once a scheduler is introduced into the system, batching of queued operations becomes relevant. For instance, once a node goes offline it may accumulate a bunch of pin, delete, maybe even status requests, and the ability to fulfill them in a single request would be fairly convenient. However, if each operation allowed batching of sorts, the win would probably be negligible; the nice thing about GraphQL is that it's all composable, so you get batching for free along with the ability to query specific fields.

The old thread includes some feedback from @bonedaddy about representing this API over gRPC (Temporal looked into using go-swagger + grpc-gateway). I see people were successful with querying graphQL API over gRPC

I'm not familiar with gRPC, and at a glance I don't really see a reason for creating a gRPC layer just to add GraphQL over it; that is, unless the system already had a gRPC layer and GraphQL introduced a querying mechanism on top.

but it is hard for me to tell if it makes things easier or harder for non-HTTP representations – are there any case studies we could look at?

There are some case studies here: https://www.graphql.com/case-studies/ I also have good experience building both GraphQL servers and clients; I would not have raised this otherwise.

Could you elaborate on what you mean by "easier or harder for non-HTTP representations"?

GraphQL services are typically served over HTTP; that is not strictly necessary, but I suspect the pool of libraries to use with other transports is smaller.

Gozala

comment created time in 3 days

issue comment ipfs/pinning-services-api-spec

Defining API access controls

On second thought, I'm not even sure the CLI should open a browser tab; printing instructions would probably be enough. The Web UI, on the other hand, I think should do that.

lidel

comment created time in 3 days

issue comment ipfs/pinning-services-api-spec

Defining API access controls

@Gozala how would the authorization flow look from the CLI?

I imagine something along the lines of git remote, maybe ipfs service {name} {url}? E.g. running ipfs service add pinata https://pinata.cloud would:

  1. Associate the name pinata with the pinning service
  2. Open the system default browser with the URL https://pinata.cloud?authorize={pubKey}

The user can log in (or sign up, if not already registered) and authorize the owner of the corresponding private key to do the remote pinning. The service provider can decide what that UI should look like.

Just like with ipns, -k / --key could be passed to use a non-default peer ID key.

I imagine pinning service providers would also have a way in settings to paste a public key, without having to do anything from the CLI.

lidel

comment created time in 3 days

fork Gozala/it

A collection of utilities for making working with iterables more bearable

fork in 4 days

push event Gozala/js-ipfs

Irakli Gozalishvili

commit sha 7b2c7f23e3e8d7babf4fa3ee4eb5b64c52ba65d7

fix: link to core API docs Co-authored-by: Marcin Rataj <lidel@lidel.org>

view details

push time in 4 days

push event Gozala/js-ipfs

Irakli Gozalishvili

commit sha 574d30a70234ce160f426c547cc3778a198c5202

fix: typo Co-authored-by: Marcin Rataj <lidel@lidel.org>

view details

push time in 4 days

push event Gozala/js-ipfs

Irakli Gozalishvili

commit sha cbb3e5bc80aa65218c4e0733c3fd09b051a0ecec

fix: typo Co-authored-by: Marcin Rataj <lidel@lidel.org>

view details

push time in 4 days

push event Gozala/js-ipfs

Irakli Gozalishvili

commit sha dc5f10db700d77090b625e1b00cefa26fc2350fe

fix: link to the core APIs Co-authored-by: Marcin Rataj <lidel@lidel.org>

view details

push time in 4 days

Pull request review comment ipfs/js-ipfs

feat: share IPFS node between browser tabs

(review context: packages/ipfs-message-port-client README; reviewed line: const worker = new SharedWorker(IPFS_SERVER_URL))

It depends on how you bundle things, I'm afraid. I will add a section about it to the readme, because there's a lot to be said there.

Gozala

comment created time in 4 days

pull request comment ipfs/js-ipfs

feat: share IPFS node between browser tabs

Can you remove the test skips please? Master is a lot more stable than it was a couple of weeks ago.

To be clear, those changes were never meant for landing, but rather to get around intermittent failures in this pull.

Gozala

comment created time in 4 days

Pull request review comment ipfs/js-ipfs

feat: share IPFS node between browser tabs

(review context: packages/ipfs-message-port-server README; reviewed line: self.onconnect = ({ports}) => connections.push(...ports))

Oh, that was an error; it was meant to push into ports above. Thanks for catching it, I'll fix it.
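
For reference, the corrected snippet would presumably be:

  const ports = []
  // queue connections that occur while the node is starting
  self.onconnect = (event) => ports.push(...event.ports)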

Gozala

comment created time in 4 days

issue comment ipfs/js-ipfs

Quirks of `ipfs.add` API

If the input is an (async)iterable we could watch the type of the contents of the iterable and throw if it changes, or take some other action, but so far no-one has asked for this and if they did, we'd certainly recommend that they pass homogenous input instead.

My problem is that being able to assume homogenous input would have streamlined the logic. The current implementation happens to work even when inputs are not homogenous, and I'm guessing that is by chance, because of all the Buffer.from uses, which I am trying to rip out in the browser scenario.

If we're ok with changes that would assume homogenous inputs and start breaking when they are not, please let me know; in that case I would like to reflect that in both the code and the docs.

Gozala

comment created time in 7 days

issue comment ipfs/js-ipfs

Quirks of `ipfs.add` API

I'm not massively against this, it's just trivial to accomplish the same thing with:

  const added = await all(ipfs.add(blobs))

Or if you really just want an array of CIDs:

  const cids = await all(map(ipfs.add(blobs), added => added.cid))

It is trivial if you import a library, and it will be even more so once JS engines provide built-in equivalents... However, having to use a library just to get the result of an API call seems like incidental complexity.

To me this feels like an async function returning Array<Promise<T>> instead of Promise<T>: it is trivial to destructure and get the result you want, but then again, why not just return the actual thing?

Another sign of this complexity is that the collection can be empty, which should never happen here; but good APIs make impossible states impossible, and not considering the empty case is both discomforting & something that type checkers (if you use one) will point out every time.

Even if the implementation under the hood just did (input) => last(ipfs.addAll(input)), that would be a great improvement, because it:

  • Removes the burden on the user to pull in an extra library.
  • Makes pedants and type checkers happy by avoiding the empty collection case.
  • Allows an alternative, more optimal implementation without increasing the complexity of the already complex implementation.

The problem I'm hinting at is not that it is hard to get the result you want, but rather that the API is tailored to one specific use case, and all other (more common) use cases suffer that extra bit of added complexity (be it importing a library, doing another mapping, etc...).

It also shows in the implementation. In #3022 I ended up using different encoding strategies for each of these use cases; now I'm facing the very same issue in #3029, and the logic to differentiate between them is really complex. Implementation complexity alone would not be a good argument, however if you consider that users also need to do a little bit extra to go from async collections back to a single result (in use case 1) or a sync collection (in use case 2), I really don't get what the value is.

Gozala

comment created time in 7 days

issue comment ipfs/js-ipfs

Implementation bug in normaliseInput

I'm working on the patch that adds tests and fixes.

Gozala

comment created time in 7 days

issue comment ipfs/js-ipfs

Implementation bug in normaliseInput

Also, it appears that a parenthesis is in the wrong place here:

https://github.com/ipfs/js-ipfs/blob/8cb8c73037e44894d756b70f344b3282463206f9/packages/ipfs-core-utils/src/files/normalise-input.js#L52

Gozala

comment created time in 7 days

issue opened ipfs/js-ipfs

Implementation bug in normaliseInput

I am pretty sure the following lines would throw TypeError: iterator is not iterable, because a yield * expression expects an iterable and not an iterator.

https://github.com/ipfs/js-ipfs/blob/8cb8c73037e44894d756b70f344b3282463206f9/packages/ipfs-core-utils/src/files/normalise-input.js#L73-L82

https://github.com/ipfs/js-ipfs/blob/8cb8c73037e44894d756b70f344b3282463206f9/packages/ipfs-core-utils/src/files/normalise-input.js#L114-L122

https://github.com/ipfs/js-ipfs/blob/8cb8c73037e44894d756b70f344b3282463206f9/packages/ipfs-core-utils/src/files/normalise-input.js#L191-L199

It is also worrying that no tests seem to catch that.
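
A standalone illustration of the failure mode (not the actual normalise-input code):

  const iterator = {
    next () {
      return { done: true, value: undefined }
    }
  }

  function * broken () {
    yield * iterator // TypeError: iterator is not iterable
  }

  function * fixed () {
    // wrapping the iterator in an iterable makes it work
    yield * { [Symbol.iterator] () { return iterator } }
  }

  // [...broken()] throws, [...fixed()] yields []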

created time in 7 days

issue comment ipfs/js-ipfs

Quirks of `ipfs.add` API

I think ipfs.add tries to do several things under the same API:

  1. Add a single file to IPFS (I have a file, I want a CID back; no streams, no nothing).
  2. Add a batch of files to IPFS (I have these n files (synchronously) and I want n CIDs back).
  3. Import some files into IPFS (I want to import content as I crawl / traverse some source & don't want to let the importer drive the process).

I think the current API is great for case 3, but at the expense of 1 and 2. The shared code path also makes it really difficult to special-case and optimize cases 1 and 2, because the input needs to be probed, sometimes asynchronously, to decide the code path, so everything gets normalized into import-style input.

Gozala

comment created time in 7 days

issue opened ipfs/js-ipfs

Quirks of `ipfs.add` API

I'm working on #3029 and having a really hard time making changes to ipfs.add without changing its current behavior. In the process I'm discovering some quirks that I wanted to point out & hopefully find a way to address:

So it takes arbitrary input and normalizes it. However, the rules for single vs multiple files are of particular interest here. https://github.com/ipfs/js-ipfs/blob/8cb8c73037e44894d756b70f344b3282463206f9/packages/ipfs-core-utils/src/files/normalise-input.js#L30-L38

So all AsyncIterables represent input with multiple files, except if it is an iterator of ArrayBufferView|ArrayBuffer. This leads to some interesting questions:

  • Is this supposed to produce a single file with hi\nbye content, or two files?

    ipfs.add(asyncIterable([Buffer.from('hi\n'), 'bye']))

    According to the docs AsyncIterable<Bytes> is interpreted as a single file, while AsyncIterable<string> is interpreted as multiple files. However the implementation only checks the first chunk and the rest are just yielded as content, so I'm guessing it would produce a single file.

  • And how about if we switch those around?

    ipfs.add(asyncIterable(['hi\n', Buffer.from('bye')]))

    According to the documentation AsyncIterable<string> is interpreted as multiple files, however since only the first chunk is checked, I'm guessing this one would produce two files.

  • Even more interesting would be if we did this:

    ipfs.add(asyncIterable([Buffer.from('hi\n'), { content: 'bye' }]))

    Which would produce an error, although one might expect two files.

Maybe this is not as confusing as I find it to be, but I really wish there was a way to differentiate a multiple-file add from a single-file add, so the implementation would not need to probe async input to decide.

I think things would be so much simpler for both the user and the implementation if ipfs.add just always worked with multiples. That would mean the user would have to wrap Bytes|Blob|string|FileObject|AsyncIterable<Bytes> into an array [content], but it would make the API much cleaner; also AsyncIterable<*> as the result would make more sense, as it would be many to many.

Alternatively there could be separate ipfs.add and ipfs.addAll APIs to deal with single files and multiple files respectively. That would also reduce the complexity of adding a single file, which today looks like const cid = (await ipfs.add(buffer).next()).value.cid
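
For illustration, a sketch of how a single-file ipfs.add could be layered on top of the proposed ipfs.addAll (it-last is assumed for taking the last item of an async iterable):

  const last = require('it-last')

  // accepts a single file, resolves to a single result
  const add = (input, options) => last(ipfs.addAll([input], options))

  const { cid } = await add(new Blob(['hello']))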

created time in 7 days

PR opened ipfs/team-mgmt

Add notes for core impl weekly sync 2020-06-29
  • @vasco-santos
  • @jacobheun
  • @aschmahmann
  • @hugomrdias
  • @petar
+247 -0

0 comment

1 changed file

pr created time in 7 days

create branch ipfs/team-mgmt

branch : core-dev-2020-06-29

created branch time in 7 days

issue opened ipfs/pinning-services-api-spec

Pinning API over libp2p?

I think it is worth considering whether a pinning API over libp2p would be a better option than an HTTP based pinning API. Here are some of my thoughts on this subject:

Pinning API over libp2p

  1. 👍 This would essentially turn any IPFS node into a pinning service (if they choose to provide it). The symmetry here is really compelling, because I could authorize my phone to pin anything on my laptop.
  2. 👍 Transport agnostic.
  3. 👍 If I am asking to pin content from my laptop, with libp2p I already have an open channel with the node, so it could be leveraged to get that content from me.
  4. 👎 It's far more involved than a simple HTTP request, but then again if the assumption is that this is used from an IPFS node, this doesn't really change much.
  5. 👍 It would be trivial to subscribe to pin changes instead of having to poll for updates.

created time in 7 days

issue comment ipfs/pinning-services-api-spec

Multidevice use case

I think there is yet another reason to prefer device-level granularity over user-level. When you query for all user pins it would be useful:

  1. To query for pins for this device, or pins for other / all devices.
  2. In the web UI, to differentiate pins on my laptop vs pins on my phone (e.g. Dropbox already does something along those lines, although the semantics are different there).
Gozala

comment created time in 7 days

issue comment ipfs/pinning-services-api-spec

Pinning/Unpinning Policies

I think there is some overlap between this and issue #7. On the policies side, one thing I have been wondering about is what happens when some content in MFS is edited / deleted. Is previously pinned content going to get unpinned? I imagine a user wanting to keep it around as a backup, which gets us into the also-relevant questions of how long you want to keep such a backup, and how much of it.

aschmahmann

comment created time in 7 days

issue comment ipfs/pinning-services-api-spec

Defining API access controls

As I've brought up in #7, I think a 1:1 association of user to device is flawed. In fact a single device might operate multiple service "buckets", and a user might have multiple devices that operate a single bucket. I think it would make much more sense to rethink the whole flow: instead of copying endpoints + token into the WebUI and/or the IPFS CLI, it would make a lot more sense to navigate from those tools to the pinning service endpoint and perform the authorization there instead.

This would imply that instead of obtaining a token from the pinning service and configuring an application, the pinning service would need to be configured to authorize a key from the program. Turning things around removes complexity from the web-ui and leaves it up to the pinning service to choose the right complexity based on the service it provides (e.g. if the service has buckets it could provide a bucket selector; if it's a device in my closet it doesn't even need to authorize me, etc...)

lidel

comment created time in 8 days

issue comment ipfs/pinning-services-api-spec

Consider GraphQL over REST API

E.g. #1 is a perfect example of needing to batch, which GraphQL supports out of the box.

Gozala

comment created time in 8 days

issue comment ipfs/pinning-services-api-spec

Consider GraphQL over REST API

It is also worth considering how IPLD selectors fit in, or whether they could be a better solution than REST and/or GraphQL.

Gozala

comment created time in 8 days

issue opened ipfs/pinning-services-api-spec

Consider GraphQL over REST API

The simplicity of a REST API is great until:

  1. Responses are large enough that getting only subsets becomes important.
  2. Round-trips start to matter and some batching strategy is necessary.

Both lead to custom & non-composable solutions. For these reasons I would like to propose considering a https://graphql.org/ based API, because:

  1. You can query exactly what the application needs / expects.
  2. All operations are composable and can be bundled into a single request as necessary.
  3. There is great tooling available for it.
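
For example, fetching the status of many pins while selecting only the fields the application needs could be a single request (hypothetical schema, for illustration only):

  const response = await fetch('https://pinning-service.example/graphql', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      query: '{ pins(status: "queued") { cid status delegates } }'
    })
  })
  const { data } = await response.json()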

created time in 8 days

issue comment ipfs/pinning-services-api-spec

Multidevice use case

One extra thought that goes along with the general sentiment here:

  • Each node already has a peer ID backed by PKI.
  • Devices have unique IPFS nodes (two devices won't share the same peer ID, or they will run into problems).

It might be wise to embrace the existing PKI and sign API requests with the peer key. That would remove the need for secret tokens (that users need to enter), although users would need to authorize a specific IPFS node with a pinning service. That could be fairly simple: the webui or cli would just have to pass the peer ID to the pinning service endpoint, where the pinning service could perform the necessary authorization (if necessary), onboarding, etc...

Gozala

comment created time in 8 days

issue opened ipfs/pinning-services-api-spec

Multidevice use case

From what I understand, the API user token is currently going to be used to identify who requested a pin. However, a user may have multiple devices, and sharing the same token across those has a few problems:

  1. Pins could be added / removed from different devices, and who wins is unclear:

    • Device A pins CID-A.
    • Device B pins CID-A.
    • Device A unpins CID-A.

    Does that mean CID-A should be removed, or should it not be, because device B still holds a pin?

  2. If an access token is shared across multiple devices, it is impossible to audit which device added / removed pins, or to revoke access for one specific device.

For the above reasons I think it would be wise to move away from manual endpoint + token entry and instead perform a device link / unlink flow, similar to how e.g. Keybase does this. While under the hood that could still use tokens (although signing requests would be a better option IMO), it could provide a better solution for the problems listed above and a better UX, as described below:

  1. If each device pins / unpins with a unique token / key associated with it, then the pinning service can keep a pin as long as at least one authorized device still has an active pin. A service could still choose to implement an alternative policy, but it would have enough information to implement either.
  2. Since each device will have a unique token / key, it is possible to audit all the API calls and identify which device they came from. Additionally, revoking access from a lost or compromised device would not require re-authorizing all the other devices.
  3. Authorizing a device could have a much better UX that doesn't involve copy & pasting things in the webui (at least). Device authorization could be performed e.g. via a custom protocol handler that the webui could react to.

created time in 8 days

issue comment ipfs/js-ipfs

Running `lerna bootstrap` messes with repository and files with in it.

Do you still see this problem?

I think it's still there and can be reproduced by killing the process.

Gozala

comment created time in 8 days

issue comment ipfs/js-ipfs

Intermittent failures: returns 400 if no node is provided

Is this still a problem?

I'm not sure, since I have not had any pushes since. I think for all the intermittent failures we could just annotate the tests with pointers to issues and close the issues. If they show up again in CI it would be easy enough to reopen.

Gozala

comment created time in 8 days

pull request comment ipfs-shipyard/ipfs-webui

File upload without buffering

I did a bit more investigation with my node server, and it appears that:

Chrome uploads a 2 GiB file in 108237 chunks (~19 KiB per chunk); curl uploads the same file in 32945 chunks (~65 KiB per chunk).

This leads me to believe that the difference in chunking could explain the observed throughput difference. I am not aware of any way to control the chunking of an XHR upload, so I'm out of ideas.

Gozala

comment created time in 9 days

pull request comment ipfs-shipyard/ipfs-webui

File upload without buffering

Gozala

comment created time in 9 days

pull request comment ipfs-shipyard/ipfs-webui

File upload without buffering

Wrote a simple node server that just counts the number of bytes received and writes chunks back as { Bytes: bytes, Name: "2g.bloat" }; roughly the sketch below.
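
(A sketch of that server; the port is assumed:)

  const http = require('http')

  http.createServer((request, response) => {
    let bytes = 0
    request.on('data', (chunk) => {
      bytes += chunk.length
      // stream progress back, mimicking the ipfs add progress output
      response.write(JSON.stringify({ Bytes: bytes, Name: '2g.bloat' }) + '\n')
    })
    request.on('end', () => response.end())
  }).listen(5002)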

Which leads to upload times between 10s and 12s

(screenshot)

With that, 19s to do the actual IPFS add doesn't seem all that bad. I also see from the server output that the browser is streaming data in chunks; that being said, it is still unclear why it is taking this long.

Gozala

comment created time in 9 days

issue opened ipfs-shipyard/ipfs-webui

Replace constant stat polling with server-sent events

At the moment the webui is constantly polling the following HTTP API endpoints:

This has a few unfortunate consequences:

  1. It increases pressure on the network IO limits that browsers have.
  2. It increases overhead on the HTTP API of the IPFS node (draining battery).
  3. It harms the dev experience due to noise in the network panel.

Describe the solution you'd like

EventSource is a standard web API that provides an interface to server-sent events as a replacement for the long-polling pattern (one that browsers can optimize to discard consumed input).
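
Consumption on the webui side would then be as simple as (the endpoint name here is hypothetical):

  const source = new EventSource('http://127.0.0.1:5001/api/v0/stats/events')
  source.onmessage = (event) => {
    const stats = JSON.parse(event.data)
    // dispatch the fresh stats to the store instead of polling
  }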

Describe alternatives you've considered

Alternatively we could use a fetch body to read updates from a ReadableStream, but that is less widely supported than the EventSource API. Also, unlike EventSource, which is push based, ReadableStreams are push & pull, which could lead to buffering on the client if not actively consumed; that would not be the case with EventSource.

Additional context: I realize this conversation reaches beyond the webui into the IPFS HTTP API, but I'm going to file relevant issues there as well and interlink them.

created time in 9 days

push event Gozala/js-ipfs-lite-http-client

Irakli Gozalishvili

commit sha 66ef08a39aa8ba9bda4406d32583ec3c80850e8a

fix: multipart encoding and main module expose

view details

push time in 9 days

pull request comment ipfs-shipyard/ipfs-webui

File upload without buffering

Also checked with curl, which takes around ~3 secs of CPU time; that is the kind of throughput I would like to see with webUI.

time curl -X POST -F file=@2g.bloat http://127.0.0.1:5001/api/v0/add\?stream-channels\=true\&pin\=false\&progress\=true\&wrap-with-directory\=false
{"Name":"2g.bloat","Bytes":2147483648}
{"Name":"2g.bloat","Hash":"QmTbKT35oQzKgR34LUHaPdtrFrphjxiQJLrz6QMyU4kQNC","Size":"2147994442"}
curl -X POST -F file=@2g.bloat   0.60s user 2.92s system 12% cpu 28.167 total
Gozala

comment created time in 9 days

pull request comment ipfs-shipyard/ipfs-webui

File upload without buffering

The contrast with adding via the CLI is a massive extra 43 seconds:

time ipfs add 2g.bloat --quiet
QmTbKT35oQzKgR34LUHaPdtrFrphjxiQJLrz6QMyU4kQNC
ipfs add 2g.bloat --quiet  0.71s user 1.49s system 13% cpu 16.138 total

I do not know exactly what the problem is here, but clearly this needs investigation.

Gozala

comment created time in 9 days

pull request comment ipfs-shipyard/ipfs-webui

File upload without buffering

Here is the timing I observe for a 2 gig file add

[screenshot: upload timing from the network panel]

Which is ~45 seconds; that feels like a lot (especially when there is no visible UI feedback during all this time).

Gozala

comment created time in 9 days

pull request comment ipfs-shipyard/ipfs-webui

File upload without buffering

I can successfully upload files multiple gigs in size; it does take quite a bit of time and I see no progress reports. ipfs-lite-http-client does report progress updates and I can see those being called, however I can't seem to figure out where the dispatch of FILES_WRITE_UPDATED leads. I suspect the handler of that action doesn't get what it expects and that is why progress doesn't get reported till the very end. Help here would be welcome.
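For reference, the progress wiring amounts to roughly the following (the FILES_WRITE_UPDATED payload shape is my assumption):

// inside an async action handler
for await (const entry of ipfs.add(files, {
  // invoked by the client as bytes are uploaded
  progress: (bytes) => dispatch({ type: 'FILES_WRITE_UPDATED', payload: { progress: bytes } })
})) {
  // result entries arrive here once the add completes
}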

Per @olizilla, the import / add button on the files view listened for FILES_WRITE_UPDATED events to show a progress-bar-ish animation, but that no longer seems to be the case.

@rafaelramalho19 any idea what I am doing wrong in my pull request that causes the import view not to show until the upload is fully complete?

Gozala

comment created time in 9 days

started apprenticeharper/DeDRM_tools

started time in 9 days

PR opened ipfs-shipyard/ipfs-webui

File upload without buffering

This pull request explores ways to upload files without unnecessary buffering. High-level overview of the changes below:

  • ipfs.add has a complex API that can take an AsyncIterable of files with AsyncIterable contents. Optimizing such an API has several issues:

    1. Optimizing such an API is very complex, as all the current input normalization would have to be thrown out of the window and each input type would require a separate specialized code path. Simply put, it would increase implementation complexity quite a bit.
    2. Even if optimizations are made (despite the introduced complexity) it would be really easy to fall off the happy path by mixing in inputs that can't be optimized.

    For the above reasons this draft pulls in an experimental ipfs-lite-http-client providing much simpler ipfs.add functionality that only accepts inputs optimal for the web. It is not here to stay, but is here to make the case that the ipfs.add API is inadequate for use cases like this.

  1. The filesToStreams function used to turn a DOM File into a structure with its contents represented as a stream, which in turn then had to be buffered by ipfs-http-client. This function is gone now and File instances are used instead. However, I'm not confident there are no code paths that assume the old file-like structures instead of File instances.

  2. I can successfully upload files multiple gigs in size; it does take quite a bit of time and I see no progress reports. ipfs-lite-http-client does report progress updates and I can see those being called, however I can't seem to figure out where the dispatch of FILES_WRITE_UPDATED leads. I suspect the handler of that action doesn't get what it expects and that is why progress doesn't get reported till the very end. Help here would be welcome.

might fix #1529

+56 -262

0 comment

10 changed files

pr created time in 9 days

create branch Gozala/ipfs-webui

branch : web-files

created branch time in 9 days

push event Gozala/js-ipfs-lite-http-client

Irakli Gozalishvili

commit sha 20461b980ec9ec70016a100fe30a7a70695f23d9

remove js from the name

view details

push time in 10 days

started filecoin-project/slate

started time in 10 days

create branch Gozala/js-ipfs-lite-http-client

branch : default

created branch time in 10 days

issue comment tc39/proposal-record-tuple

Interaction with Structured Clone algorithm

@Gozala I imagine that's an implementation detail. Since records are immutable and can only hold primitive values, whether they're cloned or copied makes no difference as to how they're used.

@obedm503 I think there is a bit more nuance to this than that. While it may not affect how they are used, it would definitely affect what they can be used for.

To put that into more context let me offer two examples:

Performant data transfer across threads.

React previously explored the idea of moving its logic into web workers; however, that experiment failed because the overhead of structured cloning ended up making the strategy prohibitive.

At Mozilla we have explored an alternative approach that represented the virtual DOM and diffs to it as binary changelists encoded in an ArrayBuffer, which allowed transferring diffs across threads instead of copying and offered great performance and overhead reduction on the main thread. However, representing the DOM as binary has its drawbacks.
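Roughly, that technique looks like this (a simplified sketch; encodeDiff and applyChangelist are stand-ins, not actual APIs):

// worker side: encode the vdom diff into an ArrayBuffer and transfer it
const changelist = encodeDiff(previousTree, nextTree) // stand-in encoder
// listing the buffer as a transferable moves it across threads without a copy, detaching it here
postMessage(changelist, [changelist])

// main thread side: apply the changelist with no structured-clone cost
worker.onmessage = ({ data }) => applyChangelist(new DataView(data)) // stand-in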

That is to suggest that immutable records & tuples would have been a much better match than representing everything in a binary format, but only if they could be moved across threads without copying, because copying would make their use prohibitive due to the same performance overhead that led to the binary representation in the first place. I would also like to call out that in this specific case transferring ArrayBuffers worked fine because once the diff was computed the worker thread had no use for it, but that often is not the case, as my next example will attempt to illustrate.

Performant sharing across threads.

The https://ipfs.io project, and namely https://github.com/ipfs/js-ipfs, represents content (like files) via Merkle trees where each node is immutable (by agreement) and identified by its hash. It is also not uncommon to have shared nodes (yay structural sharing!).

I am leading an effort to move things off to a web worker; however, unlike in the previous example, data structures are put together on one thread but can't be fully transferred to the other because some nodes (the shared ones) may still be referenced. In fact JS (unlike, say, Rust) has no good solution here, as there is no way to tell which things are safe to transfer and which should be copied.

This is a great example of what immutable records and tuples could unlock if there were a way of sharing them across threads without copying; they would not really make a difference otherwise.


I hope these examples help illustrate that whether immutable records / tuples are sharable vs cloneable has profound implications for what they can and cannot be used for.

guybedford

comment created time in 15 days

created repository Gozala/js-ipfs-lite-http-client

JS lightweight HTTP client library for IPFS

created time in 15 days

fork Gozala/ipfs-webui

A frontend for an IPFS node.

https://webui.ipfs.io

fork in 15 days

issue comment ipfs-shipyard/ipfs-webui

Improve file add so it can handle gigs of data

@Gozala I assigned you! Do you mind adding labels as appropriate?

@jessicaschilling I'm afraid I don't have the access rights to do that either. Also I'd need some guidance on appropriate labeling.

Gozala

comment created time in 15 days

issue comment ipfs-shipyard/ipfs-webui

Improve file add so it can handle gigs of data

Some notes (mostly for myself). Here's where dropped files are handled

https://github.com/ipfs-shipyard/ipfs-webui/blob/be316e50353d378778a610958af6643ff835946a/src/App.js#L109-L116

Which invokes addFiles

https://github.com/ipfs-shipyard/ipfs-webui/blob/be316e50353d378778a610958af6643ff835946a/src/App.js#L42-L53

Where the array of web Files is converted into an array of entries whose content fields are turned into streams of file contents. (This is where we fall off the optimal path and end up having to buffer.)

https://github.com/ipfs-shipyard/ipfs-webui/blob/be316e50353d378778a610958af6643ff835946a/src/lib/files.js#L4-L18

And then doFilesWrite uses ipfs.add to get those into IPFS via ipfs-http-client

https://github.com/ipfs-shipyard/ipfs-webui/blob/be316e50353d378778a610958af6643ff835946a/src/bundles/files/actions.js#L170-L207
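By contrast, the optimal path would hand the browser the File objects directly and let it stream the multipart body itself. A rough sketch (the endpoint and query params mirror the curl example shown above; this is not the current code):

const addViaFormData = async (files) => {
  const body = new FormData()
  for (const file of files) {
    body.append('file', file, file.name) // the browser streams File contents itself
  }
  // no manual buffering; the multipart encoding is handled natively
  return fetch('http://127.0.0.1:5001/api/v0/add?stream-channels=true&pin=false', {
    method: 'POST',
    body
  })
}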

Gozala

comment created time in 15 days

issue comment ipfs-shipyard/ipfs-webui

Improve file add so it can handle gigs of data

I don't seem to have the right privileges to assign it to myself, but I do intend to drive this.

Gozala

comment created time in 15 days

issue opened ipfs-shipyard/ipfs-webui

Improve file add so it can handle gigs of data

This is related to https://github.com/ipfs/js-ipfs/issues/3029; however, changes on the webui end would also be required, so this is the placeholder issue for that piece of work.

created time in 15 days

issue comment ipfs/js-ipfs

Embracing web native FormData / File where possible

I will try to drive this effort as my time allows (right now I'm blocked on reviews). With that, I intend to do the following:

  1. Add this item to the agenda of the IPFS Core Implementations Weekly meeting.
  2. Schedule a Design Review Proposal
    • We need a decision on what changes, if any, could be made to the HTTP API to make it compatible with browser-native FormData.
    • Get a consensus on how ipfs.add / ipfs.file.write should behave here.
    • How do we test this? I'm hesitant to add 4 gigs of data to our tests.

As with #3022 I remain very skeptical of keeping the ipfs.add API as is in the hope that the provided input will allow for the optimizations described. That is because it is just too easy to pass an input that defeats them.

Gozala

comment created time in 15 days

pull request comment ipld/js-ipld-dag-pb

WIP: backwards compatible pure data model API

@achingbrain I believe I've addressed all your comments and the comments from @rvagg. @vmx I hope you agree with my arguments for decoupling this from the switch to https://github.com/mapbox/pbf; I do not know what errors may arise from that change & I do not want to block my primary work on it.

Please let me know if there is anything else to be done, or if this can land.

Thanks

Gozala

comment created time in 15 days

Pull request review comment ipld/js-ipld-dag-pb

WIP: backwards compatible pure data model API

 module.exports = (repo) => {
       })

       const node2 = new DAGNode(someData, l2)
-      expect(node2.Links).to.eql([l1[1], l1[0]])
+      expect(node2.Links).to.containSubset([l1[1], l1[0]])

https://github.com/moxystudio/js-class-is/issues/27

Gozala

comment created time in 15 days

issue opened moxystudio/js-class-is

[Symbol.toStringTag] getter seems to interfere with chai's deep.equal check

As per https://github.com/ipld/js-ipld-dag-pb/pull/184#discussion_r436880297, it appears that chai's deep equality check fails against a duck-typed counterpart of an instance because of the [Symbol.toStringTag] getter this library adds.

I have created a pull request to illustrate this: https://github.com/moxystudio/js-class-is/pull/26. Although this library uses jest instead of chai, the same issue appears to manifest. An unwrapped class does not seem to exhibit this behavior.
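The failing expectation boils down to something like this (the class and its fields are illustrative; chai's expect is assumed in scope):

const withIs = require('class-is')

const Point = withIs(class {
  constructor (x, y) { this.x = x; this.y = y }
}, { className: 'Point', symbolName: '@example/point' })

// fails: the [Symbol.toStringTag] getter added by class-is makes chai
// consider the instance and the plain object to be of different types
expect(new Point(1, 2)).to.deep.equal({ x: 1, y: 2 })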

created time in 15 days

PR opened moxystudio/js-class-is

Failing test

As per @achingbrain in https://github.com/ipld/js-ipld-dag-pb/pull/184#discussion_r436880297, under chai's deep.equal check an instance created via class-is should match its duck-typed counterpart. It appears that the [Symbol.toStringTag] field prevents that from happening. This pull request adds a failing test that illustrates the issue.

+7 -0

0 comment

1 changed file

pr created time in 15 days

create branch Gozala/js-class-is

branch : deep-quality

created branch time in 15 days

fork Gozala/js-class-is

Enhances a JavaScript class by adding an is<Class> property to compare types between realms.

fork in 15 days

push event Gozala/js-ipld-dag-pb

Irakli Gozalishvili

commit sha a8f448890311beaf4b86c441b2ab5fffdda43c9c

chore: update protons to release with Uint8Array

view details

Irakli Gozalishvili

commit sha 541faea061b23d009be43619885554757311402d

chore: change formatting as per review feedback

view details

push time in 15 days

issue comment ipfs/js-ipfs

Embracing web native FormData / File where possible

Breaking changes to HTTP API /api/v0 are something we should avoid, if possible. (People still struggle with switch to POST-only API, slowing down lib upgrades etc)

If support for opt-in mode and mtime are the only reason to change HTTP API, I'd go as far as to say that we could simply not support those in browser context and nobody would care. AFAIK it is a niche feature used in CLI/daemon contexts.

I should point out that the changes I mention are not breaking changes; they would just allow passing those options in an alternative way. If users choose not to pass them, or pass them in the old fashion, it should still work.

Gozala

comment created time in 16 days

push event Gozala/js-ipfs

Irakli Gozalishvili

commit sha 98db58b294d94dac6d6bb81586c25198e817292e

fix: regression introduced by c487207

view details

push time in 17 days

issue comment whatwg/html

Structured cloning of Error .name

Just to clarify: originally I was proposing that derived error types be treated differently, but I have since recognized @domenic's argument for not doing that. I have since attempted to shift the discussion towards just treating the name property the same way message is treated, regardless of class hierarchy, my arguments being:

  1. That would enable a fairly common pattern in JS to be used (the only other alternative would be to rely on messages)
  2. name is a mutable property; in fact other methods even respect its mutation (so it's unclear why it needs to be normalized; for contrast, see the sketch after this list)
    e = Object.assign(new Error('boom!'), {name:'Boom'})
    e.toString() // Boom: boom!
    
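And for contrast, a sketch of what the current algorithm does to that same error when it crosses a clone boundary (using the structuredClone global for brevity):

const e = Object.assign(new Error('boom!'), { name: 'Boom' })
const clone = structuredClone(e)
clone.message // 'boom!' (carried over)
clone.name    // 'Error' (unrecognized names are normalized away)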
Gozala

comment created time in 17 days
