Jimmy Wärting jimmywarting Sweden http://jimmy.warting.se Do most Node.js developing nowadays

eligrey/FileSaver.js 13830

An HTML5 saveAs() FileSaver implementation

dfahlander/Dexie.js 5069

A Minimalistic Wrapper for IndexedDB

cfinke/Typo.js 371

A client-side JavaScript spellchecker that uses Hunspell-style dictionaries.

jhiesey/videostream 177

Play html5 video from a file-like object

jhiesey/range-slice-stream 2

Get parts of a stream

jimmywarting/abortcontroller 1

AbortController for node

jimmywarting/33-js-concepts 0

📜 33 concepts every JavaScript developer should know.

jimmywarting/abortcontroller-polyfill 0

Browser polyfill for the AbortController DOM API (stub that calls catch, doesn't actually abort request).

jimmywarting/acorn-node 0

the acorn javascript parser, preloaded with plugins for syntax parity with recent node versions

issue comment jimmywarting/StreamSaver.js

iOS saves any file as .html

dc0c9da may have solved the issue for iOS but brought another issue: it now uses memory to build blobs.

We could maybe still use the streaming approach but change the way things are triggered. Perhaps instead of triggering the download with an iframe we could simply open the temporary link or change some headers. But over the years with StreamSaver I have come to learn that iframes are the better way to trigger downloads - I just can't remember what the reasons were.

prodigy2m

comment created time in 16 hours

issue opened jimmywarting/StreamSaver.js

Other ways to build blobs.

Do some of you remember the BlobBuilder, where you could append chunks a bit at a time? It might have been better back when you wanted to build large Blobs, but it was replaced by the Blob constructor for some reason.
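For what it's worth, the old append pattern can still be emulated on top of the Blob constructor, since blobs can be composed of other blobs without eagerly copying everything (a sketch; `createBlobBuilder` is a made-up helper, and whether the parts stay in memory or on disk is up to the engine):

```javascript
// Sketch of the old BlobBuilder append pattern on top of the Blob
// constructor. Each append wraps the previous blob rather than copying
// its bytes up front, so the engine decides where the parts live.
// `createBlobBuilder` is a made-up name for illustration.
function createBlobBuilder () {
  let blob = new Blob([])
  return {
    append (chunk) {
      // the Blob constructor accepts other blobs, strings and BufferSources
      blob = new Blob([blob, chunk])
    },
    getBlob (type = '') {
      return new Blob([blob], { type })
    }
  }
}

// usage
const builder = createBlobBuilder()
builder.append('abc')
builder.append(new Uint8Array([100])) // "d"
const blob = builder.getBlob('text/plain') // 4 bytes
```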


Here is a document describing how blobs work in older Chrome versions (don't know how outdated it is): https://docs.google.com/presentation/d/1MOm-8kacXAon1L2tF6VthesNjXgx0fp5AP17L7XDPSM/edit

I wrote an answer on Stack Overflow about how you could potentially write a large blob with pointers https://stackoverflow.com/questions/39253244/large-blob-file-in-javascript - meaning you write chunks to IndexedDB and then assemble all the chunks into one large one.

I then later wrote a thing (PR #18) that would cache all chunks into IndexedDB and do all of this, but it later got abandoned for some reason. Maybe I didn't want to make StreamSaver more hacky than it already is.

IndexedDB isn't the nicest or the fastest to work with.


Now I have two other theories of how you can build large blobs without using much memory.

The first (and simpler) one is that if you fetch something and call response.blob(), you wouldn't necessarily have to have everything in memory - it could just as well be a pointer to a temporary file on the disk if it is very large.

It all started from this question https://stackoverflow.com/questions/16846382/memory-usage-with-xhr-blob-responsetype-chrome (but yet again it's about Chrome - not Safari)

Now Safari has support for fetch + ReadableStream, so you could do something like this:

var rs = new ReadableStream({
  start (ctrl) {
    ctrl.enqueue(new Uint8Array([97,98,99])) // abc
    ctrl.close()
  }
})
var blob = await new Response(rs).blob()

Could this be a way to offload some of the memory to a temporary place on the disk? I don't know.

Now if that does not solve it, what about using the Cache storage?

// Cache storage, second best storage for files (using request/response)
var temp = await caches.open('tmp')
var rs = new ReadableStream({
  start (ctrl) {
    ctrl.enqueue(new Uint8Array([97,98,99])) // abc
    ctrl.close()
  }
})
var req = new Request('filename.txt')
var res = new Response(rs)

// Save it to the cache
temp.put(req, res).then(async () => {
  // done saving
  var res = await temp.match(req)
  var blob = await res.blob()
}, error => {
  // how to recover from this?
})

Will this do it? maybe, maybe not.

The second approach has two caveats: 1) browsers limit how much you can store. 2) how would you recover if something fails?

Other resources suggest that the OS may page memory to disk when memory runs out.

Paging is a method of writing and reading data from secondary storage (drive) for use in primary storage (RAM). When a computer runs out of RAM, the operating system (OS) will move pages of memory over to the computer's hard disk to free up RAM for other processes.

So is it really something we have to worry about? Guess we need to test with really large data before trying to implement something. I know for a fact that my macOS is paging memory, so I may not be able to crash the browser with lots of memory. The only way to find out what works best is to test things.

created time in 17 hours

issue comment jimmywarting/StreamSaver.js

iOS saves any file as .html

2.0.4 released

prodigy2m

comment created time in 18 hours

push event jimmywarting/StreamSaver.js

Jimmy Wärting

commit sha 6d0bca4d6854913565c30ec1648a92ecb9ad56a4

Update README.md


Jimmy Wärting

commit sha bb27cd3ea6a2a2e1e94f1026268e4ae6ae44b66c

Update README.md


Jimmy Wärting

commit sha 4795cc5a5935bec4c511c992453fa1bf9c95beae

Update README.md


Jimmy Wärting

commit sha fc64085fc5d9848ad7b09cd01ac1c9dc9227844b

Update saving-multiple-files.html


Jimmy Wärting

commit sha 3a041ab2ab57520331ad828f6a5835b984162a1e

Update README.md


Jimmy Wärting

commit sha ff0ddc3e84a7e5ad78cd53d8ad58d7101bb44894

Create FUNDING.yml


Jimmy Wärting

commit sha 6f23e519bdbe42e234ac04050dc4ee5c423a5e2f

Update FUNDING.yml


Robert Pethick

commit sha 9c172bd78a13ebc690f42c6b72b4e966d4e71aff

Spelling fix (#110)


Oreki S.H

commit sha 314e64b8984484a3e8d39822c9b86a345eb36454

mv var out of if statement (#116) * mv var out of if statement * mv var out of if statement


Jimmy Wärting

commit sha 8028114b235061c683341e6b4bb326d8b594361d

don't use leading slash in names windows gets confused. #119


Jimmy Wärting

commit sha ea1560e37edce447925081e6276a44549f79744a

added a title


Jimmy Wärting

commit sha 81ce06b6d9376ebed0feb0c88173f1b3a137f80f

Delete FUNDING.yml


Jimmy Wärting

commit sha a1dbb4c753a1fdb2aa054a52fc094b86ed4c7415

Create FUNDING.yml


Jimmy Wärting

commit sha 56d2f876c3d16b541f4faf42087ea17392fe5737

Delete FUNDING.yml


Jimmy Wärting

commit sha 95eac8f1a8949d7303a1858707a017b602cd36f7

Create FUNDING.yml


Jimmy Wärting

commit sha 5cc924c199fc6bb5f26a051d46f8da90eab9e297

Delete FUNDING.yml


Jimmy Wärting

commit sha b3263aca0ed90632ed8f331970f8ddd2c5a68eb4

Create FUNDING.yml


Jimmy Wärting

commit sha aaf2301af41ef14371b80d62c2adaf076d5878cb

Update FUNDING.yml


Jimmy Wärting

commit sha 2f6c53eb63adbed503651341be2987dd1cab53b1

Merge branch 'gh-pages' into master


Jimmy Wärting

commit sha e84292218500e81751163405f5387b57cf7c2d39

Update media-stream.html closes #143


push time in 18 hours

created tag jimmywarting/StreamSaver.js

tag 2.0.4

StreamSaver writes streams to the filesystem directly and asynchronously

created time in 18 hours

release jimmywarting/StreamSaver.js

2.0.4

released time in 18 hours

push event jimmywarting/StreamSaver.js

Jimmy Wärting

commit sha a72227c1950dcc88088aebcf9e6b664ae81992ea

bump minor (use blob fallback)


push time in 18 hours

issue closed jimmywarting/StreamSaver.js

iOS saves any file as .html

I've tried everything, but even all of the samples you guys provide save any file as test.txt.html, even though the file has been transferred. So if you rename the file it works.

closed time in 18 hours

prodigy2m

issue comment jimmywarting/StreamSaver.js

iOS saves any file as .html

closed via dc0c9da

prodigy2m

comment created time in 18 hours

push event jimmywarting/StreamSaver.js

Jimmy Wärting

commit sha dc0c9da642eeb5c3aa53a63b9e015dbed54ab67f

Use blob fallback on safari on iOS closes #135


push time in 18 hours

PR closed jimmywarting/StreamSaver.js

support ios13
+2 -1

1 comment

1 changed file

AmilKey

pr closed time in 18 hours

issue opened sidorares/node-mysql2

Replace promise wrapper & callbacks for native async/await?

it's not an issue with the current promise wrapper... But to me it seems like it just adds overhead/complexity to the architecture to mix both callbacks and promises - now, I haven't looked much at the source code yet to have any say in it.

maybe it's just too large/complex to make any changes, or maybe someone is using the extra callback parameters. Maybe it's due to some lower-level API that makes reading streams not doable, or you are supporting older Node versions? Or someone is using their own custom promise lib like Bluebird, or you simply use a dependency that only works with callbacks.

And changing it could be a breaking change.

there could be many reasons for closing this issue since it's not really an issue. But maybe this could be a long-term goal?

if it's reading packets from a stream and adding listeners for when you receive data, maybe a replacement could be the new asyncIterator that's available in newer Node versions?

I could imagine a lower-level API working something like this:

const connection = connect({ config })
const iterator = connection[Symbol.asyncIterator]()

async function query (query, params) {
  ...
  connection.write( build(query, params) )
  const nextPackage = await iterator.next()
  const result = parsePackage( nextPackage )
  ...
  return result
}

created time in 2 days

pull request comment webtorrent/create-torrent

Replace dead trackers

I don't feel comfortable having only 1 wss tracker

btorrent and openwebtorrent are perhaps a bit unstable with many connections, but I can still connect to them from time to time. Besides, it doesn't hurt to have "dead" trackers in there.

fastcast was down at the moment.

mikedamm

comment created time in 2 days

issue comment jimmywarting/StreamSaver.js

iOS saves any file as .html

could you tell me how the file is downloaded? there are two ways it could have been saved - either through streams, or by generating a blob and then saving it

prodigy2m

comment created time in 3 days

issue comment SheetJS/sheetjs

Support for spreadsheets with images

Too bad. Guess my choice will be ExcelJS then...

ovari

comment created time in 3 days

issue comment SheetJS/sheetjs

Support for spreadsheets with images

ExcelJS, Google Sheets and macOS Numbers all support images in cells

ovari

comment created time in 3 days

issue comment exceljs/exceljs

Using exceljs in a browser

ExcelJS seems fine and dandy - it can parse & generate Excel and other stuff - but is there any visual editor that allows visitors to edit files? Similar to Google Sheets or x-spreadsheet?

x-spreadsheet didn't seem to support images in cells https://github.com/myliang/x-spreadsheet/issues/223

BeginnerJS

comment created time in 3 days

issue comment paulhodel/jexcel

feature request: numeric input as <input type='number'>

+1 for number input

daonsh

comment created time in 3 days

issue comment paulhodel/jexcel

About the date picker. Is it possible to make the week start on Monday?

Would be nice with a native picker such as <input type="date" placeholder="yyyy-mm-dd"> that is localized to the user's own preferences

also for time and datetime-local

agradecido

comment created time in 3 days

started sveltejs/svelte

started time in 3 days

issue comment mholt/PapaParse

Replace streams/callbacks with async iterator

to decode Uint8Array chunks you could use

str = new TextDecoder().decode(uint8, {stream: !done})
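Spelled out a bit more (a sketch; `decodeChunks` is a made-up helper): the `{ stream: true }` flag keeps incomplete multi-byte sequences buffered between chunks, and a final argument-less `decode()` flushes the tail:

```javascript
// Sketch: decode Uint8Array chunks without splitting multi-byte
// characters. { stream: true } keeps incomplete sequences buffered
// until the next chunk; decode() with no argument flushes at the end.
function decodeChunks (chunks) {
  const decoder = new TextDecoder()
  let str = ''
  for (const uint8 of chunks) {
    str += decoder.decode(uint8, { stream: true })
  }
  return str + decoder.decode()
}

// "€" (0xE2 0x82 0xAC) split across two chunks still decodes correctly
decodeChunks([new Uint8Array([0xE2]), new Uint8Array([0x82, 0xAC])]) // '€'
```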
jimmywarting

comment created time in 6 days

issue comment calvinmetcalf/async-iter-stream

probably should deprecate this

jimmywarting

comment created time in 6 days


issue opened calvinmetcalf/async-iter-stream

probably should deprecate this

node streams are async iterable so...

created time in 6 days

issue opened Medable/mdctl

.

async-iter-stream isn't used. so you can remove that...

https://github.com/Medable/mdctl/blob/032d159a2f1e055e266f4936491ed95c3567de39/packages/mdctl-api-driver/package.json#L32-L37

node streams are async iterable, so no point in using it either

created time in 6 days

issue comment mholt/PapaParse

Replace streams/callbacks with async iterator

as for the csv builder

const iterator = new papa.Builder({
  meta: { header, delimiter, etc },
  data: [].values()
})

// browser
const stream = stream.fromIterator(iterator)
const blob = await new Response(stream).blob()

// node
stream.pipeTo(fs.createWriteStream(path))

jimmywarting

comment created time in 6 days

issue comment mholt/PapaParse

Replace streams/callbacks with async iterator

Another idea would be to have something that returns everything also.

class Parser {
  async values () {
    // return all rows
  }
  async * iterator () {
    // yield rows
  }
  [Symbol.asyncIterator]() {
    return this.iterator()
  }
}

new Papa.Parser(iterable[, config]).values().then(console.log, console.error)
jimmywarting

comment created time in 6 days

issue opened mholt/PapaParse

Replace streams/callbacks with async iterator

I see you are trying to be compatible with both Node and the browser... But to be better at it, I think you should remove all Node- and browser-specific code, making it more compatible (by using the same methods everywhere) and also smaller in size.

Meaning: getting rid of the following:

  • Node streams (pipe's)
  • FileReader and Blobs
  • And even the download url compatibility.

Browsers have new streaming capabilities (namely whatwg streams). They are somewhat similar to Node streams if you choose to iterate over them using @@asyncIterator, in that both will yield Uint8Arrays.

the new api would look something like this:

// somewhere in papa parser
Papa.Parser = class Parser {
  constructor (data) {
    this.data = data
  }

  async * values () {
    let rest = new Uint8Array()
    for await (let uint8 of this.data) {
      // 1) parse previous and current chunk
      const { rows, remaining } = parse(rest, uint8)
      rest = remaining
      // yield all parsed rows to flush data
      yield * rows
    }
  }

  [Symbol.asyncIterator]() {
    return this.values()
  }
}

// these are a few ways you could get an async iterator
async function * iterator () {
  yield uint8array
}
var iterable = require('fs').createReadStream(path)
var iterable = new Response(csvData).body
var iterable = blob.stream()
var iterable = iterator()

const parser = new Papa.Parser(iterable[, config])

for await (const row of parser) {
  console.log(row)
}

created time in 6 days

issue comment jimmywarting/StreamSaver.js

Cannot abort a streamsaver WriteStream

// don't
ws = new WritableStream()
writer = ws.getWriter() // locks the stream
ws.abort() // Failed to execute 'abort' on 'WritableStream': Cannot abort a locked stream

// do
ws = new WritableStream()
writer = ws.getWriter() // locks the stream
writer.abort()

// or
ws = new WritableStream()
writer = ws.getWriter() // locks the stream
writer.releaseLock() // releases the lock
ws.abort()

as for the browser UI when they cancel: it doesn't properly propagate back to the service worker when it's aborted. so yea, it's related to #13

if you provide them with an abort button then you could perhaps cancel both the fetch and the writable stream with AbortController/AbortSignal
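A sketch of that wiring (assuming whatwg streams; `pipeWithCancel` is a made-up helper - in StreamSaver terms `destination` would be `streamSaver.createWriteStream(...)` and `source` would be `response.body` from a fetch given the same signal):

```javascript
// Sketch: one AbortController cancelling a pipeTo. Passing the same
// signal to fetch(url, { signal }) would cancel the request too.
function pipeWithCancel (source, destination) {
  const controller = new AbortController()
  const done = source.pipeTo(destination, { signal: controller.signal })
  return { done, cancel: () => controller.abort() }
}

// wire `cancel` to your abort button; `done` rejects with an AbortError
```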

another (perhaps easier) solution: have you tried just using a link <a download="filename" href="url">download</a> instead of emulating what a server does with StreamSaver? just provide a Content-Disposition attachment response header from the backend.

jat255

comment created time in 8 days

issue comment jimmywarting/StreamSaver.js

iOS saves any file as .html

A tiny bit confused about what you are saying...

Safari desktop works cuz it builds up the file in memory and then downloads the file, due to this condition being true:

let useBlobFallback = /constructor/i.test(window.HTMLElement) || !!window.safari

just tested it on my Mac with the newest Safari. Service workers are not involved at all in Safari on desktop...


iOS may be a different story. I'm not sure if it's using the service worker or not anymore. /constructor/i.test(window.HTMLElement) is a legacy thing that Safari has removed. And I believe window.safari is only available on desktop, cuz that's where the WebKit push notifications live. My guess is that iOS doesn't have WebKit push and window.safari, so the condition useBlobFallback must be false? A simple test to see which one it's using could be to paste this into the URL bar:

javascript:alert(/constructor/i.test(window.HTMLElement)||!!window.safari)

iOS Safari gives that extension problem but the file is correct. We renamed files that were mp4 as well as pngs; all worked after renaming

that's odd. I thought iOS Safari had the same problem as desktop Safari (that it couldn't download content being generated by service workers), if it's now the case that it's using the service worker after all.

https://bugs.webkit.org/show_bug.cgi?id=182848

I'm trying to save a ReadableStream with Service Worker and Content-Disposition attachment header but it's not working, is this really fixed, patched and released yet?

This patch is not fully fixing the issue, it only allows the network process to redo the download by triggering another network request. Any service worker generated content will not be downloaded.

That's the reason why I think it's odd... and that's the reason why Safari must use the blob fallback and consume lots of memory until it's finished.


So to even tackle this and possibly resolve it, I first need to know:

  1. is it using the service worker or the blob fallback on iOS?
  2. if it is using service workers, then: does it trigger another network request? I presume not, since you are saying you got the right content and could use the file by just renaming it. So it must have been able to download things generated by a service worker.

(...I'm so confused right now)

I think the reason why .html was added is b/c it used the iframe to download the content - hence it must also have used the service worker instead of the blob fallback (<a download="name.mp4" href="url">), cuz I'm pretty sure a[download] would save the file with the right filename. but I could be wrong too.

I'm sorry I can't help you debug this and fix it. The only thing I can do is help you troubleshoot it and hope that you submit a working PR.

prodigy2m

comment created time in 12 days

issue comment jimmywarting/StreamSaver.js

iOS saves any file as .html

also, I don't think it would be as simple as renaming the file. if you look at the file content it may be a totally different thing. (webkit switches modes and just downloads the url as if the service worker wasn't involved, which results in a 404 file)

prodigy2m

comment created time in 12 days

issue comment jimmywarting/StreamSaver.js

iOS saves any file as .html

I don't have an iOS device - so it's hard for me to test.

There are two ways to download things in StreamSaver:

  • Downloading using iframes + a service worker with a Content-Disposition attachment header
  • And building a blob all in memory and using a[download] href[bloburl] as a fallback

.html is probably added since it's using the first option... since Safari has a problem downloading content generated by the service worker (#69), it should fall back to the second option but probably doesn't. (related to #135)

I wanted to try and make these changes to StreamSaver:

- let useBlobFallback = /constructor/i.test(window.HTMLElement) || !!window.safari
+ let useBlobFallback = /constructor/i.test(window.HTMLElement) || !!window.safari || !!window.WebKitPoint

...instead of user agent sniffing. but I don't know if it would work for all browsers on iOS

prodigy2m

comment created time in 12 days

issue closed jimmywarting/StreamSaver.js

some doubts about the configuration part of readme.md

Is this here to configure compatibility? Should the code be:

streamSaver.WritableStream = WebStreamsPolyfill.WritableStream;
streamSaver.TransformStream = WebStreamsPolyfill.TransformStream

closed time in 15 days

hushiyun1994

issue comment jimmywarting/StreamSaver.js

some doubts about the configuration part of readme.md

if you load the web streams polyfill in any other way than the CDN, then yes.

if you include this

<script src="https://cdn.jsdelivr.net/npm/web-streams-polyfill@2.0.2/dist/ponyfill.min.js"></script>

then window.WebStreamsPolyfill becomes available, and StreamSaver can detect it and use it automatically

https://github.com/jimmywarting/StreamSaver.js/blob/a6ec1df37593c29a4b172ebc85d4038aae812c9c/StreamSaver.js#L15

hushiyun1994

comment created time in 15 days

issue closed jimmywarting/StreamSaver.js

[Poll] do you use http or https?

As the title says:

Do you use StreamSaver.js on secure (https) or insecure (http) in production websites?

  • click 🎉 for https (secure)
  • click 👀 for http (insecure)

I'm just curious if folks are using StreamSaver's popup hack that is required on insecure pages. I could include Google Analytics on GitHub Pages, but I'm not going to do that as I value everyone's privacy

<p align="center"><b>▼Vote with reaction▼</b><br><sup>(you can choose both)</sup></p>

closed time in 15 days

jimmywarting

issue comment jimmywarting/StreamSaver.js

Questions about compatible firefox browser

Thanks for giving it stars. Take the one you prefer.

regarding the backpressure - there is room for improvement of it in StreamSaver - here is a more recent discussion about it: #145

hushiyun1994

comment created time in 15 days

issue comment jimmywarting/StreamSaver.js

Questions about compatible firefox browser

Would also be cool if you tried https://github.com/jimmywarting/native-file-system-adapter instead. with it you can write more types of chunks (including Blobs, Files, ArrayBuffers, ArrayBufferViews and strings)

hushiyun1994

comment created time in 15 days

issue closed jimmywarting/StreamSaver.js

Questions about compatible firefox browser

I am writing a web version to transfer files. After the sender reads the file by streaming, the data is sent to the receiver using WebRTC, and the receiver downloads the file by streaming. TransformStream and ReadableStream.pipeTo() are not supported in Firefox. What should I do? Thank you very much.

const {readable, writable} = new TransformStream({
  transform: (chunk, controller) => {
    controller.enqueue(new Uint8Array(chunk))
  }
});
const writer = writable.getWriter();
readable.pipeTo(streamSaver.createWriteStream(fileName));
// Write received chunk
writer.write(chunk);

closed time in 15 days

hushiyun1994

issue comment jimmywarting/StreamSaver.js

Questions about compatible firefox browser

Load a web stream polyfill version and do something like this:

<script src="https://cdn.jsdelivr.net/npm/web-streams-polyfill@2.0.2/dist/ponyfill.min.js"></script>
<script>
const ponyfill = window.WebStreamsPolyfill || {}
const TransformStream = window.TransformStream || ponyfill.TransformStream
</script>

And also

hushiyun1994

comment created time in 15 days

issue comment mozilla/standards-positions

Picture-in-Picture

Just discovered PiP was available in FF (Chrome user here) - recently read How we built Picture-in-Picture in Firefox Desktop (great article btw)

  • From one perspective I like that you have made PiP available for all the videos out there, but as a developer I would like to style/control it myself.
  • I think a better approach would have been to develop something like the Chrome PiP extension that detects videos and lives outside of the DOM, interacting with the OS instead - like the Touch Bar, for example.

It's sad to see 3 different implementations that all behave differently. I understand it's good to be competitive and design new things to come up with something better,

but I hope that everyone can later agree on one unified spec'ed version - (whichever it may be)


I would not mind either if you kept it the way it is right now. But if I detect that it looks bad on my website, I would like to be able to disable it and reimplement it myself - hence why I think an API would be useful.

One thing I like about Chrome + Safari's implementation is that I can detect when it enters/leaves PiP mode, so I can style the wrapper element and have it look the same. For instance, I set the video poster as a background (and still have the video element on top with opacity: 0, so they can still control it with the context menu.)

(Partial support for the Picture-in-Picture Web API doesn't sound that bad either)

The Google dev blog had a good use case also: Show canvas element in Picture-in-Picture window

const video = document.createElement('video');
video.muted = true;
video.srcObject = canvas.captureStream();
video.play();

// Later on, video.requestPictureInPicture();

what if it's an offscreen video element and the only visible thing is the canvas element? then there is no way to enter PiP mode.

beaufortfrancois

comment created time in 16 days

started gbentaieb/pip-polyfill

started time in 16 days

issue comment node-fetch/node-fetch

Response does not support Blob as a body

  1. Your example fails in the browser also - 'content' isn't valid JSON.
  2. Your Blob implementation had better have the new reading methods:
class Blob {
  async arrayBuffer () {
    return arrayBuffer
  }
  async text () {
    return text
  }
  stream () {
    return stream.Readable.from(x)
  }
}
uzer-ua

comment created time in 17 days

issue comment MattiasBuelens/web-streams-polyfill

Example how to use with node-fetch?

btw, there is a todo in node-fetch to remove the Buffer from the body and replace it with a stream, so that this piece becomes unnecessary.

  // If you also want to support a Buffer body:
  if (body instanceof Buffer) {
    return new ReadableStream({
      start(controller) {
        controller.enqueue(body);
        controller.close();
      }
    })
  }
pkieltyka

comment created time in 17 days

issue comment whatwg/streams

ReadableStream.from(X)

I saw that node has already added Readable.from

maybe it would be great if they could be similar (isomorphic)?

ricea

comment created time in 17 days

issue closed node-fetch/node-fetch

ReadableStreams, consistent with fetch api standard

Hi there, thanks for making node-fetch, it's awesome and super useful!

One question though.. the purpose of node-fetch is to "bring fetch to nodejs", which is wonderful, but then shouldn't the goal be to always have api parity with the Web-standards in browsers?

for this reason, I can't understand why in node-fetch v3.x's upcoming version, the response body api for readable streams isn't consistent with the Web versions. Specifically, response.body.getReader() should function the same as in a Web browser. By doing so, this would allow http streaming apis to work with node-fetch as they do in Web browsers with normal fetch apis.

for some info on how to convert between formats, perhaps this would help as some example code between casting which would be very nice to have incorporated directly into node-fetch https://github.com/gwicke/node-web-streams

closed time in 17 days

pkieltyka

issue comment node-fetch/node-fetch

ReadableStreams, consistent with fetch api standard

v3 has become better at unifying the .body to always be the same than it was before.

My recommendation for reading both whatwg/node streams is with async iterators

// for web - simplified, imperfect polyfill that solves most of your problem
if (!ReadableStream.prototype[Symbol.asyncIterator]) {
  ReadableStream.prototype[Symbol.asyncIterator] = async function* () {
    const reader = this.getReader()
    while (1) {
      const chunk = await reader.read()
      if (chunk.done) return chunk.value
      yield chunk.value
    }
  }
}

// later
for await (let chunk of response.body) {
  console.log(chunk) // Uint8Array
}

closing this since there is already existing discussion around whatwg vs node streams

#647 #387 nodejs/node#22352

pkieltyka

comment created time in 17 days

issue comment node-fetch/node-fetch

"Bring your own stream"

bitinn

comment created time in 17 days

issue comment Stuk/jszip

RangeError - when adding just over 2,400 files

Have kinda left jszip for my own streaming implementation now. That fiddle is also very old.

  • https://github.com/jimmywarting/StreamSaver.js
  • https://jimmywarting.github.io/StreamSaver.js/examples/saving-multiple-files.html

better streaming support, less ram usage. Made for downloading/saving

Also helped out building https://github.com/transcend-io/conflux/ which has support for both reading and creating zip files using whatwg streams

hashitha

comment created time in 19 days

issue comment node-fetch/node-fetch

fetch doesn't allow for AbortSignal custom impl with constructor name other then exactly AbortSignal

Bundlers may minify classes and mangle names, but if you add a Symbol.toStringTag to your class that's equal to 'AbortSignal' then you may be fine with minifying it.

Also, I don't think spec'ed polyfilled APIs should mangle the class names to something short.
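A sketch of what that could look like (assuming the brand check goes through `Object.prototype.toString`; the one-letter class name stands in for a minified one):

```javascript
// Sketch: a custom signal whose brand survives name mangling, because
// the check can look at Symbol.toStringTag instead of the class name.
class S { // imagine a minifier shortened the name to "S"
  get [Symbol.toStringTag] () {
    return 'AbortSignal'
  }
}

Object.prototype.toString.call(new S()) // '[object AbortSignal]'
```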

Veetaha

comment created time in 21 days

issue comment pillarjs/multiparty

Promise grammar support.

Would like to have an async iterator... kinda the best of both worlds: awaitable + event based (can yield one entry at a time)

const form = new multiparty.Form()

for await (let entry of form.parse(req)) {
  console.log(entry.headers) // [ [key, value], ... ]
  console.log(entry.name) // field name
  console.log(entry.fileName) // filename

  // entry.value is an async iterator also
  for await (let chunk of entry.value) {
    // do something with chunk
  }

  // or
  stream.Readable.from(entry.value).pipe(fs.createWriteStream(entry.fileName))
}

but it seems farfetched

Vallista

comment created time in 22 days

issue comment nodejs/node

minor request on logging classes

honestly I wouldn't rely on it that much. I just don't see classes as functions, so I don't think they should be named as such to begin with.

classes are extensible program-code-template for creating objects, providing initial values for state (member variables) and implementations of behavior (member functions or methods).

i.e: not a function that you can call and expect an output. it's something you instantiate.

it's like @BridgeAR said: it's no guarantee, just an indicator to help you distinguish things out from the mass, helping you figure out what things are a little bit earlier.

--

@BridgeAR, do you mind sharing your local implementation that solves this?

jimmywarting

comment created time in 23 days

issue comment nodejs/node

minor request on logging classes

It's a start...

typeof v === 'function'
  && v.prototype // undefined if arrow fn
  && v.constructor?.name !== 'GeneratorFunction'
  && /^class\s/.test(Function.prototype.toString.call(v))
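Wrapped as a helper, the heuristic could look like this (a sketch; it is purely source-text based, so transpiled classes will slip through):

```javascript
// Sketch: distinguish class constructors from plain functions by
// inspecting their source text. Transpiled classes will slip through.
function looksLikeClass (v) {
  return typeof v === 'function' &&
    /^class[\s{]/.test(Function.prototype.toString.call(v))
}

looksLikeClass(class Foo {})    // true
looksLikeClass(function () {})  // false
looksLikeClass(() => {})        // false
```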

maybe should be some native builtin way to handle it.

on a side note, I noticed that Chrome's devtools also prints f* name(){} for generator functions - that * is also useful

jimmywarting

comment created time in 23 days

issue comment denoland/deno

and new blob reading methods

I would rather have these methods than the FileReader

jimmywarting

comment created time in 23 days

issue opened sendgrid/sendgrid-csharp

get message id /w domain

I would like to use message://<message-id> to help users quickly open up our verification email for easier onboarding.

When I send a mail with the API I get a response header including x-message-id: xxxxx, but the link doesn't work unless I have the origin domain in there also.

I tried adding @ismtpd0004p1lon1.sendgrid.net at the end as well, but it keeps changing subdomain all the time. Would it be possible to also get the slave-domain of the origin it was sent from (or is going to be sent from)?

created time in 24 days

pull request commentwebtorrent/webtorrent

use native Set instead of uniq library

Another way of doing it, instead of Array.from

[...new Set()]

heck, why not even use a Set instead of an array? would it be best if it could stay unique all the time? guess it's more problematic to just change something from an Array to a Set

(Just Code Golf - either way LGTM)
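Just to illustrate with a made-up array, both spellings dedupe the same way:

```javascript
// both forms produce a new deduped array from a Set
const peers = ['a', 'b', 'a', 'c', 'b'] // hypothetical data
const uniq1 = Array.from(new Set(peers))
const uniq2 = [...new Set(peers)]

console.log(uniq1) // ['a', 'b', 'c']
console.log(uniq2) // ['a', 'b', 'c']
```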

KayleePop

comment created time in 25 days

issue openednodejs/node

minor request on logging classes

When logging objects, functions and classes you sometimes see functions and classes being printed as such:

Skärmavbild 2020-03-14 kl  21 28 43

I wish they were a bit more distinguishable, so you know whether you have to call the "function" with the new keyword and expect something back - kinda like chrome's devtools:

Skärmavbild 2020-03-14 kl  21 32 30

with the prefixed class and f in the beginning

maybe with another color (same as the number color - orange) or like my atom editor: Skärmavbild 2020-03-14 kl  21 38 41

it's not so much about styling (the above are just some suggestions), the requested feature is more about distinguishing classes from functions in the terminal

thanks in advance

created time in 25 days

issue commentnode-fetch/node-fetch

v3 Roadmap

Tried beta-4 now. at least i can load fetch now but it felt a bit weird that i had to do:

import nodeFetch from 'node-fetch'
const fetch = nodeFetch.default
xxczaki

comment created time in 25 days

issue closedsendgrid/sendgrid-nodejs

Sorry to say this.

...but i uninstalled sendgrid. Reasons:

  1. Many deprecated stuff (due to some backwards compatibility)
  2. Multiple code styles (promise + callback = larger) (just use async/await)
  3. No good error responses
  4. No good data validation either that can throw on wrong data being sent
  5. inconsistent naming conventions
  6. took one hour to try and get a template to arrive in my email but still failed (seems like bad/outdated documentation)

Simple http api calls outweigh all these dependencies: https://npm.anvaka.com/#/view/2d/%2540sendgrid%252Fmail

And it took just 5-10 minutes to figure out how to work with the api instead.

closed time in 25 days

jimmywarting

issue openedsendgrid/sendgrid-nodejs

Sorry to say this.

...but i uninstalled sendgrid. Reasons:

  1. Many deprecated stuff (due to some backwards compatibility)
  2. Multiple code styles (promise + callback = larger) (just use async/await)
  3. No good error responses
  4. No good data validation either that can throw on wrong data being sent
  5. inconsistent naming conventions
  6. took one hour to try and get a template to arrive in my email but still failed (seems like bad/outdated documentation)

Simple http api calls outweigh all these dependencies: https://npm.anvaka.com/#/view/2d/%2540sendgrid%252Fmail

And it took just 5-10 minutes to figure out how to work with the api instead.

created time in 25 days

issue commentsendgrid/sendgrid-nodejs

warning > request@2.88.2 deprecated

I vote for node-fetch

ArtashMardoyan

comment created time in 25 days

issue commentnode-fetch/node-fetch

v3 Roadmap

  • require isn't available any longer when i have specified

package.json

{
  "type": "module",
}

so i have a really hard time loading node-fetch right now

xxczaki

comment created time in a month

issue commentnode-fetch/node-fetch

v3 Roadmap

just tried using node-fetch@beta.3 in a fresh project using node v13 with ESM modules

many modules have been working fine except for node-fetch.

import fetch from 'node-fetch'
(node:18717) ExperimentalWarning: The ESM module loader is experimental.
internal/modules/esm/resolve.js:61
  let url = moduleWrapResolve(specifier, parentURL);
            ^

Error: Cannot find main entry point for ~/project/node_modules/node-fetch/ imported from ~/project/app.js

import fetch from 'node-fetch/src/index.js'
(node:18709) ExperimentalWarning: The ESM module loader is experimental.
~/project/node_modules/node-fetch/src/index.js:9
import http from 'http';
^^^^^^

SyntaxError: Cannot use import statement outside a module

I would rather have preferred to load the real deal from src and not some transpiled assets 😞

xxczaki

comment created time in a month

issue commentjimmywarting/StreamSaver.js

Browser crashes (OOM?) when using a custom `ReadableStream`

Would you like to help take a stab at improving StreamSaver with a PR to maybe help solve this back pressure issue (including sending back a cancel event #13)

And potentially include an option for making transferable an option

if (option.transfer) {
   channel.port1.postMessage(chunk, [ chunk ])
}
adamkewley

comment created time in a month

issue commentnode-fetch/node-fetch

"Bring your own stream"

FYI, i saw that node implemented Readable.from in v12.3

const { Readable } = require('stream');

async function * generate() {
  yield 'hello';
  yield 'streams';
}

const readable = Readable.from(generate());

readable.on('data', (chunk) => {
  console.log(chunk);
});

kinda what i expected to land eventually :)

maybe this generator/iterator pattern combined with async iterators will hopefully become the new norm for writing cross node/web platform features

bitinn

comment created time in a month

issue commentjimmywarting/StreamSaver.js

Browser crashes (OOM?) when using a custom `ReadableStream`

Non-goals

  • Avoiding cloning the chunks is not a goal at this stage; see "future work".
  • As such, Transfer-only objects (such as ImageBitmap) will not be supported yet; only serializable objects and the built-in types supported by the structured serialization algorithm.

as expected - didn't read it all.

adamkewley

comment created time in a month

issue commentjimmywarting/StreamSaver.js

Browser crashes (OOM?) when using a custom `ReadableStream`

Now that i think of it... it may be possible to use a MessageChannel rather than transferable streams

i expect transferable streams clone the data but i haven't measured it

adamkewley

comment created time in a month

issue commentjimmywarting/StreamSaver.js

Browser crashes (OOM?) when using a custom `ReadableStream`

correct, you can't implement some custom transferable on custom classes.

the readable stream the Response expects is a byte stream of Uint8Array chunks and nothing else, so all chunks need to be written as such.

but it is also possible to transfer ArrayBuffer views as well.

var chunk = new Uint8Array(20)
console.log(chunk.length) // 20
mc.port1.postMessage(chunk, [chunk.buffer])
console.log(chunk.length) // 0

so if you would like to go fancy and have some zip protocol or something, you can have multiple views that use the same shared buffer

const buffer = new ArrayBuffer(20)
const payload = {
  header: new Uint8Array(buffer, 0, 5),  // bytes 0-4
  body: new Uint8Array(buffer, 5, 10),   // bytes 5-14
  footer: new Uint8Array(buffer, 15)     // bytes 15-19
}
postMessage(payload, [buffer])
adamkewley

comment created time in a month

issue closedjimmywarting/StreamSaver.js

Windows Defender RAM usage when using StreamSaver

Our project uses streamSaver to download huge tiff files (sometimes 9gb). Some users have been reporting that, while the download is ongoing, Windows Defender starts to build up its RAM usage, sometimes reaching 8gb or more. When the download is finished, the RAM is released. Other users have no problem whatsoever. Probably this has to do with some antivirus policy but it is weird that within the same company structure different results are happening. Has anyone found something like this?

Our implementation is almost identical to your fetch example.

I also have a question that has nothing to do with the topic above. Is it possible to detect if the download fails for some reason (a failure on the backend during the download)? Our filesystem is not the most reliable, especially with these large files, and sometimes the download fails. We would like to show some message if this occurs.

closed time in a month

pcezar-i9

issue commentjimmywarting/StreamSaver.js

Browser crashes (OOM?) when using a custom `ReadableStream`

ps. another performance trick would be to also make all the chunks that you are sending transferable so the browser doesn't have to copy over the data.

https://github.com/jimmywarting/StreamSaver.js/blob/a6ec1df37593c29a4b172ebc85d4038aae812c9c/StreamSaver.js#L263-L266

channel.port1.postMessage(chunk, [ chunk ])

But doing so can have side effects in the main thread if you plan to reuse the chunk.

adamkewley

comment created time in a month

issue commentjimmywarting/StreamSaver.js

Browser crashes (OOM?) when using a custom `ReadableStream`

StreamSaver isn't a silver bullet when it comes to pulling/pushing data to the service worker over a MessageChannel. it's sort of half implemented.

MessageChannel is used when transferable streams aren't available, and it flushes the queue quite instantaneously.

The writable/readable strategies were not implemented before i learned that transferable streams became available.

It's possible to emulate this behaviour by using postMessage(). The service worker would have to listen for pulls and request more data from the main thread, instead of data flowing in from the main thread. postmessage-transferable-stream-wrapper and remote-web-streams are two existing approaches that do something like this.


Browsers are not a silver bullet when it comes to pulling data either. They are atomic, meaning they start pulling data and writing to a temporary file immediately, even before you have selected a place/name to save to. When the stream closes, the temporary file is moved/renamed to where you want your file to end up.

Still, you are able to pause the stream so that it stops pulling data - this is possible in canary or with some experimental flags enabled, but that is b/c it uses transferable streams instead of a MessageChannel.

In most scenarios you don't have all the data buffered up in the browser at one single point in time, and it hasn't been an issue for anyone yet. often you wait on some data to arrive, so maybe one "dirty" solution would be just to delay the writes

    const sleep = ms => new Promise(resolve => setTimeout(resolve, ms))

    async pull (controller) {
        await sleep(100)
        controller.enqueue(data)
    }
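The pull-emulation idea mentioned above could be sketched roughly like this (names and protocol are hypothetical, not what StreamSaver actually does):

```javascript
// hypothetical pull-based protocol over a MessageChannel:
// the consumer asks for the next chunk, the producer answers
const { port1, port2 } = new MessageChannel()

// producer side (would be the main thread)
const chunks = ['a', 'b', 'c']
port1.onmessage = () => {
  port1.postMessage(chunks.length ? chunks.shift() : null) // null = done
}

// consumer side (would live in the service worker)
function pullNext () {
  return new Promise(resolve => {
    port2.onmessage = evt => resolve(evt.data)
    port2.postMessage('pull')
  })
}
```

With this shape the data only moves when the consumer asks for it, which is exactly the back pressure MessageChannel alone doesn't give you.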
adamkewley

comment created time in a month

issue commentModernizr/Modernizr

file input webkitdirectory on android

i figured it might be possible to detect support by altering the filelist somehow when webkitdirectory is added

function FileListItem(a) {
  a = [].slice.call(Array.isArray(a) ? a : arguments)
  for (var c, b = c = a.length, d = !0; b-- && d;) d = a[b] instanceof File
  if (!d) throw new TypeError("expected argument to FileList is File or array of File objects")
  for (b = (new ClipboardEvent("")).clipboardData || new DataTransfer; c--;) b.items.add(a[c])
  return b.files
}

var files = [
  new File(['content'], 'sample1.txt'),
  new File(['abc'], 'sample2.txt')
];
input.webkitdirectory = true
input.files = new FileListItem(files)
input.files.length === 2 // detection fail
jimmywarting

comment created time in a month

issue openedModernizr/Modernizr

file input webkitdirectory on android

detecting directory support is flaky on mobile.

var input = document.createElement('input')
input.type = 'file'
alert('webkitdirectory' in input)

my chrome browser on android yields true - but selecting a folder isn't possible. however, it lets me pick files instead.

I would have been fine with graceful degradation or progressive enhancement. But upon selecting files, input.files.length is still 0, so the change event won't even fire.

anyone have any suggestions?

created time in a month

issue commentmholt/PapaParse

💡PapaParse 6.0

if you would like to write cross node/browser code maybe you should consider abandoning streams and using async iterators instead

dboskovic

comment created time in a month

issue commentmholt/PapaParse

💡PapaParse 6.0

es7 + jsdoc and/or d.ts

wish to be able to use native import in the browser without having to bundle or transpile the scripts. typescript is not javascript and does not work in the browser.

dboskovic

comment created time in a month

startedRalim/TC66C

started time in a month

issue commentjimmywarting/StreamSaver.js

MITM Not Receiving MessageChannel on writer.close()

Hmm, not sure what it is. can't reproduce.

  • localhost is considered secure, so you shouldn't see any popup.
  • safari doesn't use the service worker at all cuz it can't download content generated on the client side; it switches to just downloading the url behind it, avoiding the whole respondWith. instead it generates a blob and uses a[download] #69
  • the MITM shouldn't receive any such event - it should just accept a postMessage event, pass it along to the service worker and be done with it. the rest of the communication happens through the MessageChannel.

if the download is doing just fine then maybe you don't have anything to worry about. it could be that something else is posting messages to any iframe that exists in the DOM tree. you could try putting a breakpoint at the message event and look at what the message data looks like.

henkejosh

comment created time in a month

issue commentWICG/native-file-system

Getting file metadata

I have been thinking about it also. A long time ago i made a tool that would calculate/visualize what takes up the most space in a directory with a pie/sun chart. that meant i had to loop over each file/directory and get all the file sizes.

Now i don't know what a File does under the hood, if it's just a reference to the actual file on the disk, but i would not want to create a clone/snapshot of the file just to get the file's size

jespertheend

comment created time in a month

issue commentWICG/native-file-system

Ability to get a ReadStream from a FileSystemFileHandle

feels like a duplex read/write stream is in order? Have been thinking about it also. just like the seek method on the writable stream, i wish it existed on some readable stream as well.

for stuff where you have to read parts of the file, again and again

srgraham

comment created time in a month

PR opened dropbox/dropbox-sdk-js

removed dependencies

Buffer was just used for one thing: to base64 encode usr/psw

this removes the whole buffer module
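For reference, a sketch of how the base64 encoding could be done without the buffer polyfill (this is an assumption about the change, not the actual diff):

```javascript
// base64-encode "user:password" without pulling in the buffer polyfill:
// btoa in browsers, the native Buffer in node
function basicAuth (user, pass) {
  const str = user + ':' + pass
  return typeof btoa === 'function'
    ? btoa(str)
    : Buffer.from(str).toString('base64')
}

console.log(basicAuth('user', 'pass')) // dXNlcjpwYXNz
```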

closes #176

+14 -26

0 comment

3 changed files

pr created time in a month

create barnchjimmywarting/dropbox-sdk-js

branch : remove-dependencies

created branch time in a month

push eventjimmywarting/dropbox-sdk-js

Jimmy Wärting

commit sha 372bc0e2286bef691b8243fe76cb62141d3aefdc

removed dependencies

view details

push time in a month

pull request commentnode-fetch/node-fetch

refactor: Replace `url.parse` with `new URL()`

@jimmywarting I can remove the refactoring commits and create a separate PR for that. However, since this is probably the last PR before the 3.x release, I wanted to include them in it.

you don't have to but just remember that till next time

xxczaki

comment created time in a month

issue openeddropbox/dropbox-sdk-js

import with full path

if you are going to allow es6 import/export then you should really require extensions in your paths. node handles it just fine but browsers do not.

i can't simply do this in the browser without having to bundle it first

<script type="module">
import dropbox from 'https://cdn.jsdelivr.net/npm/dropbox@4.0.30/es/index.es6.js'
</script>

it tries to load https://cdn.jsdelivr.net/npm/dropbox@4.0.30/es/team/dropbox-team when it should be loading https://cdn.jsdelivr.net/npm/dropbox@4.0.30/es/team/dropbox-team.js

Ryan Dahl regrets the whole package.json and module resolver. this is not how browsers do things.

https://youtu.be/M3BM9TB-8yA?t=584

created time in a month

issue commentdropbox/dropbox-sdk-js

Builds do not include proper license copies of dependencies

Can we remove the buffer dependency?

pcworld

comment created time in a month

pull request commentnode-fetch/node-fetch

refactor: Replace `url.parse` with `new URL()`

Feels like this PR is getting a little out of context, being rather large and doing unrelated changes beyond just replacing parse with URL... more to review.

xxczaki

comment created time in a month

issue openedlittledan/serializable-objects

async support maybe?

One historical problem with the old indexedDB implementation was that you couldn't store blobs. So folks had to first test whether or not it could store them; otherwise they had to convert to base64 urls back and forth.

import { Serializable, register } from "@std/structured-clone";

class Person extends Serializable {
  async serialize() {
  }
  static async deserialize(payload) {
  }
}
register(Person);
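To illustrate why async matters, here's a self-contained sketch (Person here is a plain hypothetical class; @std/structured-clone is only a proposal):

```javascript
// hypothetical: an async serialize() lets you await conversions
// (e.g. blob.arrayBuffer()) instead of round-tripping through base64
class Person {
  constructor (name) { this.name = name }
  async serialize () {
    return { name: this.name } // could await async work here
  }
  static async deserialize (payload) {
    return new Person(payload.name)
  }
}

;(async () => {
  const copy = await Person.deserialize(await new Person('Ada').serialize())
  console.log(copy.name) // Ada
})()
```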

created time in a month

push eventjimmywarting/native-file-system-adapter

Jimmy Wärting

commit sha 5bf8756c050fc63158a32d25ff0c8d574697ae64

Update README.md

view details

push time in a month

push eventjimmywarting/native-file-system-adapter

Jimmy Wärting

commit sha 232d4f73a2b6658bb828ca522ee710d37d5b2360

Update README.md

view details

push time in a month

pull request commentnode-fetch/node-fetch

refactor: Replace `url.parse` with `new URL()`

I also made an attempt to get in the new URL a long time ago, but i also stumbled upon some failing tests then.

It would be a nice addition to get in the new URL for sure. Even if it means that we get some new failing tests and possibly some breaking changes, like throwing an error when constructing a Request with a relative url. But that's maybe something we can fix after this.

xxczaki

comment created time in a month

push eventjimmywarting/native-file-system-adapter

Jimmy Wärting

commit sha c9adbe98ad6f77681a503cd13c0f81b005536f27

Update test.html

view details

push time in a month

push eventjimmywarting/native-file-system-adapter

Jimmy Wärting

commit sha e0f44a0a83a033556d2e92642ca411735b5e6da7

Add files via upload

view details

push time in a month

push eventjimmywarting/StreamSaver.js

Jimmy Wärting

commit sha a6ec1df37593c29a4b172ebc85d4038aae812c9c

Update README.md

view details

push time in a month

push eventjimmywarting/StreamSaver.js

Jimmy Wärting

commit sha e84292218500e81751163405f5387b57cf7c2d39

Update media-stream.html closes #143

view details

push time in a month

issue closedjimmywarting/StreamSaver.js

Media stream example not working

Great stuff!

Just noticing that the media stream example wants there to be a button with the id "$close" which would prompt the stream to get written. Seems like the live example won't work without it.

This is the file: https://jimmywarting.github.io/StreamSaver.js/examples/media-stream.html

closed time in a month

steveellis

push eventjimmywarting/StreamSaver.js

Jimmy Wärting

commit sha e762bf3cd98110cfd8be9357872adf0588b241b4

Update media-stream.html

view details

push time in a month

issue commentjimmywarting/StreamSaver.js

Media stream example not working

https://github.com/jimmywarting/StreamSaver.js/blob/2f6c53eb63adbed503651341be2987dd1cab53b1/examples/media-stream.html#L39-L43

yea... about that timeout... i had it before also. it should work to close the stream immediately; i think all the chunks that are left should be piped... reading this whole stream should still yield abc... but i have no clue what's going on there.

steveellis

comment created time in a month

issue openedWICG/native-file-system

asyncIterator for getEntries

have experimented with the API a bit, and could not help but feel like FileSystemDirectoryHandle is missing a Symbol.asyncIterator that maps to getEntries

for await (const entry of dirHandle) { ... }
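A userland shim of what that could look like (DirHandle is a made-up stand-in for FileSystemDirectoryHandle):

```javascript
// made-up stand-in showing Symbol.asyncIterator delegating to getEntries()
class DirHandle {
  constructor (entries) { this._entries = entries }
  async * getEntries () { yield * this._entries }
  [Symbol.asyncIterator] () { return this.getEntries() }
}

;(async () => {
  const dirHandle = new DirHandle(['a.txt', 'b.txt'])
  for await (const entry of dirHandle) {
    console.log(entry) // a.txt, then b.txt
  }
})()
```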

created time in a month

issue commentjimmywarting/StreamSaver.js

Media stream example not working

hi @steveellis, Maybe try

- writable.close()
+ writer.close()
steveellis

comment created time in a month

more