
felixge/cakephp-authsome 125

Auth for people who hate the Auth component

felixge/debuggable-scraps 79

MIT licensed code without warranty ; )

felixge/couchdb-benchmarks 10

some benchmark scripts for testing CouchDB performance

felixge/can 5

Nothing to see here yet.

felixge/commander.js 5

node.js command-line interfaces made easy

felixge/ack 3

A replacement for grep for programmers

felixge/berlinjs.org 3

The official BerlinJS website

felixge/depmake 3

A collection of bash functions and conventions for creating applications that can bundle all their code and dependencies inside a tar file.

felixge/cakephp 2

CakePHP: The Rapid Development Framework for PHP - Official Repository

started danistefanovic/build-your-own-x

started time in 4 hours

issue comment nodejs/node

Repeated socket data events can block timers, etc

@bnoordhuis ok, fair enough. I understand you didn't intend the 32 event count to become a scheduler, but for better or worse, that's its emergent behavior.

I also understand that the node core is probably more concerned with e.g. http req/s throughput than with tail latencies, since workarounds are available.

That being said, tail latencies are important for microservice architectures where a single user request may trigger many internal requests. Even when those are parallelized, a single latency outlier will slow down everything else.

Anyway, I think microservices are a mistake in most cases, and I'm also not using node or node-mysql these days, so I'm unlikely to send a patch. However, I appreciate your responses and hopefully somebody else will feel inspired to tackle this in the node core at some point. If it's feasible, it'd be nice to see node's I/O scheduling become more tail-latency friendly under high concurrency.

logidelic

comment created time in a day

issue comment mysqljs/mysql

Driver blocking node event loop

@logidelic any sort of attempt to implement user-land scheduling will be a tradeoff between latency and throughput.

That being said, I suspect your proposal will have a higher overhead than mine (as every callback gets deferred, even if the event loop hasn't been blocked much), and it also seems to allow unbounded memory usage, which could cause significant issues for people dealing with very large query results.

dgottlie

comment created time in 2 days

started erthalion/postgres-bcc

started time in 2 days

issue comment nodejs/node

Repeated socket data events can block timers, etc

From the linked issue I infer that mysqljs consumes data in "fire hose" mode and doesn't apply back-pressure.

Yes, that's currently the case, and I also agree that node-mysql could use stream.pause(), setImmediate, and stream.resume() as shown here to mitigate the problem.
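For illustration, a minimal sketch of that user-land workaround (not actual node-mysql code; parse() is a hypothetical stand-in for the driver's packet parser):

socket.on('data', (chunk) => {
  parse(chunk);           // potentially CPU-heavy work inside the 'data' callback
  socket.pause();         // stop further 'data' events for now
  setImmediate(() => {    // give timers and other I/O a chance to run first
    socket.resume();
  });
});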

I just think it's a kludge to have to coerce the I/O scheduling from user-land like this. IMO it'd be much better if the count circuit breaker in libuv could be controlled on a per-socket level, e.g. stream.setEventLimit(). This shouldn't regress latency for existing code, and is probably far more efficient than the workaround outlined above.

Anyway, I don't have a horse in this race, so I'm not expecting any changes. I just found this to be an interesting issue, and hope my suggestions might be useful.

logidelic

comment created time in 2 days

issue comment mysqljs/mysql

Driver blocking node event loop

@logidelic I replied to the node thread. I think node has a scheduling problem and the suggested workaround seems unreasonable for many scenarios, including node-mysql. Let's see how the discussion continues.

dgottlie

comment created time in 2 days

issue comment nodejs/node

Repeated socket data events can block timers, etc

@bnoordhuis the rules for async programming make sense. But I still don't understand how that justifies the current scheduling behavior. Why 32 events?

If my callback computes something for 1ms, do you call that “blocking”? Where is the threshold? It's hardly worth moving 1ms of computation into a worker from an overhead perspective. What are you suggesting in that case? Accepting up to 32ms of delay for a single socket? And it might be 320ms for 10 sockets, right?

logidelic

comment created time in 2 days

pull request comment mixn/carbon-now-cli

language-map: add support for plain text (.txt)

@josinalvo thanks for your suggestion, but isn't that exactly what this PR is doing? : )

felixge

comment created time in 4 days

push event felixge/dump

Felix Geisendörfer

commit sha 3a38f35a88046830701da821b3f72ab4f360ecc8

add react-fetch-hook

view details

push time in 5 days

create branch felixge/dump

branch : master

created branch time in 6 days

created repository felixge/dump

created time in 6 days

pull request comment felixge/go-xxd

fix: failed "undefined: xxd" on go test

thanks @aphroteus

aphroteus

comment created time in 6 days

push event felixge/go-xxd

Paul Huang

commit sha 74a1b2e4fb1877bc8a9c84a137c4cf2489ddb51c

fix: failed "undefined: xxd" on go test

The xxd function name is lower case in xxd.go, but the XXD caller is upper case in xxd_test.go; it's case sensitive.

view details

Felix Geisendörfer

commit sha 0492f878d1e3bf12cf97248aee6edc313cd0420c

Merge pull request #7 from aphroteus/master

fix: failed "undefined: xxd" on go test

view details

push time in 6 days

PR merged felixge/go-xxd

fix: failed "undefined: xxd" on go test

The xxd function name is lower case in xxd.go, but the XXD caller is upper case in xxd_test.go; it's case sensitive.

+2 -2

0 comment

1 changed file

aphroteus

pr closed time in 6 days

started minio/simdjson-go

started time in 7 days

issue comment golang/go

cmd/go: error: pointer is missing a nullability type specifier when building on catalina

I just upgraded to Catalina and also had issues with clang throwing warnings.

Turns out this was caused by having installed clang via homebrew. brew remove llvm fixed the issue.

So perhaps https://go-review.googlesource.com/c/go/+/205457/ is only needed to support users who have installed their own version of clang?

Tochemey

comment created time in 8 days

started go-llvm/llgo

started time in 8 days

started jremmen/vim-ripgrep

started time in 8 days

issue comment mysqljs/mysql

Driver blocking node event loop

Nice! It would be very interesting to know if this impacts the wall clock of large query results, but fixing scheduling lock-ups is more important than raw throughput for sure. Hopefully it doesn't impact the total query parsing time against the wall clock much.

Yeah, that could be a concern.

One could combat this by having each connection instance set up a setInterval of 1ms which updates a this._lastTimer property. If this._lastTimer gets too far behind (e.g. 10ms), then the connection can call socket.pause() and setImmediate().
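A rough sketch of that idea, using a local variable in place of the hypothetical this._lastTimer connection property (not actual node-mysql code):

function guardEventLoop(socket, maxLagMs = 10) {
  let lastTick = Date.now();
  // a 1ms interval only gets to run when the event loop isn't blocked
  const timer = setInterval(() => { lastTick = Date.now(); }, 1);
  socket.on('data', () => {
    if (Date.now() - lastTick > maxLagMs) { // the event loop fell behind
      socket.pause();                       // stop 'data' events ...
      setImmediate(() => socket.resume());  // ... until timers have had a turn
    }
  });
  socket.on('close', () => clearInterval(timer));
}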

That being said, one should certainly question the wisdom of implementing an I/O scheduler inside of a database client library. It'd be much better to seek solutions for this in the node core ...

dgottlie

comment created time in 10 days

issue comment mysqljs/mysql

Driver blocking node event loop

@dougwilson are you sure?

I just modified my example to use socket.pause() and setImmediate and was able to reduce the timer starvation from ~1800ms at a time to ~80ms:

https://gist.github.com/felixge/d16ee6b128af7256862bf83fe2f34d8d#file-blocked-by-net-js

vs

https://gist.github.com/felixge/d16ee6b128af7256862bf83fe2f34d8d#file-blocked-by-net-improved-js

The flamegraph obviously shows that the callback is eating a lot of CPU time, but it doesn't show scheduling issues. By using setImmediate we're able to coerce the scheduler away from its default behavior of executing queued I/O events up to the internal callback limit, and instead schedule evenly between I/O and timer events.

See https://nodejs.org/en/docs/guides/event-loop-timers-and-nexttick/

dgottlie

comment created time in 10 days

issue comment mysqljs/mysql

Driver blocking node event loop

@logidelic yeah, they should help. I.e. you should be able to see better timer latencies in exchange for slightly lower overall throughput.

dgottlie

comment created time in 10 days

issue comment mysqljs/mysql

Driver blocking node event loop

I'm no longer involved with this project, but decided to take a quick look since it seems interesting.

As far as I can tell, my theory from 2012 is probably correct and there seems to be a scheduling issue in the node core, as demonstrated by the code here:

https://gist.github.com/felixge/d16ee6b128af7256862bf83fe2f34d8d#file-blocked-by-loop-js https://gist.github.com/felixge/d16ee6b128af7256862bf83fe2f34d8d#file-blocked-by-net-client-txt

Firing a 100 million iteration loop every 1s seems to block the event loop for ~60ms. That seems reasonable.
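As a minimal sketch (not the linked gist), this kind of blocking can be measured by watching how late a periodic timer fires:

let last = Date.now();
setInterval(() => {
  const lag = Date.now() - last - 100; // how late this 100ms timer fired
  if (lag > 10) console.log('event loop blocked for ~' + lag + 'ms');
  last = Date.now();
}, 100);

setInterval(() => {
  let sum = 0;
  for (let i = 0; i < 1e8; i++) sum += i; // tens of ms of CPU-bound work
}, 1000);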

Firing the same loop in a busy socket's 'data' callback however blocks the event loop for up to 1800ms. This seems unreasonable because the network callback gets scheduled 30x in a row while the timer has to wait.

So if your application requires fair scheduling between network and timer events, I'd suggest you raise an issue with the node.js core to get their feedback on this.

It might also be possible to implement some sort of cooperative scheduling in node-mysql by yielding back to the event loop after a certain amount of time. But if that's what it takes to get reasonable scheduling in node, I'd rather ditch it for Go 🤪.

dgottlie

comment created time in 10 days

started ColinEberhardt/cla-bot

started time in 16 days

started kingluo/pgcat

started time in 17 days

started wagoodman/dive

started time in 17 days

issue comment node-formidable/node-formidable

RFC: Convert to monorepo & meta stuff (tidelift)

True. But sponsoring/paying for Open Source is needed.

Agreed!

The Tidelift model is a good one because every party is winning (in any sense) and has the motivation to continue doing their job as best as possible.

Sure, looking forward to seeing how it works out.

tunnckoCore

comment created time in 20 days

started rxhanson/Rectangle

started time in 21 days

issue comment node-formidable/node-formidable

RFC: Convert to monorepo & meta stuff (tidelift)

@tunnckoCore thanks for clarifying. This all sounds good to me. A small note in the README as suggested by @GrosSacASac is fine. No need to notify the current committers when removing them beyond what GH might do automatically.

I initially thought you were going to join a company named Tidelift as an employee. If they're just some sort of way to crowdfund open source, then I don't have many concerns. I'm happy if there are any opportunities for those working on node-formidable to get some compensation for it.

I've just had bad personal experiences with corporations taking over open source projects in the past, including node.js itself ; ). But I trust you to make the right decisions.

Thanks again for continuing to maintain this package!

tunnckoCore

comment created time in 21 days

issue comment node-formidable/node-formidable

RFC: Convert to monorepo & meta stuff (tidelift)

My two cents:

  • Removing inactive contributors is fine as long as it's clearly communicated and anybody who feels wronged by it has a chance to regain their commit bits without too much fuss.
  • Moving to a mono repo is also fine by me.
  • I don't know Tidelift but have no issues with corporate support for this project in general. I would however ask for discussion if there are any plans to register trademarks or turn the README into an advertisement of some kind.

Thanks for all your work on this project @tunnckoCore

tunnckoCore

comment created time in 21 days

PR opened vitalk/vim-simple-todo

remove 'a' suffix from (simple-todo-new-list-item)

Expected Behavior

Triggering the normal mode flavor of <Plug>(simple-todo-new-list-item) should produce:

- [ ] 

Actual Behavior

- [ ] a

It looks like the a may have been a typo.

+1 -1

0 comment

1 changed file

pr created time in 21 days

push event felixge/vim-simple-todo

Felix Geisendörfer

commit sha cb08db74b6964d2359773e3f68127c8757e876e8

remove 'a' suffix from (simple-todo-new-list-item)

view details

push time in 21 days

started todotxt/todo.txt-cli

started time in 21 days

issue comment satya164/react-simple-code-editor

how to limit height?

@hemavidal I was able to make this work by passing a minHeight attribute to the Editor props with the same value as the height passed to the outer div, as suggested by @satya164.

jschuler

comment created time in 22 days

started satya164/react-simple-code-editor

started time in 22 days

issue comment docker-library/postgres

Debug Symbols

but several are then not installable due to version mismatches, which would be due to Debian security updates (which AFAIK, still do not publish debug symbol packages anywhere):

FWIW, having full symbols for postgres itself and perhaps libc is half the battle IMO. The other libs would be nice, but if it's super hard to get correct symbols for them, it might be reasonable to consider this out-of-scope for now.

felixge

comment created time in 24 days

issue comment docker-library/postgres

Debug Symbols

Debugging the database itself is not what most users of this image are trying to achieve. I think a separate image and thus an opt-in approach is more appropriate.

Well, arguably almost nobody wants to debug their database ... but when your production DB has an issue, you might have a come-to-jesus moment when it comes to image size vs debuggability.

You'll be under time pressure, and restarting your db container on a new debug-image while praying that it won't corrupt your data due to incompatible libc collations or similar may not seem like an attractive option ...

That being said, as somebody who already has those scars and stories to tell, I'll end up with a better setup either way. I'm just trying to save others from future PTSD inducing adventures ; )

felixge

comment created time in 24 days

issue comment docker-library/postgres

Debug Symbols

@tianon thank you so much for your research on this!

What about a dedicated debug image as suggested by @otbutz ? Would you consider including something like this in this repo?

FWIW, I'd still prefer including symbols by default, even if it blows up the image size, and providing non-symbol images as the alternative. But I don't feel too strongly about it, especially if the README tells people about the tradeoffs that are available.

felixge

comment created time in 24 days

started machyve/xhyve

started time in 24 days

started docker-library/postgres

started time in 25 days

issue opened docker-library/postgres

Debug Symbols

Problem

The postgres binaries shipped with the docker images in this repo seem to be stripped:

root@0d42c3d8cd06:/# postgres --version
postgres (PostgreSQL) 12.1 (Debian 12.1-1.pgdg100+1)
root@0d42c3d8cd06:/# file $(which postgres)
/usr/lib/postgresql/12/bin/postgres: ELF 64-bit LSB shared object, x86-64, version 1
(SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux
3.2.0, BuildID[sha1]=e775eb7dcaa0306329f0624056df0fa892d48f03, stripped
-----------------------------------------------------------------^

This makes it difficult to use tools such as gdb or perf to debug problems as the resulting stack traces will often be useless.

See my tweet here for a practical example of what can be done when symbols are present.

Proposal

It'd be great if this image would start to ship with debug symbols by default, e.g. by compiling with ./configure CFLAGS="-fno-omit-frame-pointer -ggdb".

If not, it'd be great to document how to derive an image that includes symbols. My first attempts at doing so below have failed. I still get the same output for file $(which postgres) as above, and readelf -s $(which postgres) | wc -l gives me the same ~9k symbols contained in this image, rather than the ~30k symbols usually available for a non-stripped postgres binary.

FROM postgres
RUN apt-get update
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y \
  debian-goodies \
  binutils
RUN apt-get install -y $(find-dbgsym-packages $(which postgres))

I'm also happy to try to help with creating a PR to improve the debugging/symbol situation once the direction is clear.

From my PoV the only downside to including symbols is image size, so maybe they shouldn't go into the alpine flavor of this image.

created time in 25 days

started justincormack/nsenter1

started time in a month

started hanwen/termite

started time in a month

started tycho/clockperf

started time in a month

started laurenz/pgreplay

started time in a month

started iovisor/bcc

started time in a month

push event felixge/tweets

Felix Geisendörfer

commit sha 30e7a9b12dc80a9376544904938a5ab405b6a642

delete bak dir

view details

Felix Geisendörfer

commit sha 779fda408e4224aa3f6bad9c1526037eb43bf1d6

Link twitter thread from repo

view details

push time in a month

push event felixge/tweets

Felix Geisendörfer

commit sha d72edc583f1ea85c43f25d784f223cf8944cae89

small edits

view details

push time in a month

push event felixge/tweets

Felix Geisendörfer

commit sha 6faa10120b23ebee8f666d246234ed6033e690e9

Add tweet about hiring

view details

push time in a month

create branch felixge/tweets

branch : master

created branch time in a month

created repository felixge/tweets

created time in a month

PR opened mixn/carbon-now-cli

language-map: add support for plain text (.txt)

I was just trying to render some plain text files in carbon-now and the auto language detection decided on some weird coloring that doesn't make sense to me. Hopefully 'text' is a better default for .txt files.

+1 -0

0 comment

1 changed file

pr created time in a month

push event felixge/carbon-now-cli

Felix Geisendörfer

commit sha 18b4172606595633e3af162bb3fe2cc7c32d1695

language-map: add support for plain text (.txt)

I was just trying to render some plain text files in carbon-now and the auto language detection decided on some weird coloring that doesn't make sense to me. Hopefully 'text' is a better default for .txt files.

view details

push time in a month

fork felixge/carbon-now-cli

🎨 Beautiful images of your code — from right inside your terminal.

fork in a month

started arkanis/syscall-benchmark

started time in a month

issue comment uuidjs/uuid

Support for comb v4 UUIDs

@ctavan my comment was actually a warning against throwing the baby out with the bathwater ; ). Additionally, IIRC, PostgreSQL doesn't care how your UUID looks or whether certain version-related bits are set. Any 128 bit value is accepted.

So I'm not sure a new id format is needed. There is also a lot of prior art in this space that should be considered.

kael-shipman

comment created time in a month

issue comment uuidjs/uuid

Support for comb v4 UUIDs

This in turn leads to wanting to store it in binary format in memory and on disk, but then needing to show it in a human readable format for users, urls and debugging, but then if you do this in a database you'd have to implement this display logic there, etc. - it sort of becomes a mess.

Depends on the database. PostgreSQL turns text UUIDs into binary and back transparently to the point that application developers don't have to worry about it much.

In fact, using anything other than UUID will force you to reimplement this text<->binary conversion yourself.

(I have no horse in this race, just wanted to point this out)

kael-shipman

comment created time in a month

issue comment uuidjs/uuid

Support for comb v4 UUIDs

I'd like to give some more insight into what's going on with PostgreSQL when indexing UUID values.

Like most relational databases, PostgreSQL defaults to using a B-Tree data structure for indexes. Below is an example B-Tree from the linked article showing integer values being indexed.

[B-Tree diagram from the linked article]

Each B-Tree node (the grey boxes in the graphic above) in PostgreSQL is an 8 KB page that can contain a variable number of keys. If you're indexing UUIDs, this means you can theoretically cram up to 512 uuids into one node/page (8192/16). In reality it's quite a bit less, because each entry in the B-Tree also contains pointers to rows (aka tuples) in the table (aka heap) and to other B-Tree pages. For simplicity, let's assume 100 UUIDs fit into a page.

Now let's imagine an indexed uuid table with 1M rows. Assuming our leaf nodes are 50% full (i.e. containing 50 UUIDs each), this means we'll have 20K leaf nodes.

Let's now imagine we want to insert 20K new v4 UUIDs. Given the random distribution of those values, on average we'd expect each UUID to be inserted into a different leaf node, so we'd end up with 51 UUIDs on each node. Eventually PostgreSQL has to write this data to disk (via checkpointing, but that's another topic), and this is done by writing out each changed page in full. In our case that's 20K * 8KB = 156.25 MB.
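The same arithmetic as a quick back-of-the-envelope calculation (using the assumed numbers from above):

const pageSize = 8 * 1024;                // 8 KB B-Tree page
const leafPages = 1e6 / 50;               // 1M rows, 50 UUIDs per 50%-full leaf = 20,000 leaves
const inserts = 20000;                    // random v4 UUIDs, roughly one per leaf page
const dirtyPages = Math.min(inserts, leafPages);
console.log((dirtyPages * pageSize) / (1024 * 1024) + ' MB'); // 156.25 MB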

You might protest that it's really stupid to write the full 8KB of each page when only 1% of it (~82 bytes) has changed. So yeah, in theory PostgreSQL could make much smaller writes, but in reality that would be even worse because SSD disks have a minimum write size of 4KB, and trying to issue an 82 byte write would actually force the SSD to first perform a 4KB read, then update the 82 bytes you're trying to write, and then issue a 4KB write to perform the update, which would likely double the latency of the overall operation.

So as you can see, using random UUIDs is pretty much the worst case scenario for any B-Tree implementation and might cause over 100x write amplification.

If our indexed values had been sequential instead, we might simply have allocated 200 new pages, as our values wouldn't have been randomly distributed across the nodes.

So yeah, while modern DBs might optimize a few things (e.g. storing UUIDs in binary rather than as text), random UUIDs will always present a worst-case scenario for B-Trees.

And sure, some databases (e.g. LevelDB, Cassandra, RocksDB) use LSM-Trees rather than B-Trees, but they're basically just giving you higher write throughput at the expense of read throughput, so they're unlikely to replace B-Trees in the near future. But I don't have much LSM experience, and I'd have to do some more research on how they'd perform with the scenario outlined above.

kael-shipman

comment created time in a month

started houqp/sqlvet

started time in a month

pull request comment mysqljs/mysql

Update Amazon Root 2019 RDS certificate

@dougwilson sounds good, I'll let you handle it : ).

That being said, I'm kinda disappointed. I was hoping for some nice conspiracy. This could have been the beginning of the uprising against Jeff Bezos! TBH I'd probably support that ; ).

Anyway, thanks for all the work on this project! The node community is lucky to have you.

codebykenny

comment created time in a month

pull request comment mysqljs/mysql

Update Amazon Root 2019 RDS certificate

@dougwilson hey, I just saw a few people complain about this on node-mysql@googlegroups.com. These days I'm generally happy and grateful to let you run this project as you see fit, but I'm wondering what's going on here.

I understand that users can deal with this issue in their applications, but is there anything wrong with just merging this PR? And what's up with all the deleted comments and restricting the issue to repo contributors?

Anyway, you don't owe anybody any timely action on this, but I just wanted to check if everything is okay : ). If hitting merge & cutting a release is all that's needed to make people happy, I'd also be willing to help with it.

codebykenny

comment created time in a month

started porsager/postgres

started time in 2 months

started segmentio/encoding

started time in 2 months

pull request comment felixge/go-xxd

Improve hex encoding performance

Cool, thanks for the patch : )

wamuir

comment created time in 2 months

push event felixge/go-xxd

wamuir

commit sha 0366d67ca3cd52e8514c6c7772e5301830611698

go faster: improve hex encoding performance

view details

Felix Geisendörfer

commit sha 17253d2a557adc6fa815a1d7a20d351935dd8ec8

Merge pull request #6 from wamuir/master

Improve hex encoding performance

view details

push time in 2 months

PR merged felixge/go-xxd

Improve hex encoding performance

Nice project. PR improves hex encoding performance by approximately 8.5%, impacting the -i and -p flags. Currently, the functions cfmtEncode and hexEncode spin up an iterator for each byte, which is not needed. While these functions are fed a byte slice, there can only be one byte in the slice... length can be no more (to iterate on) and can be no less (no panic). This PR removes the iterators.

+6 -8

0 comment

1 changed file

wamuir

pr closed time in 2 months

started TypeStrong/typedoc

started time in 2 months

issue comment felixge/nodeguide.com

nodeguide.com not up anymore

I don't think I'll have time to get the site up and running myself soon, but if somebody was to put it up somewhere, I'd be happy to point DNS to it : )

btoptas

comment created time in 2 months

started mdaines/viz.js

started time in 2 months

PR opened iovisor/bcc

fix -h output for --sort option

This fixes a regression from c2b371d56b8bbaf9c7d01b88830193ecd1ee4e12 where the author forgot to update the help output.

+1 -1

0 comment

1 changed file

pr created time in 3 months

push event felixge/bcc

Felix Geisendörfer

commit sha 68057a93ffb9969a8329dee2e1907fc5783380a8

fix -h output for --sort option

This fixes a regression from c2b371d56b8bbaf9c7d01b88830193ecd1ee4e12 where the author forgot to update the help output.

view details

push time in 3 months

fork felixge/bcc

BCC - Tools for BPF-based Linux IO analysis, networking, monitoring, and more

fork in 3 months

started dotless-de/vagrant-vbguest

started time in 3 months

issue comment PostgresApp/PostgresApp

PostgreSQL and dtrace.

I agree, it'd be great to have Postgres.app ship with --enable-dtrace.

dijit

comment created time in 3 months

started sastraxi/pgsh

started time in 3 months

started tj/go-naturaldate

started time in 3 months

started sourcegraph/lsif-jsonnet

started time in 3 months

started mitchellh/reflectwalk

started time in 3 months

issue comment microsoft/TypeScript

Variable isn't narrowed within a capturing closure

@RyanCavanaugh the makeAdder example is exactly the situation in which I ran into this problem. I also had a function with an optional argument that was given a default value in the function body.

I ended up working around the issue by simply assigning the default value in the function declaration:

function makeAdder(n: number = 0) {
  return (m: number) => n + m;
}
felixge

comment created time in 3 months

issue comment microsoft/TypeScript

Closure assumes wrong type

assuming the function is meant to be callable

Yes, it's meant to be callable. I just omitted that from my example for the sake of brevity.

felixge

comment created time in 3 months

issue comment microsoft/TypeScript

Closure assumes wrong type

@j-oliveras which section of the linked issue are you referring to? I'm okay with closing this as a dupe if it's covered by the other issue, but I couldn't find my case at first glance.

Anyway, I understand your point. I was hoping the compiler would realize that i doesn't get modified after the assignment of 0, as @fatcerberus suggests. But if that's not in the cards, I can easily deal with it.

felixge

comment created time in 3 months

issue opened microsoft/TypeScript

Closure assumes wrong type

TypeScript 3.7.2 Playground link

Compiler Options:

{
  "compilerOptions": {
    "noImplicitAny": true,
    "strictNullChecks": true,
    "strictFunctionTypes": true,
    "strictPropertyInitialization": true,
    "strictBindCallApply": true,
    "noImplicitThis": true,
    "noImplicitReturns": true,
    "useDefineForClassFields": false,
    "alwaysStrict": true,
    "allowUnreachableCode": false,
    "allowUnusedLabels": false,
    "downlevelIteration": false,
    "noEmitHelpers": false,
    "noLib": false,
    "noStrictGenericChecks": false,
    "noUnusedLocals": false,
    "noUnusedParameters": false,
    "esModuleInterop": true,
    "preserveConstEnums": false,
    "removeComments": false,
    "skipLibCheck": false,
    "checkJs": false,
    "allowJs": false,
    "experimentalDecorators": false,
    "emitDecoratorMetadata": false,
    "target": "ES2017",
    "module": "ESNext"
  }
}

Input:

let i: number | undefined;
i = 0;
let j:number = i+1; // works
(k: number) => k === i+1; // error: Object i is possibly undefined

Output:

"use strict";
let i;
i = 0;
let j = i + 1; // works
(k) => k === i + 1; // error: Object i is possibly undefined

Expected behavior:

The compiler should not complain about the last i+1 because i clearly has type number after 0 is assigned to it.

I suspect the closure uses the type of i from the let statement and ignores it being narrowed down later on.

created time in 3 months

started linkedin/goavro

started time in 3 months

started golang/gddo

started time in 3 months

started soheilhy/cmux

started time in 3 months

started google/nixery

started time in 3 months

pull request comment spiermar/d3-flame-graph

fix: hidden root node

@spiermar thx, I'll try out the new version later and report back if I'm still seeing issues.

felixge

comment created time in 4 months

started davidkpiano/xstate

started time in 4 months

started dense-analysis/ale

started time in 4 months

issue closed felixge/debuggable-scraps

Is this dead?

Is this project not maintained anymore?

Is it still compatible with CakePHP 3.x?

closed time in 4 months

str

issue comment felixge/debuggable-scraps

Is this dead?

This is not maintained anymore; I doubt any code still runs with current versions of CakePHP.

str

comment created time in 4 months

issue comment felixge/httpsnoop

Related approach just FYI

https://github.com/prometheus/client_golang/blob/master/prometheus/promhttp/delegator.go

The prometheus implementation didn't exist when I wrote this library. I wondered whether they were aware of my implementation and decided against using it, but it seems like this wasn't the case.

https://github.com/golang/go/issues/18997#issuecomment-314736760

I agree with this thread. Having to do this kind of hacking, like my lib does, is terrible; it'd be nice if there were better solutions!

mikelnrd

comment created time in 4 months

pull request comment felixge/node-ar-drone

Moved FTRIM into Calibration function and included it in the readme.md.

Sorry for the delay in merging this : )

Skyguy92

comment created time in 4 months

pull request comment felixge/node-ar-drone

Moved FTRIM into Calibration function and included it in the readme.md.

@Skyguy92 ok, makes sense.

Skyguy92

comment created time in 4 months

push event felixge/node-ar-drone

Skyguy92

commit sha 957845ceada89009bd08e778f2f629e39d68d6cb

Moved FTRIM into Calibration function and included it in the readme.md.

view details

Skyguy92

commit sha b1af120f941233916034befad12b7cd36625c09c

fix build

view details

Felix Geisendörfer

commit sha 11667d4640a55111a6ad0206336685e69c7323fb

Merge pull request #164 from Skyguy92/calibrate

Moved FTRIM into Calibration function and included it in the readme.md.

view details

push time in 4 months

PR opened spiermar/d3-flame-graph

fix: hidden root node

Thanks for this library : ), PTAL at this bug fix:

This fixes a regression from 894b2c33f5fef6636317216a8fc20d0a069203bb that caused the root node of all graphs to be hidden by being rendered outside of the visible SVG area.

Before this Patch

[screenshot: root node hidden]

After this Patch

[screenshot: root node visible]

+2 -2

0 comment

1 changed file

pr created time in 4 months

push event felixge/d3-flame-graph

Felix Geisendörfer

commit sha c56f3e4e0c7a1607174305a78c08b469ee86b2fa

fix: hidden root node

This fixes a regression from 894b2c33f5fef6636317216a8fc20d0a069203bb that caused the root node of all graphs to be hidden by being rendered outside of the visible SVG area.

view details

push time in 4 months

fork felixge/d3-flame-graph

A D3.js plugin that produces flame graphs from hierarchical data.

fork in 4 months

started kokes/pg_flame.js

started time in 4 months
