Ondrej Kokes (kokes) - Prague, Czech Republic - Economist / data wrangler / table maker / chart producer

kokes/nbviewer.js 221

Client side rendering of Jupyter notebooks

HlidacStatu/Volicsky-Prukaz 17

A web application that prepares a request for a voter card (currently for the 2019 European Parliament election). The voter can download the generated request as a PDF and sign it.

kokes/knod 8

A catalogue of (not only) open data

kokes/cedr 4

Processing of subsidy data

kokes/cedr-n3 2

Data extraction from the central register of subsidies (CEDR)

kokes/jManipulate 2

A JavaScript tool to edit CSS in a WYSIWYG fashion.

kokes/demagog-kviz 1

A quiz based on fact-checked statements from Demagog.cz

PublicEvent

issue opened kokes/kb

Yet another talk on HTTP/3

https://youtu.be/OOyo3gxKe8k

I've seen a few by the curl author. A good overview.

created time in 4 days

issue comment golang/go

proposal: cmd/go: support embedding static assets (files) in binaries

I read through the proposal and scanned the code, but couldn't find an answer to this: will this embedding scheme contain information about the file on disk (~os.Stat)? Or will these timestamps get reset to build time? Either way, these are useful pieces of information that get used in various places, e.g. we can send a 304 for unchanged assets based on them.

Thanks!

bradfitz

comment created time in 12 days

issue opened kokes/od

[volby] an example computation of close Senate races

with hlasy as (
	SELECT
		datum,
		obvod,
		jmeno || ' ' || prijmeni AS jmeno,
		hlasy_k2,
		rank() over(partition by datum, obvod order by hlasy_k2 desc) as poradi
	FROM
		volby.senat_kandidati
	WHERE hlasy_k2 > 0
), naskoky as (
	select
	*, hlasy_k2 - lag(hlasy_k2) over(partition by datum, obvod order by poradi desc) as naskok
	from hlasy
)

select * from naskoky where naskok is not null order by naskok asc

created time in 21 days

push event kokes/blog

Ondrej Kokes

commit sha f37fe9e0de2301be8481998d04e91a44c42e10be

go data loss: time

view details

push time in 22 days

push event kokes/blog

Ondrej Kokes

commit sha b6ff27097ae2bdc20caa67048187a04b9d6bd0c8

go data loss: time

view details

push time in 22 days

push event kokes/blog

Ondrej Kokes

commit sha ee12ae03d72bc940069d084b7ef7199137b451ca

go data loss: time

view details

push time in 22 days

push event kokes/blog

Ondrej Kokes

commit sha 672fcc6c0f706812b0c34869572fde2cc5ca161b

go data loss: time

view details

push time in 22 days

push event kokes/blog

Ondrej Kokes

commit sha 5a9354753953fa67309996271ffc89fd5bc887de

go data loss: time

view details

push time in 22 days

issue comment kokes/od

[volby] an example of one party's results within a region

Instead of an example, this should really be a view.

kokes

comment created time in 22 days

issue comment kokes/od

[volby] an example of one party's results within a region

And if we wanted it with the ranking as well:

with vsechny_strany as (
	SELECT
		datum, okres, obec, max(nazevobce) as nazevobce,
		kstrana, max(nazevcelk) as nazevcelk,
		sum(poc_hlasu) as poc_hlasu,
		sum(vol_seznam) as vol_seznam,
		sum(vyd_obalky) as vyd_obalky,
		sum(odevz_obal) as odevz_obal,
		sum(pl_hl_celk) as pl_hl_celk,
		(100*sum(poc_hlasu)::numeric / sum(pl_hl_celk))::numeric(5,2) as pct_hlasu
	FROM
		volby.kraje_okrsky_hlasy
		inner join volby.kraje_okrsky_prehled using(datum, okres, obec, okrsek)
		inner join volby.kraje_strany_cr using(datum, kstrana)
		inner join volby.kraje_obce using(datum, okres, obec)
	where 1=1
	and date_trunc('year', datum) = '2016-01-01'
	and kraj = 2100
	-- 	and obec = 529303
	group by datum, okres, obec, kstrana
	limit 100000
), vc_poradi as (
	select
	*, rank() over(partition by datum, okres, obec order by pct_hlasu desc) as poradi
	from vsechny_strany
)

select * from vc_poradi
where nazevcelk = 'ANO 2011'
kokes

comment created time in 22 days

push event kokes/od

Ondrej Kokes

commit sha 005ab596edcb42d73aa51fdbf482755561db9d80

[volby] municipality code lists for regional elections (#102)

view details

push time in 22 days

issue opened kokes/od

[volby] an example of one party's results within a region

SELECT
	datum, okres, obec,
	sum(poc_hlasu) as poc_hlasu,
	sum(vol_seznam) as vol_seznam,
	sum(vyd_obalky) as vyd_obalky,
	sum(odevz_obal) as odevz_obal,
	sum(pl_hl_celk) as pl_hl_celk,
	(100*sum(poc_hlasu)::numeric / sum(pl_hl_celk))::numeric(5,2) as pct_hlasu
FROM
	volby.kraje_okrsky_hlasy
	inner join volby.kraje_okrsky_prehled using(datum, okres, obec, okrsek)
	inner join volby.kraje_strany_cr using(datum, kstrana)
where nazevcelk = 'ANO 2011'
and date_trunc('year', datum) = '2016-01-01'
and okres::text like '2%' -- Central Bohemian Region
-- 	and obec = 529303
group by datum, okres, obec
limit 10000

created time in 22 days

issue opened kokes/od

[volby] missing code lists for municipalities

They are very useful for getting oriented in municipalities, much more so than their six-digit codes.

We have them for municipal elections; for the regional ones it will be easy to add (COCO.xml).

created time in 22 days

push event kokes/blog

Ondrej Kokes

commit sha a19ff1f36056709298a9a2221c76f8ef86bbc6d4

tiny maps in go

view details

push time in 23 days

push event kokes/blog

Ondrej Kokes

commit sha 7f4f2634c0282372ab11b38160d86782d21163dc

tiny maps in go

view details

push time in 23 days

push event kokes/blog

Ondrej Kokes

commit sha 0178e2e079636a416bf8f6bdf8bae899e7f9b1dd

intel nuc

view details

push time in a month

issue closed kokes/nbviewer.js

error:Join is not a function

Hi kokes, when I use nbviewer I get this error: join is not a function

code line: 304

function handle_mdown(cell) {
        console.log(cell)
        console.log(cell.source)
        var el = d.createElement('div');
        var source
        if (Array.isArray(cell.source)) {
            source = cell.source.join('');
        }else{
            source = cell.source
        }
        
        var latexed = source.replace(/\$\$([\s\S]+?)\$\$/g, latexer); // block-based math
        latexed = latexed.replace(/\$(.+?)\$/g, latexer); // inline math
        el.innerHTML = marked(latexed);

        return el;
    }

this is my ipynb file: heijing.ipynb

I think the problem is caused by the source node.

I changed the handle_mdown function:

function handle_mdown(cell) {
        var el = d.createElement('div');
        var source
        if (Array.isArray(cell.source)) { // was: cell.source === Array, which compares against the constructor and never matches
            source = cell.source.join('');
        }else{
            source = cell.source
        }

        var latexed = source.replace(/\$\$([\s\S]+?)\$\$/g, latexer); // block-based math
        latexed = latexed.replace(/\$(.+?)\$/g, latexer); // inline math
        el.innerHTML = marked(latexed);

        return el;
    }

My JS is very bad; I don't know whether this modification introduces any other errors.

Finally, thank you for your efforts. Your project solved my problem.

closed time in a month

songlin51

issue comment kokes/nbviewer.js

error:Join is not a function

@songlin51 thanks for the report, I have fixed it in https://github.com/kokes/nbviewer.js/commit/ad90deda489393a64456754861c2f6d5f1e8e1b0, this should be live in our hosted viewer already.

Just out of curiosity - what versions of your tools (Python, Jupyter, operating system) did you use to produce this notebook? I'm used to seeing markdown entries as arrays, not strings, that's why it failed. Thanks!

songlin51

comment created time in a month

issue opened kokes/nbviewer.js

Handle all string/array enums correctly

There are quite a few fields that may be either plain strings or arrays of strings - we handle them in various ways, either via typeof checks or Array.isArray. We should have a helper that consumes a value of type string|string[] and outputs a string. That should solve pretty much all of these cases.

Last time this surfaced was in #44.

created time in a month

push event kokes/nbviewer.js

Ondrej Kokes

commit sha ad90deda489393a64456754861c2f6d5f1e8e1b0

handling non-array markdown source

view details

push time in a month

push event kokes/vaclavaky

Ondrej Kokes

commit sha 16f78b6eeea95c5ae69cfc828419b5f7f7332886

fifty-crown coins

view details

push time in a month

started mbloch/mapshaper

started time in a month

issue opened kokes/od

[justice] missing companies

Last time something was missing there compared to ARES - I think IČO 515.

created time in a month

push event kokes/vaclavaky

Ondrej Kokes

commit sha d4eb0d4b01ca6c432cce0f5c3e083e084e07a895

new data format. This enables a few things: 1) A richer property system - we'll be able to add more properties to objects without having to keep each property's offset in mind. 2) An easier move to a JSON format for conversions - all that's left is migrating the comments into some new key. 3) Better readability, though the size is now larger.

view details

push time in a month

started jordanlewis/gcassert

started time in a month

push event kokes/od

Ondrej Kokes

commit sha e3f6c08cd0451fc89549df3d19b0d5876c88b22a

[szif] data for 2019

view details

Ondrej Kokes

commit sha 97a5e4f4713b3fc9f8a7495357ae1971c3f743a7

[volby] Senate elections from June 2020

view details

push time in a month

issue closed kokes/vaclavaky

https on www

HTTPS doesn't work on www.vaclavaky.cz; without the www everything is fine.

I'm tinkering with DNS a bit (waiting for the CNAME record), so for now I'm keeping this here as an open problem.

https://stackoverflow.com/questions/9082499/custom-domain-for-github-project-pages/9123911#9123911

closed time in 2 months

kokes

issue comment kokes/vaclavaky

https on www

It works; all it took was:

  • a CAA record on vaclavaky.cz (already had that)
  • A records only on v.cz (already had that)
  • a CNAME for www.vaclavaky.cz -> kokes.github.io (just like that, without a path)
  • switching GitHub Pages to www.v.cz (it was v.cz)

At this point all variants work (with/without www, with/without https); the new default is with www, but that's fine - people don't see it anyway.

kokes

comment created time in 2 months

push event kokes/vaclavaky

Ondrej Kokes

commit sha ecc3c881d93585ca08a14f00b8a0acf51820e937

Update CNAME

view details

push time in 2 months

issue comment kokes/od

[volby] the 2013 PSP code lists are in there twice

Part of this issue could be setting up constraints on all of these tables, so we don't have to worry about row multiplication.

kokes

comment created time in 2 months

issue opened kokes/vaclavaky

https on www

HTTPS doesn't work on www.vaclavaky.cz; without the www everything is fine.

I'm tinkering with DNS a bit (waiting for the CNAME record), so for now I'm keeping this here as an open problem.

https://stackoverflow.com/questions/9082499/custom-domain-for-github-project-pages/9123911#9123911

created time in 2 months

push event kokes/vaclavaky

Ondrej Kokes

commit sha 03b672dd8d02e6ebb8c3cddebc7d239119a07999

Update CNAME

view details

push time in 2 months

push event kokes/vaclavaky

Ondrej Kokes

commit sha 3fb89e5dc361cc27df0964f1a20364b3a8938f9d

Update CNAME

view details

push time in 2 months

issue opened kokes/od

[volby] an example of a cross-election query

Results of the elections to the Chamber of Deputies (PSP) broken down by Senate districts. We still need a join to the vote totals in each municipality to get percentages.

with obce as (
	SELECT
		distinct obec
	FROM
		volby.senat_okrsky
	WHERE
		obvod = 25
		AND datum = '2010-10-15'::date
)

select datum, okres, obec, sum(poc_hlasu) from volby.psp_okrsky_hlasy
inner join volby.psp_strany using(datum, kstrana)
where obec in (select * from obce) and datum = '2010-05-29' and nazevcelk = 'Věci veřejné'
group by 1, 2, 3
limit 10000;

created time in 2 months

issue opened kokes/od

[volby] the 2013 PSP code lists are in there twice

PSKRL.dbf is in two files (reg and the code lists, I think), so we end up with duplicates in psp_strany - we could add a constraint, but then copy.sh stops working.

A constraint is definitely needed so that we avoid duplicates when joining, but I don't know yet how to do that cleanly.

created time in 2 months

push event kokes/vaclavaky

Ondrej Kokes

commit sha e72f8eb001feb903589ea847c80bc6e842413b32

thousand-crown note

view details

push time in 2 months

push event kokes/vaclavaky

Ondrej Kokes

commit sha 35854f57e25887fade3af7a058e8bd57db7bd0f2

thousand-crown note

view details

push time in 2 months

push event kokes/blog

Ondrej Kokes

commit sha e2dcbe5db30cd59ffafedfa4fb3c7e292331f5cc

spark s3 post

view details

push time in 2 months

push event kokes/nbviewer.js

Ondrej Kokes

commit sha b5de6c24eaceac50315dbc5a365c588fc248bbb9

better contrast; contact info The instructions weren't as visible as I'd like, so I tweaked the colours (and margins) a bit. I also added a bit about the author and ways to contact me.

view details

push time in 2 months

push event kokes/vaclavaky

Ondrej Kokes

commit sha 3f028e4631f2ed909c5144c28b998fa75708499e

pricier cheeseburger

view details

push time in 2 months

push event kokes/vaclavaky

Ondrej Kokes

commit sha 30819ed539b3319536fab8ae430e3ed86ff3b6fa

pricier cheeseburger

view details

push time in 2 months

started FiloSottile/mkcert

started time in 2 months

push event kokes/blog

Ondrej Kokes

commit sha e56877daec174bc5b6e58cc306f9041de238e64f

needless schema inference

view details

push time in 2 months

push event kokes/blog

Ondrej Kokes

commit sha f56d934d89f01c523c93aa65ada20cfc2faa9a58

high perf talk

view details

push time in 2 months

push event kokes/blog

Ondrej Kokes

commit sha 7e8eca3691aaa7c667940babd7fb43df4da0b71d

high perf talk

view details

push time in 2 months

push event kokes/blog

Ondrej Kokes

commit sha 971b5789c4574f6594c98ca6448523984ab30bf3

high perf talk

view details

push time in 2 months

push event kokes/blog

Ondrej Kokes

commit sha fdcf2499d307a2386a63666d166bd929c7287efc

high perf talk

view details

push time in 2 months

push event kokes/blog

Ondrej Kokes

commit sha 165a967d3c20ced22d5bd097c6881836791b87c9

high perf talk

view details

push time in 2 months

push event kokes/blog

Ondrej Kokes

commit sha d9e7a27a146842aa70235deff5a42ecd69b897a2

high perf talk

view details

push time in 2 months

push event kokes/blog

Ondrej Kokes

commit sha fe742fe324bfef08e14b8c284eb1b8a7f0718153

high perf talk

view details

push time in 2 months

push event kokes/blog

Ondrej Kokes

commit sha a80740c6fe0083d7f44d5337886a235e61fc0f3e

high perf talk

view details

push time in 2 months

issue comment golang/go

encoding/csv: skipping of empty rows leads to loss of data in single-column datasets

That's too bad, because two of the existing knobs (LazyQuotes and Comment) actually go against the "standard", while this suggested knob tries to bring Go closer to standard compliance (while keeping backward compatibility).

Anyway, where can I register interest in changing this in Go 2? No knobs, just changing the default - blank lines are perfectly fine and should be interpreted as data.

kokes

comment created time in 2 months

pull request comment golang/go

encoding/csv: allow for interpretation of empty lines

@googlebot I signed it!

kokes

comment created time in 2 months

PR opened golang/go

encoding/csv: allow for interpretation of empty lines

Empty lines are ignored by default; this change introduces a switch that changes that behaviour - an empty line is then interpreted as a single empty field. This is especially important when dealing with single-column CSVs, where an empty line is actually data (an empty field).

Fixes #39119.

+35 -10

0 comment

2 changed files

pr created time in 2 months

push event kokes/go

Ondrej Kokes

commit sha 2767091f21f4e93c09247e5ae6336739a92a3ee7

encoding/csv: allow for interpretation of empty lines Empty lines are ignored by default, this change introduces a switch that changes this behaviour - empty line is then interpreted as a single empty field.

view details

push time in 2 months

fork kokes/go

The Go programming language

https://golang.org

fork in 2 months

issue comment golang/go

encoding/csv: skipping of empty rows leads to loss of data in single-column datasets

Here's a patch, including tests. If this is something that would be acceptable, I can go through the usual gerrit mechanism. Also, if this patch goes through, I'd suggest for Go 2 to have the default reverted - interpret blank lines by default and either add a SkipBlankLines switch or remove this functionality altogether.

From 2767091f21f4e93c09247e5ae6336739a92a3ee7 Mon Sep 17 00:00:00 2001
From: Ondrej Kokes <ondrej.kokes@gmail.com>
Date: Fri, 12 Jun 2020 15:10:33 +0200
Subject: [PATCH] encoding/csv: allow for interpretation of empty lines

Empty lines are ignored by default, this change introduces a switch that
changes this behaviour - empty line is then interpreted as a single empty
field.
---
 src/encoding/csv/reader.go      | 10 +++++++---
 src/encoding/csv/reader_test.go | 35 ++++++++++++++++++++++++++-------
 2 files changed, 35 insertions(+), 10 deletions(-)

diff --git a/src/encoding/csv/reader.go b/src/encoding/csv/reader.go
index c40aa506b0..f8e08ad6da 100644
--- a/src/encoding/csv/reader.go
+++ b/src/encoding/csv/reader.go
@@ -16,8 +16,8 @@
 //
 // Carriage returns before newline characters are silently removed.
 //
-// Blank lines are ignored. A line with only whitespace characters (excluding
-// the ending newline character) is not considered a blank line.
+// Blank lines are ignored by default. A line with only whitespace characters
+// (excluding the ending newline character) is not considered a blank line.
 //
 // Fields which start and stop with the quote character " are called
 // quoted-fields. The beginning and ending quote are not part of the
@@ -142,6 +142,10 @@ type Reader struct {
 	// By default, each call to Read returns newly allocated memory owned by the caller.
 	ReuseRecord bool
 
+	// If InterpretBlankLines is true, blank lines are interpreted as a single empty field,
+	// the default is to skip these lines.
+	InterpretBlankLines bool
+
 	TrailingComma bool // Deprecated: No longer used.
 
 	r *bufio.Reader
@@ -268,7 +272,7 @@ func (r *Reader) readRecord(dst []string) ([]string, error) {
 			line = nil
 			continue // Skip comment lines
 		}
-		if errRead == nil && len(line) == lengthNL(line) {
+		if !r.InterpretBlankLines && errRead == nil && len(line) == lengthNL(line) {
 			line = nil
 			continue // Skip empty lines
 		}
diff --git a/src/encoding/csv/reader_test.go b/src/encoding/csv/reader_test.go
index 5121791cb3..6d9c782ccb 100644
--- a/src/encoding/csv/reader_test.go
+++ b/src/encoding/csv/reader_test.go
@@ -20,13 +20,14 @@ func TestRead(t *testing.T) {
 		Error  error
 
 		// These fields are copied into the Reader
-		Comma              rune
-		Comment            rune
-		UseFieldsPerRecord bool // false (default) means FieldsPerRecord is -1
-		FieldsPerRecord    int
-		LazyQuotes         bool
-		TrimLeadingSpace   bool
-		ReuseRecord        bool
+		Comma               rune
+		Comment             rune
+		UseFieldsPerRecord  bool // false (default) means FieldsPerRecord is -1
+		FieldsPerRecord     int
+		LazyQuotes          bool
+		TrimLeadingSpace    bool
+		ReuseRecord         bool
+		InterpretBlankLines bool
 	}{{
 		Name:   "Simple",
 		Input:  "a,b,c\n",
@@ -78,6 +79,25 @@ field"`,
 			{"a", "b", "c"},
 			{"d", "e", "f"},
 		},
+	}, {
+		Name:  "BlankLineInterpreted",
+		Input: "a,b,c\n\nd,e,f\n\n",
+		Output: [][]string{
+			{"a", "b", "c"},
+			{""},
+			{"d", "e", "f"},
+			{""},
+		},
+		InterpretBlankLines: true,
+	}, {
+		Name:   "BlankLineSingleColumn",
+		Input:  "a\nb\n\nd",
+		Output: [][]string{{"a"}, {"b"}, {"d"}},
+	}, {
+		Name:                "BlankLineInterpretedSingleColumn",
+		Input:               "a\nb\n\nd",
+		Output:              [][]string{{"a"}, {"b"}, {""}, {"d"}},
+		InterpretBlankLines: true,
 	}, {
 		Name:  "BlankLineFieldCount",
 		Input: "a,b,c\n\nd,e,f\n\n",
@@ -400,6 +420,7 @@ x,,,
 			r.LazyQuotes = tt.LazyQuotes
 			r.TrimLeadingSpace = tt.TrimLeadingSpace
 			r.ReuseRecord = tt.ReuseRecord
+			r.InterpretBlankLines = tt.InterpretBlankLines
 
 			out, err := r.ReadAll()
 			if !reflect.DeepEqual(err, tt.Error) {
-- 
2.17.2 (Apple Git-113)
kokes

comment created time in 2 months

issue comment ClickHouse/ClickHouse

Import csv data double quotes escaped by backslash | CSV

@shivakumarss You can tell Spark to escape quotes using a second quote, instead of the non-standard backslash. That way you'll get correctly exported data, which you can then import to CH as well as other tools. I wrote about this and recommended some options for df.read and df.write. Hope it helps.

shivakumarss

comment created time in 2 months

MemberEvent

pull request comment pyvec/naucse.python.cz

Rework text on exceptions

Not sure if fits within the scope of this lesson, but I would maybe mention that exception-driven development can be oftentimes avoided and doing so can lead to more legible and understandable code.

Things like

try:
  foo = bar['bak']
except KeyError:
  ...

instead of foo.get are all too common and I've even seen bugs introduced this way (e.g. by including more things in the try block). You somewhat alluded to this in your "we can check if the string is parseable to an int, but it's best to run int directly" - but this is not always the case and we can prevent try/except blocks in many cases.
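A minimal sketch of the two styles side by side (bar is a hypothetical dict):

```python
bar = {"bak": 42}

# exception-driven: easy to accidentally catch errors from unrelated
# code once more statements creep into the try block
try:
    foo = bar["bak"]
except KeyError:
    foo = None

# lookup with a default: a single expression, nothing to catch
foo = bar.get("bak")        # 42
other = bar.get("missing")  # None instead of a KeyError
```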

encukou

comment created time in 2 months

issue comment RaRe-Technologies/smart_open

Investigate building wheels for smart_open

So, here are all the needed changes: #492 (1 line)

That's incorrect, the PR is not sufficient. I'm not terribly familiar with the project's layout, but from the get-go there are at least two things to add:

  1. Tests only cover sdist builds
  2. The PyPI upload script never uploads the wheel; it only covers the source distribution.

(I'm not advocating wheels or sdists, just pointing out it's not a one-liner.)

mpenkov

comment created time in 2 months

push event kokes/wikidata-politici

Ondrej Kokes

commit sha b40359921f21b7d047b24cf9ca7cefd091bc358c

removed the dependency on requests

view details

push time in 2 months

push event kokes/vaclavaky

Ondrej Kokes

commit sha 9ab9c78046992cfc2f9c7f8b5797347364cdc113

Czech Television

view details

push time in 2 months

started zserge/metric

started time in 2 months

push event kokes/vaclavaky

Ondrej Kokes

commit sha 2f4cab16a6e3096fee126bac1e26551c6e01a1fe

quiz: better display of non-decimal values

view details

Ondrej Kokes

commit sha 4a0cf99502b828b41f404f03cae3eff7956c5343

quiz: better mouse-free usage - autofocus + triggering via Enter

view details

push time in 2 months

issue opened ianozsvald/dtype_diet

nullable ints interpreted as floats

While pandas supports nullable ints via extension arrays, they are still not the default when reading data in. So you can easily get float64 for what is really a nullable int series, and you could save memory by converting these floats to the nullable int types.

It sort of depends on being able to accurately detect that these floats can be converted to ints without loss (well, without much loss - few floats map to ints precisely) - though you already lost data in the automatic conversion to floats in the first place, so we're effectively reverting that loss.

In [8]: data = list(range(1000)) + [None]                                                                                                              
In [9]: s = pd.Series(data)                                                                                                                            
In [10]: s2 = s.astype('Int16')                                                                                                                        

In [11]: s.memory_usage()                                                                                                                              
Out[11]: 8136

In [12]: s2.memory_usage()                                                                                                                             
Out[12]: 3131
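A sketch of the detection heuristic this could use, assuming the series already came in as float64 (the target dtype would also need to cover the value range, which I'm not checking here):

```python
import pandas as pd

s = pd.Series(list(range(1000)) + [None])  # float64, because of the None

# a float64 series downcasts losslessly if no non-null value
# has a fractional part
lossless = bool((s.dropna() % 1 == 0).all())

if lossless:
    s2 = s.astype("Int16")  # nullable extension dtype, NaN becomes pd.NA

print(s.memory_usage(), s2.memory_usage())
```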

created time in 2 months

started bxcodec/faker

started time in 2 months

started joke2k/faker

started time in 2 months

push event kokes/blog

Ondrej Kokes

commit sha ab8f747c59b8b47f7a83ba8136715738eb9815e2

fixed url

view details

push time in 3 months

push event kokes/blog

Ondrej Kokes

commit sha 7dcdbfe746e0963bf33806b5feebeede0870a5ad

fixed url

view details

push time in 3 months

push event kokes/blog

Ondrej Kokes

commit sha 8b2b5a868a29bdd09c2e2a9bfedb07158f57272b

assorted tech links 5

view details

push time in 3 months

issue opened kokes/nbviewer.js

Incorporate changes from Microsoft's fork

Microsoft forked this for their Azure offering. Go through their changelog and see if there are things we could merge in.

They kept the file structure mostly intact, so we could just diff and see it right there.

created time in 3 months

issue opened kokes/od

[psp] a view for interpellations

something along these lines:

with osoby as (
	select
		id_osoba,
		jmeno || ' ' || prijmeni as jmeno
	from psp.poslanci_osoby
)
SELECT
		pr.*, os.jmeno as tazatel, os2.jmeno as dotazovany, los.*, org.*
	FROM
		psp.interp_poradi pr
		inner join osoby os on os.id_osoba = pr.id_poslanec
		inner join osoby os2 on os2.id_osoba = pr.id_ministr
		INNER JOIN psp.interp_los_interpelaci los on los.id_los = pr.id_losovani
		inner join psp.poslanci_organy org on org.id_organ = los.id_org

created time in 3 months

pull request comment nteract/commuter

Add Dockerfile

Is there some reason for installing tini explicitly? Docker has had it for a few years now (PR 1, PR 2), so it's only a matter of --init.

groodt

comment created time in 3 months

issue opened great-expectations/great_expectations

Unreadable rendering of failed expectations

I had a multi-column expectation that failed, but the generated docs weren't readable.

To Reproduce Steps to reproduce the behavior:

  1. Create a dataset of two or more columns.
  2. Use expect_column_pair_values_A_to_be_greater_than_B in a way that it will fail
  3. Generate docs

Expected behavior In the "failed values" table, I'd expect either a table column per column of data or at least some visual separation of the pair of values involved in the comparison (comma, dash, pipe, ...)

Environment (please complete the following information):

  • OS: macOS
  • GE Version: 0.10.12

Additional context

I included a screenshot of the issue. The zero in the table cell is the value of column A and the non-zero value is the other column's value. Both values are in separate spans, but they are not styled in any way to differentiate between them. (Screenshot attached: Screen Shot 2020-05-22 at 12 41 04 PM)

created time in 3 months

push event kokes/blog

Ondrej Kokes

commit sha 758030c688cb1a343d8d54fad94daf0ae215700e

postgres column store post

view details

push time in 3 months

issue comment golang/go

encoding/csv: skipping of empty rows leads to loss of data in single-column datasets

Which suggests that this is a bug in the reader code.

However, I'm not going to be surprised if fixing this breaks lots of users due to Hyrum's law and we're forced to just document the errant behavior. Even though the package tries to follow RFC 4180, CSV files are one of those formats with many strange variants that do not follow any specific grammar.

If we're worried about breaking existing code, we could add a boolean flag, which defaults to the current behaviour and then we could discuss flipping it (and potentially removing it) for Go 2.


If I understand the implementation correctly, one cannot just initialise a Reader struct; one has to go through NewReader, because the underlying io.Reader is unexported. In that case, we can enforce our default in the constructor. (Or we could flip the boolean flag to mean e.g. ParseBlankLines, which has the desired default value.)

Something along the lines of this (haven't tested it, just sketching):

diff --git a/src/encoding/csv/reader.go b/src/encoding/csv/reader.go
index c40aa506b0..cd2b0ccfc1 100644
--- a/src/encoding/csv/reader.go
+++ b/src/encoding/csv/reader.go
@@ -16,8 +16,8 @@
 //
 // Carriage returns before newline characters are silently removed.
 //
-// Blank lines are ignored. A line with only whitespace characters (excluding
-// the ending newline character) is not considered a blank line.
+// Blank lines are ignored by default. A line with only whitespace characters
+// (excluding the ending newline character) is not considered a blank line.
 //
 // Fields which start and stop with the quote character " are called
 // quoted-fields. The beginning and ending quote are not part of the
@@ -142,6 +142,9 @@ type Reader struct {
 	// By default, each call to Read returns newly allocated memory owned by the caller.
 	ReuseRecord bool
 
+	// If SkipBlankLines is true (default), rows with no data are skipped.
+	SkipBlankLines bool
+
 	TrailingComma bool // Deprecated: No longer used.
 
 	r *bufio.Reader
@@ -169,8 +172,9 @@ type Reader struct {
 // NewReader returns a new Reader that reads from r.
 func NewReader(r io.Reader) *Reader {
 	return &Reader{
-		Comma: ',',
-		r:     bufio.NewReader(r),
+		Comma:          ',',
+		SkipBlankLines: true,
+		r:              bufio.NewReader(r),
 	}
 }
 
@@ -268,7 +272,7 @@ func (r *Reader) readRecord(dst []string) ([]string, error) {
 			line = nil
 			continue // Skip comment lines
 		}
-		if errRead == nil && len(line) == lengthNL(line) {
+		if r.SkipBlankLines && errRead == nil && len(line) == lengthNL(line) {
 			line = nil
 			continue // Skip empty lines
 		}
kokes

comment created time in 3 months

pull request comment great-expectations/great_expectations

[BUGFIX] quantile boundaries can be zero now

(rebased on the current develop branch, hence the force push)

kokes

comment created time in 3 months

push event kokes/great_expectations

Brendan Alexander

commit sha f43644182142e4e2d5083341535433bc4d5a589a

Allow config substitutions to be passed to DataContext

view details

Brendan Alexander

commit sha 0cb5a73b07e42ebc3f0376bfe36ab71d5ea51110

Unset env variable

view details

Brendan Alexander

commit sha 917495ee59264bc562c04e017808909dfc6645dd

Fix some lines too long for PEP8

view details

Brendan Alexander

commit sha 74931fed2a7606ece8626d46f19e059b4169dccb

Ridiculously short lines for PEP8

view details

Brendan Alexander

commit sha fe3f7cd595c5ed7b6187489938b106011c583b52

Send dictionary of env vars to substitute_config_variable method

view details

Brendan Alexander

commit sha 2ad68872e68968a09753e5cfdc0e868fc4cfc3fd

Break up lines so black is happy

view details

Brendan Alexander

commit sha 5788b06d8fccd1175e2266a5598bfc38d347fe7d

Remove trailing white space

view details

Brendan Alexander

commit sha 97611a8a0cb0080c581aa8f1eb0ae183683e288e

Change config dictionary param to runtime_environment; use black to reformat files

view details

Brendan Alexander

commit sha afd653bb1838b0230a54ddd5024c6ce04b985399

Add change log message and update docs

view details

WilliamWsyHK

commit sha 2551698effd09c0c6314a671849bf5ecf0cab7e3

[FEATURE] Support expect_multicolumn_values_to_be_unique on Spark (#1294) * Add multicolumn_map_expectation wrapper method * Add expect_multicolumn_values_to_be_unique * expect_multicolumn_values_to_be_unique for Spark is now implemented * Add schema for Spark to avoid breaking test cases * Update docs for implemented expect_multicolumn_values_to_be_unique * Change the way how boolean_mapped_skip_values is set * Apply `black` * Add description to change log Co-authored-by: James Campbell <james.p.campbell@gmail.com>

view details

rexboyce

commit sha 4efe6b5b88cdead80ea9f16a75d92e823f9816c7

[BUGFIX] fix extra expectations included by BasicSuiteBuilderProfiler #1422 (#1445) * fix issue where extra expectations included by BasicSuiteBuilderProfiler * ran linter * Update changelog Co-authored-by: James Campbell <james.p.campbell@gmail.com>

view details

Taylor Miller

commit sha 95dc5335eb2186dff98bb9a57eae8eda83da35ab

better checkpoint docstring (#1436) Co-authored-by: James Campbell <james.p.campbell@gmail.com>

view details

Ondrej Kokes

commit sha d1526781119b635cf60ee25eb27c82cb4f9b0cc1

quantile boundaries can be zero now When setting quantile boundaries to zero, they'd be interpreted as "any" (implemented as +-infty). This commit fixes this by evaluating zero boundaries as zeroes and only None/null ones as +-infty.

view details

Ondrej Kokes

commit sha a1b66fc97af1c4b38b025a60fdd86c99cfe663c0

minor python style fixes - native iteration over multiple collections (zip) - leveraging dictionary getter with a default fallback

view details

push time in 3 months

PR opened great-expectations/great_expectations

[BUGFIX] quantile boundaries can be zero now

When setting quantile boundaries to zero, they'd be interpreted as "any" (implemented as +-infty). This commit fixes this by evaluating zero boundaries as zeroes and only None/null ones as +-infty.

I found this bug when I used expect_column_quantile_values_to_be_between and set my quantile to be equal to zero ([0,0]) and when I checked the rendered docs, this was displayed as [Any, Any] - I wondered if this was only presentational, but it wasn't - these quantile ranges were not considered. I wrote a failing test and implemented a fix that turned it green.

There's also an optional second commit, where I fix a bit of code that's not terribly idiomatic - it's the same snippet of code where I fixed the docs issue - it's not like I'm proposing some random piece of code.
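For context, the bug class behind this is easy to demonstrate in isolation: Python's `x or default` idiom treats a legitimate zero as missing, because `0` is falsy. A minimal sketch of the pitfall — the function names here are illustrative, not the actual great_expectations code:

```python
import math

def lower_bound_buggy(min_value):
    # `or` falls through on ANY falsy value, so a boundary of 0 becomes -inf
    return min_value or -math.inf

def lower_bound_fixed(min_value):
    # only None means "no bound"; 0 is a real boundary
    return -math.inf if min_value is None else min_value

print(lower_bound_buggy(0))  # -inf: the zero boundary is silently dropped
print(lower_bound_fixed(0))  # 0
```

Checking explicitly against None, as in the fixed variant, matches the behaviour described in the PR: only None/null boundaries map to infinities.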

+26 -6

0 comment

3 changed files

pr created time in 3 months

push eventkokes/great_expectations

Ondrej Kokes

commit sha 827e79d94429a74edb4050cb224f5df4d7709e45

minor python style fixes - native iteration over multiple collections (zip) - leveraging dictionary getter with a default fallback

view details

push time in 3 months

create barnchkokes/great_expectations

branch : any_bound

created branch time in 3 months

issue openedgreat-expectations/great_expectations

helpful exceptions for empty datasets

I happened to have an empty dataset due to a pipeline failure and then ran some expectations - most of them ran just fine, but one threw an unexpected exception, which made things hard to track down.

To reproduce, write your header into a CSV file and launch init using all the defaults (and pandas).

echo "foo,bar" > foo.csv
great_expectations init

Then launch great_expectations suite edit foo.warning and try to check for unique values. Instead of telling me "cannot run stats on empty datasets" or something along those lines, the procedure calculates the number of unique values as None, thus triggering a TypeError when comparing this None with float boundaries.

batch.expect_column_proportion_of_unique_values_to_be_between('foo', .5, .9)
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-3-7aaacd9190af> in <module>
----> 1 batch.expect_column_proportion_of_unique_values_to_be_between('foo', .5, .9)

~/.pyenv/versions/3.7.7/lib/python3.7/site-packages/great_expectations/data_asset/util.py in f(*args, **kwargs)
     77         @wraps(self.mthd, assigned=("__name__", "__module__"))
     78         def f(*args, **kwargs):
---> 79             return self.mthd(obj, *args, **kwargs)
     80 
     81         f.__doc__ = doc

~/.pyenv/versions/3.7.7/lib/python3.7/site-packages/great_expectations/data_asset/data_asset.py in wrapper(self, *args, **kwargs)
    262 
    263                         else:
--> 264                             raise err
    265 
    266                 else:

~/.pyenv/versions/3.7.7/lib/python3.7/site-packages/great_expectations/data_asset/data_asset.py in wrapper(self, *args, **kwargs)
    247                 ):
    248                     try:
--> 249                         return_obj = func(self, **evaluation_args)
    250                         if isinstance(return_obj, dict):
    251                             return_obj = ExpectationValidationResult(**return_obj)

~/.pyenv/versions/3.7.7/lib/python3.7/site-packages/great_expectations/dataset/dataset.py in inner_wrapper(self, column, result_format, *args, **kwargs)
     93             null_count = element_count - nonnull_count
     94 
---> 95             evaluation_result = func(self, column, *args, **kwargs)
     96 
     97             if "success" not in evaluation_result:

~/.pyenv/versions/3.7.7/lib/python3.7/site-packages/great_expectations/dataset/dataset.py in expect_column_proportion_of_unique_values_to_be_between(self, column, min_value, max_value, strict_min, strict_max, result_format, include_config, catch_exceptions, meta)
   3006                 above_min = proportion_unique > min_value
   3007             else:
-> 3008                 above_min = proportion_unique >= min_value
   3009         else:
   3010             above_min = True

TypeError: '>=' not supported between instances of 'NoneType' and 'float'

It might be instructive to see if any other expectations fail on empty dataset.
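The failing comparison itself is plain Python behaviour, independent of great_expectations: once the computed statistic ends up as None, any ordered comparison with a float raises. A minimal sketch:

```python
proportion_unique = None  # what the metric ends up as on an empty dataset

try:
    proportion_unique >= 0.5
except TypeError as e:
    print(e)  # '>=' not supported between instances of 'NoneType' and 'float'
```

A friendlier path would be an explicit None (or empty-dataset) check before the comparison, producing a descriptive error instead.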

created time in 3 months

issue openedgolang/go

encoding/csv: skipping of empty rows leads to loss of data in single-column datasets

What version of Go are you using (go version)?

$ go version
go version go1.14 darwin/amd64

Does this issue reproduce with the latest release?

Yes.

What operating system and processor architecture are you using (go env)?

go env Output

$ go env
GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/ondrej/Library/Caches/go-build"
GOENV="/Users/ondrej/Library/Application Support/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOINSECURE=""
GONOPROXY=""
GONOSUMDB=""
GOOS="darwin"
GOPATH="/Users/ondrej/go"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/usr/local/Cellar/go/1.14/libexec"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/usr/local/Cellar/go/1.14/libexec/pkg/tool/darwin_amd64"
GCCGO="gccgo"
AR="ar"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/hp/q7nph21s1q76nw1hv1hfxv2m0000gn/T/go-build988616937=/tmp/go-build -gno-record-gcc-switches -fno-common"

What did you do?

I wrote a CSV with a single column and missing data.

What did you expect to see?

I expected to load the data back, intact.

What did you see instead?

I lost the missing values: encoding/csv skipped them, as it skips blank lines. In this case, a blank line actually represents data.


I'm not sure I understand the rationale behind skipping blank lines. Neither in terms of common practice (why would I have blank lines in my CSVs?) nor in terms of standards (the closest we have is RFC 4180 and I couldn't find anything about blank lines - so I'm not sure if Go follows it).

Here's a reproduction of the problem. I wrote a dataset into a file and was unable to read it back.

package main

import (
	"encoding/csv"
	"errors"
	"log"
	"os"
	"reflect"
)

func writeData(filename string, data [][]string) error {
	f, err := os.Create(filename)
	if err != nil {
		return err
	}
	defer f.Close()
	cw := csv.NewWriter(f)
	defer cw.Flush()
	if err := cw.WriteAll(data); err != nil {
		return err
	}
	return nil
}

func readData(filename string) ([][]string, error) {
	f, err := os.Open(filename)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	cr := csv.NewReader(f)
	rows, err := cr.ReadAll()
	if err != nil {
		return nil, err
	}
	return rows, nil
}

func run() error {
	fn := "data/roundtrip.csv"
	data := [][]string{{"john"}, {"jane"}, {""}, {"jack"}}

	if err := writeData(fn, data); err != nil {
		return err
	}

	returned, err := readData(fn)
	if err != nil {
		return err
	}
	if !reflect.DeepEqual(returned, data) {
		log.Println("expected", data, "got", returned)
		return errors.New("not equal")
	}

	return nil
}

func main() {
	if err := run(); err != nil {
		log.Fatal(err)
	}
}

created time in 3 months

issue commentgreat-expectations/great_expectations

NameError: name 'WithinGroup' is not defined

Duplicate of #1443.

shahinism

comment created time in 3 months

issue commentgreat-expectations/great_expectations

Lack of optional dependency crashes init

@eugmandel I understand that, but GE crashes before I connect to a database, before I do anything, really. And it’s not a “I don’t have SQLAlchemy error”, it’s due to a lack of type information.

I even included a trivial way to reproduce the problem.

kokes

comment created time in 3 months

issue openedgreat-expectations/great_expectations

Lack of optional dependency crashes init

When I install Great Expectations without SQLAlchemy, great_expectations init will fail due to missing type information.

To Reproduce Steps to reproduce the behavior:

docker run -it --rm python:3-slim bash
pip3 install great_expectations
great_expectations init

Expected behavior I expected init to run. (After I manually installed sqlalchemy, it did work.)

The reason for this is that you allow SQL Alchemy to be not imported (https://github.com/great-expectations/great_expectations/blob/develop/great_expectations/dataset/util.py#L16), but later require its type information (https://github.com/great-expectations/great_expectations/blob/develop/great_expectations/dataset/util.py#L578).
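A common remedy for this pattern is to confine the import to static type checking and use string annotations, so the module imports cleanly even without SQLAlchemy installed. A sketch of that approach — the function name, import path, and body here are illustrative, not the actual great_expectations fix:

```python
from typing import TYPE_CHECKING, List

if TYPE_CHECKING:
    # evaluated only by type checkers (mypy etc.), never at runtime
    from sqlalchemy.sql.expression import WithinGroup

def approximate_percentile_sql(selects: "List[WithinGroup]") -> str:
    # the annotation is a string, so importing this module no longer
    # requires sqlalchemy to be present
    return ", ".join(str(s) for s in selects)
```

`from __future__ import annotations` achieves the same effect for all annotations in a module without quoting each one.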

Environment (please complete the following information):

  • OS: macOS, Ubuntu
  • GE Version: latest, 0.10.11

Additional context

traceback:

$ great_expectations -v init
Traceback (most recent call last):
  File "/Users/okokes/.pyenv/versions/3.7.7/bin/great_expectations", line 5, in <module>
    from great_expectations.cli import main
  File "/Users/okokes/.pyenv/versions/3.7.7/lib/python3.7/site-packages/great_expectations/__init__.py", line 7, in <module>
    from great_expectations.data_context import DataContext
  File "/Users/okokes/.pyenv/versions/3.7.7/lib/python3.7/site-packages/great_expectations/data_context/__init__.py", line 3, in <module>
    from .data_context import BaseDataContext, DataContext, ExplorerDataContext
  File "/Users/okokes/.pyenv/versions/3.7.7/lib/python3.7/site-packages/great_expectations/data_context/data_context.py", line 20, in <module>
    from great_expectations.core.usage_statistics.usage_statistics import (
  File "/Users/okokes/.pyenv/versions/3.7.7/lib/python3.7/site-packages/great_expectations/core/usage_statistics/usage_statistics.py", line 20, in <module>
    from great_expectations.core.usage_statistics.anonymizers.data_docs_site_anonymizer import (
  File "/Users/okokes/.pyenv/versions/3.7.7/lib/python3.7/site-packages/great_expectations/core/usage_statistics/anonymizers/data_docs_site_anonymizer.py", line 2, in <module>
    from great_expectations.core.usage_statistics.anonymizers.site_builder_anonymizer import (
  File "/Users/okokes/.pyenv/versions/3.7.7/lib/python3.7/site-packages/great_expectations/core/usage_statistics/anonymizers/site_builder_anonymizer.py", line 2, in <module>
    from great_expectations.render.renderer.site_builder import (
  File "/Users/okokes/.pyenv/versions/3.7.7/lib/python3.7/site-packages/great_expectations/render/renderer/__init__.py", line 7, in <module>
    from .other_section_renderer import ProfilingResultsOverviewSectionRenderer
  File "/Users/okokes/.pyenv/versions/3.7.7/lib/python3.7/site-packages/great_expectations/render/renderer/other_section_renderer.py", line 4, in <module>
    from great_expectations.profile.basic_dataset_profiler import BasicDatasetProfiler
  File "/Users/okokes/.pyenv/versions/3.7.7/lib/python3.7/site-packages/great_expectations/profile/__init__.py", line 1, in <module>
    from .basic_dataset_profiler import BasicDatasetProfiler
  File "/Users/okokes/.pyenv/versions/3.7.7/lib/python3.7/site-packages/great_expectations/profile/basic_dataset_profiler.py", line 3, in <module>
    from great_expectations.profile.base import (
  File "/Users/okokes/.pyenv/versions/3.7.7/lib/python3.7/site-packages/great_expectations/profile/base.py", line 8, in <module>
    from ..dataset import Dataset
  File "/Users/okokes/.pyenv/versions/3.7.7/lib/python3.7/site-packages/great_expectations/dataset/__init__.py", line 3, in <module>
    from .dataset import Dataset
  File "/Users/okokes/.pyenv/versions/3.7.7/lib/python3.7/site-packages/great_expectations/dataset/dataset.py", line 13, in <module>
    from great_expectations.dataset.util import (
  File "/Users/okokes/.pyenv/versions/3.7.7/lib/python3.7/site-packages/great_expectations/dataset/util.py", line 578, in <module>
    selects: List[WithinGroup], sql_engine_dialect: DefaultDialect
NameError: name 'WithinGroup' is not defined

created time in 3 months

issue openedgolang/go

cmd/link: binary on darwin takes up more space on disk

What version of Go are you using (go version)?

$ go version
go version devel +8ab37b1baf Mon Apr 20 18:32:58 2020 +0000 darwin/amd64

Does this issue reproduce with the latest release?

No (as in this is a regression in the current master, the latest stable is fine)

What operating system and processor architecture are you using (go env)?

go env Output

$ go env
GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/okokes/Library/Caches/go-build"
GOENV="/Users/okokes/Library/Application Support/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOINSECURE=""
GONOPROXY=""
GONOSUMDB=""
GOOS="darwin"
GOPATH="/Users/okokes/go"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/usr/local/Cellar/go/1.14.2_1/libexec"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/usr/local/Cellar/go/1.14.2_1/libexec/pkg/tool/darwin_amd64"
GCCGO="gccgo"
AR="ar"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/81/4jydp7kn51n6p68z88sqnkzc0000gn/T/go-build271357823=/tmp/go-build -gno-record-gcc-switches -fno-common"

What did you do?

I noticed my binary went from 8MB (1.14.2) to 13MB (master), but only when running du -sh or inspecting the binary in Finder (under "how much it takes on disk").

When stating both binaries, they are very similar in size, so this issue only revolves around disk usage (allocated blocks), not the reported file size.

I replicated the issue by creating a hello world app

package main

import "fmt"

func main() {
	fmt.Println("ahoy")
}

And then I bisected it, starting at 181153369534c6987306c47630f9e4fbf07b467f (good) and ending at cb11c981df7b4dc40550ab71cc097c25d24d7a71 (bad).

package main

import (
	"log"
	"os/exec"
)

func main() {
	build := exec.Command("/Users/okokes/git/go/src/make.bash")
	build.Dir = "/Users/okokes/git/go/src"
	err := build.Run()
	if err != nil {
		log.Fatal("toolchain build failed", err)
	}

	cmd := exec.Command("/Users/okokes/git/go/bin/go", "build", "src/hello.go")
	err = cmd.Run()
	if err != nil {
		log.Fatal("program build failed", err)
	}

	sz := exec.Command("du", "-sh", "hello")
	out, err := sz.Output()
	if err != nil {
		log.Fatal("du failed", err)
	}
	num := out[0]
	if num != '2' {
		log.Fatal(string(out))
	}
}

Bisect identified 8ab37b1 as the first offending commit. I verified it manually - the commit before that leads to a 2.1MB binary on disk, while this commit leads to 4.1MB on disk.

I could not replicate this on a Ubuntu 18.04 box, so I presume it's a Darwin thing.

What did you expect to see?

$ stat -f '%z %N' hello_*
2216280 hello_8ab37b1
2174008 hello_go1.14
$ du -sh hello_*
2.1M hello_8ab37b1
2.1M hello_go1.14

What did you see instead?

$ stat -f '%z %N' hello_*
2216280 hello_8ab37b1
2174008 hello_go1.14
$ du -sh hello_*
4.1M hello_8ab37b1
2.1M hello_go1.14

created time in 3 months

push eventkokes/nbviewer.js

Ondrej Kokes

commit sha 4575ee0117d0a46bde8429bf1c439b08e9514562

defer script loading

view details

push time in 3 months

pull request commentnteract/commuter

Automatic testing via GitHub Actions

  1. Yup, the stacktrace is in my actions.
  2. When installing on 10/12, I'm told npm WARN deprecated chokidar@2.1.8: Chokidar 2 will break on node v14+. Upgrade to chokidar 3 with 15x less dependencies., so that will be an issue as well. I traced this a few days ago through babel/cli and watchpack and I hit a few beta packages, so I'm not sure it's ready to be updated yet.
  3. While this issue originally popped on my local machine (macOS, Node v14.2.0), I can't quite replicate it now, it builds, though the stack trace should give us something to follow.
  4. I added a build:all to the yaml file and it passed (again, see my actions).
  5. Sadly, act (local CI) might be broken due to a parsing issue.
kokes

comment created time in 3 months

push eventkokes/commuter

Ondrej Kokes

commit sha 07404c745e3f74a45da8b7b0cba63517dd984abd

adding a build step in CI

view details

push time in 3 months
