If you are wondering where this site's data comes from, please visit https://api.github.com/users/daanzu/events. GitMemory does not store any data itself; it only uses NGINX to cache data for a period of time. The idea behind GitMemory is simply to give users a better reading experience.
David Zurow (daanzu), david.zurow at gmail

daanzu/kaldi-active-grammar 212

Python Kaldi speech recognition with grammars that can be set active/inactive dynamically at decode-time

daanzu/deepspeech-websocket-server 80

Server & client for DeepSpeech using WebSockets for real-time speech recognition in separate environments

daanzu/kaldi-grammar-simple 7

Simple voice recognition grammar for coding on Linux.

daanzu/dragonfly 5

Speech recognition framework allowing powerful Python-based scripting and extension of Dragon NaturallySpeaking (DNS), Windows Speech Recognition (WSR) and CMU Pocket Sphinx

daanzu/py-webrtcvad-wheels 5

Python interface to the WebRTC Voice Activity Detector

daanzu/hidemulator 1

Automatically exported from code.google.com/p/hidemulator

daanzu/kaldi 1

This is now the official location of the Kaldi project.

issue comment flashlight/wav2letter

How to deal with model checkpoint compatibility issue?

Good!

Since there was a syntax error, I changed these two lines to

yBatched = layer(i)->forward(std::vector<Variable>({yBatched, fl::Variable(af::array(), false)})).front();
tmp.push_back(fl::Variable(af::array(), false));
yBatched = layer(i)->forward(tmp).front();

Then the model works fine!

DongChanS

comment created time in an hour

issue comment dictation-toolbox/dragonfly

Dragonfly Python3 Support

@LexiconCode Thanks for your help

Looks like wxPython needed to be installed too. Thanks to your tip, I found the package here and installed it with

pip install https://download.lfd.uci.edu/pythonlibs/w4tscw6k/cp27/wxPython-3.0.2.0-cp27-none-win32.whl

Unfortunately, it didn't help me get past the following error when executing start_configurenatlink.py

Try to run configurenatlink.py, the Natlink Config GUI, in Elevated mode...

Unable to run the GUI configuration program of NatLink/Unimacro/Vocola
because module wx was not found.  This probably
means that wxPython is not installed correct:

Looks like the only option to use Dragonfly with DNS is to upgrade to Python 3.

bijanbina

comment created time in 2 hours

push event SpeechColab/GigaSpeech

chaisz19

commit sha 0154f813d1b9e606045964b711a11707f43d52cf

Update download_meta.sh

view details

push time in 2 hours

issue opened flashlight/wav2letter

Reasoning delay problem

In wav2letter/recipes/streaming_convnets/inference/inference/examples/:
InteractiveStreamingASRExample.cpp
MultithreadedStreamingASRExample.cpp
SimpleStreamingASRExample.cpp

I found a problem with streaming decoding, and all of the above examples have it. Data that has just been sent is not transcribed until 3 seconds later, so the last 3 seconds of text are lost when transcribing a file. If another 3 seconds of data are sent in, they will not be transcribed either. Is there any way to solve this problem, or to have the data transcribed immediately without delay? Thanks.

created time in 3 hours

push event SpeechColab/GigaSpeech

Di Wu

commit sha 3a8b65fe2ce0ab336c535c02be22cc4116c15a0e

[wenet] data format support

view details

Di Wu

commit sha c3b705b21f2c617a6f6658ebcaa65aefea3bfe4c

[wenet] data format support

view details

Di Wu

commit sha 74f4c3429ec181071c401e0c3659c8a525984231

Merge branch 'diwu-giga' of https://github.com/whiteshirt0429/GigaSpeech into diwu-giga

# Conflicts:
#	toolkits/wenet/gigaspeech_data_prep.sh

view details

Guoguo Chen

commit sha 5fbf9d76ff3760de223a54d18206377c5597aad7

Merge pull request #39 from whiteshirt0429/diwu-giga

[wenet] data format support

view details

push time in 4 hours

issue comment flashlight/wav2letter

How to deal with model checkpoint compatibility issue?

@DongChanS Please change this https://github.com/flashlight/flashlight/blob/master/flashlight/app/asr/criterion/TransformerCriterion.cpp#L284 to

yBatched = layer(i)->forward(std::vector<Variable>({yBatched}), fl::Variable(af::array())).front();

and this https://github.com/flashlight/flashlight/blob/master/flashlight/app/asr/criterion/TransformerCriterion.cpp#L296 to

yBatched = layer(i)->forward(tmp, fl::Variable(af::array())).front();

I will send this fix later, but this should unblock you. Let me know if you still have problems.

DongChanS

comment created time in 4 hours

issue opened SpeechColab/GigaSpeech

Metadata conversion (json -> jsonl)

See the discussion here: https://github.com/SpeechColab/PySpeechColab/pull/2

We should make changes to https://github.com/SpeechColab/GigaSpeech/blob/main/utils/download_meta.sh to include the json->jsonl conversion.

The conversion command is jq -c '.audios[]' GigaSpeech.json > GigaSpeech.jsonl

See examples here for checking jq installation: https://github.com/SpeechColab/GigaSpeech/blob/main/utils/show_segment_info.sh
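
A minimal sketch of the combined check-and-convert step: the jq presence test below is an assumption modeled on the check in show_segment_info.sh; only the jq command itself is from this issue.

# Sketch: verify jq is available, then flatten the audio documents to jsonl.
if ! which jq >/dev/null 2>&1; then
  echo "$0: jq is not installed, please install it first" && exit 1
fi
jq -c '.audios[]' GigaSpeech.json > GigaSpeech.jsonl || exit 1

Each line of GigaSpeech.jsonl then holds one audio document, so downstream tools can stream the metadata instead of parsing the full GigaSpeech.json at once.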

created time in 5 hours

Pull request review comment SpeechColab/GigaSpeech

[wenet] data format support

+#!/usr/bin/env bash
+# Copyright 2021  Xiaomi Corporation (Author: Yongqing Wang)
+#                 Seasalt AI, Inc (Author: Guoguo Chen)
+#                 Mobvoi Corporation (Author: Di Wu)
+
+set -e
+set -o pipefail
+
+stage=2

Is there a reason why we start with stage 2?

whiteshirt0429

comment created time in 5 hours

Pull request review comment SpeechColab/GigaSpeech

[wenet] data format support

+#!/usr/bin/env bash
+# Copyright 2021  Xiaomi Corporation (Author: Yongqing Wang)
+#                 Seasalt AI, Inc (Author: Guoguo Chen)
+#                 Mobvoi Corporation (Author: Di Wu)
+
+set -e
+set -o pipefail
+
+stage=2
+prefix=gigaspeech
+garbage_utterance_tags="<SIL> <MUSIC> <NOISE> <OTHER>"
+punctuation_tags="<COMMA> <EXCLAMATIONPOINT> <PERIOD> <QUESTIONMARK>"
+train_subset=XL
+
+. ./utils/parse_options.sh || exit 1;
+
+filter_by_id () {
+  idlist=$1
+  input=$2
+  output=$3
+  field=1
+  if [ $# -eq 4 ]; then
+    field=$4
+  fi
+  cat $input | perl -se '
+    open(F, "<$idlist") || die "Could not open id-list file $idlist";
+    while(<F>) {
+      @A = split;
+      @A>=1 || die "Invalid id-list file line $_";
+      $seen{$A[0]} = 1;
+    }
+    while(<>) {
+      @A = split;
+      @A > 0 || die "Invalid file line $_";
+      @A >= $field || die "Invalid file line $_";
+      if ($seen{$A[$field-1]}) {
+        print $_;
+      }
+    }' -- -idlist="$idlist" -field="$field" > $output ||\
+  (echo "$0: filter_by_id() error: $input" && exit 1) || exit 1;
+}
+
+subset_data_dir () {
+  utt_list=$1
+  src_dir=$2
+  dest_dir=$3
+  mkdir -p $dest_dir || exit 1;
+  # wav.scp text segments utt2dur
+  filter_by_id $utt_list $src_dir/utt2dur $dest_dir/utt2dur ||\
+    (echo "$0: subset_data_dir() error: $src_dir/utt2dur" && exit 1) || exit 1;
+  filter_by_id $utt_list $src_dir/text $dest_dir/text ||\
+    (echo "$0: subset_data_dir() error: $src_dir/text" && exit 1) || exit 1;
+  filter_by_id $utt_list $src_dir/segments $dest_dir/segments ||\
+    (echo "$0: subset_data_dir() error: $src_dir/segments" && exit 1) || exit 1;
+  awk '{print $2}' $dest_dir/segments | sort | uniq > $dest_dir/reco
+  filter_by_id $dest_dir/reco $src_dir/wav.scp $dest_dir/wav.scp ||\
+    (echo "$0: subset_data_dir() error: $src_dir/wav.scp" && exit 1) || exit 1;
+  rm -f $dest_dir/reco
+}
+
+if [ $# -ne 2 ]; then
+  echo "Usage: $0 [options] <gigaspeech-dataset-dir> <data-dir>"
+  echo " e.g.: $0 --train-subset XL /disk1/audio_data/gigaspeech/ data/"
+  echo ""
+  echo "This script takes the GigaSpeech source directory, and prepares the"
+  echo "WeNet format data directory."
+  echo "  --garbage-utterance-tags <tags>  # Tags for non-speech."
+  echo "  --prefix <prefix>                # Prefix for output data directory."
+  echo "  --punctuation-tags <tags>        # Tags for punctuations."
+  echo "  --stage <stage>                  # Processing stage."
+  echo "  --train-subset <XL|L|M|S|XS>     # Train subset to be created."
+  exit 1
+fi
+
+gigaspeech_dir=$1
+data_dir=$2
+
+declare -A subsets
+subsets=(
+  [XL]="train_xl"
+  [L]="train_l"
+  [M]="train_m"
+  [S]="train_s"
+  [XS]="train_xs"
+  [DEV]="dev"
+  [TEST]="test")
+prefix=${prefix:+${prefix}_}
+
+corpus_dir=$data_dir/${prefix}corpus/
+if [ $stage -le 1 ]; then
+  echo "$0: Extract meta into $corpus_dir"
+  # Sanity check.
+  [ ! -f $gigaspeech_dir/GigaSpeech.json ] &&\
+    echo "$0: Please download $gigaspeech_dir/GigaSpeech.json!" && exit 1;
+  [ ! -d $gigaspeech_dir/audio ] &&\
+    echo "$0: Please download $gigaspeech_dir/audio!" && exit 1;
+
+  [ ! -d $corpus_dir ] && mkdir -p $corpus_dir
+
+  # Files to be created:
+  # wav.scp text segments utt2dur
+  python3 ../../toolkits/wenet/extract_meta.py \
+     $gigaspeech_dir/GigaSpeech.json $corpus_dir | exit 1;

We should probably assume that users will run the gigaspeech scripts under the repo root directory? So something like

  python3 toolkits/wenet/extract_meta.py ...
whiteshirt0429

comment created time in 5 hours

issue closed flashlight/wav2letter

[TDS model with transducer]

Question

Transducer loss is better than CTC in the paper https://arxiv.org/pdf/2011.04785.pdf. It uses an LC-BLSTM as the encoder to compare the 3 losses. Have you tried to train a TDS-transducer, and is it better than TDS-CTC?

closed time in 5 hours

jinggaizi

PR opened SpeechColab/GigaSpeech

[wenet] data format support
+229 -1

0 comment

3 changed files

pr created time in 6 hours

push event SpeechColab/GigaSpeech

chaisz19

commit sha 505b034229be765f72a5f66a037e56f7afbd277b

Update

view details

push time in 7 hours

push event dictation-toolbox/Caster

tieTYT

commit sha 19fcb490666b2c4a7158242584bc0ea18277ed9a

ContextSet Doc Clarification

view details

push time in 10 hours

PR merged dictation-toolbox/Caster

ContextSet Doc Clarification Documentation

I think "A few points about the ContextSet:" implies that the topic is changing, so hopefully my edit will override that potential assumption.

Motivation and Context

I was confused reading the documentation, and I think this could prevent confusion in the future.

Types of changes


  • [x] Docs change / refactoring / dependency upgrade
  • [ ] Bug fix (non-breaking change which fixes an issue or bug)
  • [ ] New feature (non-breaking change which adds functionality)
  • [ ] Breaking change (fix or feature that would cause existing functionality to change)
  • [ ] Renamed existing command phrases (we discourage this without a strong rationale).

Checklist


  • [x] I have read the CONTRIBUTING document.
  • [ ] My code follows the code style of this project.
  • [ ] I have checked that my code does not duplicate functionality elsewhere in Caster.
  • [ ] I have checked for and utilized existing command phrases from within Caster (delete if not applicable).
  • [ ] My code implements all the features I wish to merge in this pull request.
  • [ ] My change requires a change to the documentation.
  • [x] I have updated the documentation accordingly.
  • [ ] I have added tests to cover my changes.
  • [ ] All new and existing tests pass.

Maintainer/Reviewer Checklist


  • [ ] Basic functionality has been tested and works as claimed.
  • [ ] New documentation is clear and complete.
  • [x] Code is clear and readable.
+1 -1

0 comment

1 changed file

tieTYT

pr closed time in 10 hours

PR opened dictation-toolbox/Caster

I'm suggesting this so the reader knows they shouldn't be expected to understand the examples yet

I think "A few points about the ContextSet:" implies that the topic is changing, so hopefully my edit will override that potential assumption.

Motivation and Context

I was confused reading the documentation, and I think this could prevent confusion in the future.

Types of changes


  • [x] Docs change / refactoring / dependency upgrade
  • [ ] Bug fix (non-breaking change which fixes an issue or bug)
  • [ ] New feature (non-breaking change which adds functionality)
  • [ ] Breaking change (fix or feature that would cause existing functionality to change)
  • [ ] Renamed existing command phrases (we discourage this without a strong rationale).

Checklist


  • [x] I have read the CONTRIBUTING document.
  • [ ] My code follows the code style of this project.
  • [ ] I have checked that my code does not duplicate functionality elsewhere in Caster.
  • [ ] I have checked for and utilized existing command phrases from within Caster (delete if not applicable).
  • [ ] My code implements all the features I wish to merge in this pull request.
  • [ ] My change requires a change to the documentation.
  • [x] I have updated the documentation accordingly.
  • [ ] I have added tests to cover my changes.
  • [ ] All new and existing tests pass.

Maintainer/Reviewer Checklist


  • [ ] Basic functionality has been tested and works as claimed.
  • [ ] New documentation is clear and complete.
  • [ ] Code is clear and readable.
+1 -1

0 comment

1 changed file

pr created time in 10 hours

issue comment alphacep/vosk-api

Android documentation missing?

Apologies, I wasn't very clear with my original question.

The README file for the existing vosk-android-demo project says that the demo implements speaker verification. I was wondering whether it is necessary to import that functionality from this repo, or whether the demo project already contains it and I simply could not find it.

Also, I am quite new to Android development. I am trying to take in live speaker audio (not an existing audio file) and use speaker verification to compare it against an existing database of audio files. I am comfortable creating the speaker models, but I am not sure how to build a model from that input to compare with the existing speaker models using the given functionality. The vosk-android-demo as it comes uses recognition the way I would like (live audio input), but I am not sure how to generate a model from that input to compare and verify the speaker.

I have this right now in the VoskActivity.java class in the android-demo project, but I am sure I am not doing it correctly:

private void recognizeSpeaker() {
    Model model = new Model(/* model path */); // how to receive model data??
    SpeakerModel spkmod = new SpeakerModel(/* path */);
    Recognizer recog = new Recognizer(model, spkmod, 16000.0f);
    recog.getResult(); // does this return the speaker?
}

Thanks for this wonderful project, I have learned a lot already and am happy to keep working and debugging.

ian-yang-02

comment created time in 10 hours

issue comment alphacep/vosk-api

How to set-up a Vosk multi-threads server architecture in NodeJs

Related to https://github.com/alphacep/vosk-api/issues/516; maybe also depending on https://github.com/node-ffi-napi/ref-napi/issues/54.

solyarisoftware

comment created time in 16 hours

started o-o-overflow/dc2021q-a-fallen-lap-ray

started time in 16 hours

issue comment alphacep/vosk-api

Raspberry PI 3 B+ Kaldi Docker Image Issues - Does not match host platform

You can install vosk with pip, then simply run the server from a git clone. There is no need to run Docker on an RPi 3; it is very slow.
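
A minimal sketch of that route, assuming the server comes from the alphacep/vosk-server repo; the model name and the expected ./model directory are assumptions here, so check that repo's README:

pip3 install vosk
git clone https://github.com/alphacep/vosk-server
cd vosk-server/websocket
# Small model suited to an RPi 3; the exact name is illustrative.
wget https://alphacephei.com/vosk/models/vosk-model-small-en-us-0.15.zip
unzip vosk-model-small-en-us-0.15.zip && mv vosk-model-small-en-us-0.15 model
python3 ./asr_server.py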

mjrevel

comment created time in 17 hours

issue opened alphacep/vosk-api

Raspberry PI 3 B+ Kaldi Docker Image Issues - Does not match host platform

Having a bit of trouble following the Vosk install guide. When I tried to get the latest Kaldi Docker image (alphacep/kaldi-en:latest) I received the error

WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm/v7)...

I'm familiar with what this error means, but there doesn't seem to be an image available for the CPU architecture I'm using.

For those running Vosk on a Pi, how did you get around this? Did you end up having to compile from source?

created time in 17 hours

Pull request review comment SpeechColab/GigaSpeech

download subset

 #!/usr/bin/env bash
 # Copyright 2021  Jiayu Du
 #                 Seasalt AI, Inc (Author: Guoguo Chen)
-
+#                 Tsinghua University (Author: Shuzhou Chai)
 
 set -e
 set -o pipefail
 
-if [ $# -ne 1 ]; then
-  echo "Usage: $0 <gigaspeech-dataset-dir>"
-  echo " e.g.: $0 /disk1/audio_data/gigaspeech"
+
+subset=XL
+
+. ./utils/parse_options.sh || exit 1;
+
+if [ $# -ne 2 ]; then
+  echo "Usage: $0 [options] <gigaspeech-dataset-dir> <GigaSpeech-json> "

Yes, that's why I was saying that it was not backward compatible... I'd prefer we provide that through something like --gigaspeech-meta or gigaspeech-json-meta

chaisz19

comment created time in 17 hours

issue comment alphacep/vosk-android-demo

Improve accuracy for hot word detection

Sure, thanks! How did you reduce the false positives? Can you share your code, please?

dewijones92

comment created time in 18 hours

started coqui-ai/coqpit

started time in 19 hours

issue comment alphacep/vosk-api

C# - BadImageFormatException

@GABowers Those models need to be the nnet3 chain setup, I think; now (ver .27) with const ARPA and RNNLM rescoring.

GABowers

comment created time in 20 hours

issue comment alphacep/vosk-api

VoskJS HTTPServer crash (v8::internal::GlobalBackingStoreRegistry::Register)

Meanwhile, I confirm that with node 13.14.0, all the tests I previously mentioned pass successfully (without any crash).

$ node --version
v13.14.0

$ abtest.sh 

test httpServer using apache bench
  10 concurrent clients
  200 requests to run

full ab command
  ab -p /home/giorgio/voskjs/tests/body.json -T application/json -H 'Content-Type: application/json' -c 10 -n 200 -l -k -r http://localhost:3000/transcript

This is ApacheBench, Version 2.3 <$Revision: 1843412 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking localhost (be patient)
Completed 100 requests
Completed 200 requests
Finished 200 requests


Server Software:        
Server Hostname:        localhost
Server Port:            3000

Document Path:          /transcript
Document Length:        Variable

Concurrency Level:      10
Time taken for tests:   78.491 seconds
Complete requests:      200
Failed requests:        0
Keep-Alive requests:    0
Total transferred:      78199 bytes
Total body sent:        48400
HTML transferred:       56799 bytes
Requests per second:    2.55 [#/sec] (mean)
Time per request:       3924.528 [ms] (mean)
Time per request:       392.453 [ms] (mean, across all concurrent requests)
Transfer rate:          0.97 [Kbytes/sec] received
                        0.60 kb/s sent
                        1.58 kb/s total

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:   603 3836 628.8   4029    4808
Waiting:      601 3734 636.1   3955    4804
Total:        603 3836 628.8   4030    4808

Percentage of the requests served within a certain time (ms)
  50%   4030
  66%   4146
  75%   4221
  80%   4296
  90%   4487
  95%   4655
  98%   4737
  99%   4808
 100%   4808 (longest request)
solyarisoftware

comment created time in a day

issue comment alphacep/vosk-api

VoskJS HTTPServer crash (v8::internal::GlobalBackingStoreRegistry::Register)

Ah! I didn't try node 13 (only node 16 and 14); good to know. Thanks.

solyarisoftware

comment created time in a day

issue comment alphacep/vosk-api

VoskJS HTTPServer crash (v8::internal::GlobalBackingStoreRegistry::Register)

It's a work in progress. You can use node 13 for now; node 14-16 is broken. You have to wait till I fix this bug in node-ffi-napi.

solyarisoftware

comment created time in a day

issue comment alphacep/vosk-api

VoskJS HTTPServer crash (v8::internal::GlobalBackingStoreRegistry::Register)

Hi Nicolay

I'm following your tests on https://github.com/node-ffi-napi/ref-napi/issues/54, with some difficulty given my ignorance in these realms.

On my side, I have done more trivial tests on my project VoskJS:

(1) If I do NOT use the new Vosk multithreading feature, i.e. I do NOT use rec.acceptWaveformAsync(data) but instead the old (thread-blocking) rec.acceptWaveform(data): https://github.com/solyarisoftware/voskJs/blob/master/voskjs.js#L137, I do not see the issue.

For example, consider a program that SEQUENTIALLY invokes the Vosk API recognizer: https://github.com/solyarisoftware/voskJs/blob/master/tests/sequentialRequests.js#L27

If I use rec.acceptWaveform(data):

const { result, latency } = await transcript(audioFile, model, {multiThreads:false})

Everything runs smoothly and there are no crashes.

(2) But the issue happens every time I use acceptWaveformAsync. For example, in the program above, if I set

const { result, latency } = await transcript(audioFile, model, {multiThreads:true})

Now the program crashes after N requests:

$ node sequentialRequests 500
requests               : 500

model directory        : ../models/vosk-model-small-en-us-0.15
speech file name       : ../audio/2830-3980-0043.wav

load model latency     : 327ms

request nr.            : 1
transcript latency     : 567ms
request nr.            : 2
transcript latency     : 583ms
request nr.            : 3
transcript latency     : 563ms
request nr.            : 4
transcript latency     : 553ms
request nr.            : 5
transcript latency     : 549ms
request nr.            : 6
transcript latency     : 561ms
request nr.            : 7
transcript latency     : 551ms
request nr.            : 8
transcript latency     : 571ms
request nr.            : 9
transcript latency     : 589ms
request nr.            : 10
transcript latency     : 560ms
request nr.            : 11
transcript latency     : 556ms
request nr.            : 12
transcript latency     : 557ms
request nr.            : 13
transcript latency     : 578ms
request nr.            : 14
transcript latency     : 563ms
request nr.            : 15
transcript latency     : 565ms
request nr.            : 16
transcript latency     : 571ms
request nr.            : 17
transcript latency     : 560ms
request nr.            : 18
transcript latency     : 595ms
request nr.            : 19
transcript latency     : 552ms

...
...

request nr.            : 71
transcript latency     : 781ms
request nr.            : 72
transcript latency     : 792ms
request nr.            : 73
transcript latency     : 774ms
request nr.            : 74
transcript latency     : 783ms
request nr.            : 75
transcript latency     : 791ms
request nr.            : 76
transcript latency     : 799ms
request nr.            : 77
transcript latency     : 799ms
request nr.            : 78
transcript latency     : 794ms
request nr.            : 79


#
# Fatal error in , line 0
# Check failed: result.second.
#
#
#
#FailureMessage Object: 0x7ffc02120c80
 1: 0xb7db41  [node]
 2: 0x1c15474 V8_Fatal(char const*, ...) [node]
 3: 0x100c201 v8::internal::GlobalBackingStoreRegistry::Register(std::shared_ptr<v8::internal::BackingStore>) [node]
 4: 0xd23818 v8::ArrayBuffer::GetBackingStore() [node]
 5: 0xacb640 napi_get_typedarray_info [node]
 6: 0x7f4554f3c0ff  [/home/giorgio/voskjs/node_modules/ref-napi/prebuilds/linux-x64/node.napi.node]
 7: 0x7f4554f3c8a8  [/home/giorgio/voskjs/node_modules/ref-napi/prebuilds/linux-x64/node.napi.node]
 8: 0x7f4554f3e591  [/home/giorgio/voskjs/node_modules/ref-napi/prebuilds/linux-x64/node.napi.node]
 9: 0x7f4554f44d6b Napi::details::CallbackData<void (*)(Napi::CallbackInfo const&), void>::Wrapper(napi_env__*, napi_callback_info__*) [/home/giorgio/voskjs/node_modules/ref-napi/prebuilds/linux-x64/node.napi.node]
10: 0xac220f  [node]
11: 0xd5f70b  [node]
12: 0xd60bac  [node]
13: 0xd61226 v8::internal::Builtin_HandleApiCall(int, unsigned long*, v8::internal::Isolate*) [node]
14: 0x160c579  [node]
Illegal instruction (core dumped)

Note: it seems to me that N (the number of requests after which the crash happens) depends on the amount of free RAM. I mean that if I have only a few hundred MB free, just because I have the browser open etc., the program seems to crash after very few (N ~= 10) requests. Maybe this is unrelated and I'm wrong. Just FYI.

$ free -h
              total        used        free      shared  buff/cache   available
Mem:           11Gi       6,8Gi       1,6Gi       740Mi       3,3Gi       3,8Gi
Swap:         2,0Gi       1,8Gi       191Mi

Anyway, I believe you are one step ahead in investigating the napi bug 54. I just wish to share my modest tests with you. Thanks, Giorgio

solyarisoftware

comment created time in a day

issue comment alphacep/vosk-api

Cleanup lapack build on android

This is more straightforward now, though I'd move away from cmake.

nshmyrev

comment created time in a day