
guillaumekln/gmm-classifier 2

Gaussian Mixture Model (GMM) classifier in Matlab.

guillaumekln/config-files 1

Personal configuration files.

guillaumekln/nmt-wizard 1

Launch NMT tasks on the cloud

guillaumekln/OpenNMT 1

Open-Source Neural Machine Translation in Torch

guillaumekln/addons 0

Useful extra functionality for TensorFlow maintained by SIG-addons

guillaumekln/asearch 0

Fast approximate string search.

guillaumekln/babble 0

Discourse Shoutbox plugin

guillaumekln/cookie-notice 0

WordPress.org Plugin Mirror

guillaumekln/CTranslate 0

OpenNMT C++ translator

guillaumekln/CTranslate2 0

Custom C++ inference engine for OpenNMT models

Pull request review comment OpenNMT/OpenNMT-tf

New scorers

 def get_long_description():
         "rouge>=1.0,<2",
         "sacrebleu>=1.4.3,<2",
         "tensorflow>=2.1,<2.2",
-        "tensorflow-addons>=0.8.1,<0.9"
+        "tensorflow-addons>=0.8.1,<0.9",
+        "pyter3>=0.3"

As we don't know how stable this package is, let's use a strict requirement: pyter3==0.3.

cservan

comment created time in 7 hours

Pull request review comment OpenNMT/OpenNMT-tf

New scorers

 def make_scorers(names):
       scorer = BLEUScorer()
     elif name == "rouge":
       scorer = ROUGEScorer()
+    elif name == "wer":
+      scorer = WERScorer()
+    elif name == "ter":
+      scorer = TERScorer()
+    elif name == "precision":
+      scorer = PRECISIONScorer()
+    elif name == "recall":
+      scorer = RECALLScorer()
+    elif name == "fmeasure":
+      scorer = FMEASUREScorer()

I'm wondering if we need these scorers, as PRFScorer computes all values in one shot. What do you think?
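For illustration, a combined scorer could look like the following (a minimal sketch: it assumes the Scorer base class and the scores_name/__call__ interface of the scorers module under review, and the token-overlap counting is illustrative, not the exact metric from this PR):

class PRFScorer(Scorer):  # Scorer: assumed base class of the scorers module
  """Computes precision, recall, and F-measure in one pass."""

  def __init__(self):
    super().__init__("prf")

  @property
  def scores_name(self):
    return {"precision", "recall", "fmeasure"}

  def __call__(self, ref_path, hyp_path):
    # Illustrative token-overlap counts; the real metric may differ.
    matches = hyp_count = ref_count = 0
    with open(ref_path) as ref_fp, open(hyp_path) as hyp_fp:
      for ref, hyp in zip(ref_fp, hyp_fp):
        ref_tokens, hyp_tokens = ref.split(), hyp.split()
        matches += sum(1 for token in hyp_tokens if token in ref_tokens)
        hyp_count += len(hyp_tokens)
        ref_count += len(ref_tokens)
    precision = matches / hyp_count if hyp_count else 0.0
    recall = matches / ref_count if ref_count else 0.0
    fmeasure = (2 * precision * recall / (precision + recall)
                if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "fmeasure": fmeasure}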

cservan

comment created time in 7 hours

Pull request review comment OpenNMT/OpenNMT-tf

New scorers

+"""Hypotheses file scoring."""+import numpy+import pyter++def wer(ref_path, hyp_path):+  """ Compute Word Error Rate between two files """+  ref_fp = open(ref_path)+  hyp_fp = open(hyp_path)

These file handles should be closed at the end of the function. To do it automatically, you could turn this into:

with open(ref_path) as ref_fp, open(hyp_path) as hyp_fp:
  ...

This also applies for the other scoring functions.
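Applied to the wer function above, a version that closes the files automatically could look like this (a minimal sketch assuming pyter.ter as the per-sentence metric; the argument order and the averaging are illustrative, not necessarily the final PR code):

import pyter

def wer(ref_path, hyp_path):
  """Computes an average word-level edit rate between two files."""
  with open(ref_path) as ref_fp, open(hyp_path) as hyp_fp:
    # pyter.ter(hypothesis_tokens, reference_tokens); check the argument
    # order for your pyter version.
    scores = [pyter.ter(hyp.split(), ref.split())
              for ref, hyp in zip(ref_fp, hyp_fp)]
  # Both files are closed here, even if an exception was raised above.
  return sum(scores) / len(scores) if scores else 0.0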

cservan

comment created time in 7 hours

issue comment OpenNMT/nmt-wizard-docker

ImportError: No module named nmtwizard.framework

Sorry for the lack of clear instructions. We did not put enough effort into this part.

I completely revisited the main README and added an example. @gussmith Could you take a look and see if it makes things clearer?

I think part of the confusion came from the instructions in frameworks/*/README.md, which describe how to run the script without Docker. As that is not the recommended usage, I removed these instructions entirely.

paulkp

comment created time in 7 hours

push event OpenNMT/nmt-wizard-docker

Guillaume Klein

commit sha 8d2d60a96d1bfccde58b46605297e1b3d3020009

Update README.md


push time in 7 hours

push event OpenNMT/nmt-wizard-docker

Guillaume Klein

commit sha b3da0b01260c52fc2ec4cf2c3d5ecde2df275b00

Revisit documentation


Guillaume Klein

commit sha 60797363b6f7a71667b53e51c6dad44917cd42e2

Set LANG in OpenNMT-tf Dockerfile


Guillaume Klein

commit sha 182bdbc14c4a2daa7373368d633ad03a11804cf1

Update OpenNMT-py to 1.1.1


push time in 8 hours

delete branch guillaumekln/CTranslate2

delete branch : optional-scores

delete time in a day

push event OpenNMT/CTranslate2

Guillaume Klein

commit sha c76af4034684049936d70e36f0344bef9f6544c2

Make output scores optional (#170)

Additional optimizations are possible when the output scores are not required. For example, we can skip the final LogSoftMax during greedy search.


push time in a day

PR merged OpenNMT/CTranslate2

Make output scores optional

Additional optimizations are possible when the output scores are not required. For example, we can skip the final LogSoftMax during greedy search.
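From the Python API, the change described above would be exercised roughly as follows (a hedged sketch: the return_scores option name and the result layout are assumptions about this version of CTranslate2 and may differ):

import ctranslate2

translator = ctranslate2.Translator("ende_ctranslate2/", device="cpu")
# With scores disabled, the engine can skip score-related work such as
# the final LogSoftMax during greedy search (beam_size=1).
results = translator.translate_batch(
    [["Hello", "world", "!"]],
    beam_size=1,
    return_scores=False)
print(results[0])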

+97 -34

0 comment

12 changed files

guillaumekln

pr closed time in a day

PR opened OpenNMT/CTranslate2

Make output scores optional

Additional optimizations are possible when the output scores are not required. For example, we can skip the final LogSoftMax during greedy search.

+97 -34

0 comment

12 changed files

pr created time in a day

push event guillaumekln/CTranslate2

Guillaume Klein

commit sha f6449712e614087c4dc81905da525d3feddfd52d

Make output scores optional

Additional optimizations are possible when the output scores are not required. For example, we can skip the final LogSoftMax during greedy search.


push time in a day

create branch guillaumekln/CTranslate2

branch : optional-scores

created branch time in a day

delete branch guillaumekln/CTranslate2

delete branch : relu-avx

delete time in a day

push event OpenNMT/CTranslate2

Guillaume Klein

commit sha 1e5d549ef614fb1ee63804c865c6e47908e15428

Add static AVX optimization for ReLU (#169)


push time in a day

PR opened OpenNMT/CTranslate2

Add static AVX optimization for ReLU
+33 -1

0 comment

2 changed files

pr created time in a day

create branch guillaumekln/CTranslate2

branch : relu-avx

created branch time in a day

delete branch guillaumekln/CTranslate2

delete branch : partial-weight-alignment

delete time in a day

push event OpenNMT/CTranslate2

Guillaume Klein

commit sha 7d6345cb8c64bc288d768a5a8684b9855b321910

Ensure vocabulary map produces weights with a size divisible by 16 (#168)

This is to ensure memory alignment for subsequent operations.


push time in a day

PR merged OpenNMT/CTranslate2

Ensure vocabulary map produces weights with a size divisible by 16

This is to ensure memory alignment for subsequent operations.
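The alignment constraint boils down to rounding the restricted weight size up to the next multiple of 16; in Python the arithmetic looks like this (an illustrative sketch of the padding logic, the actual implementation being in C++):

def round_up(size, multiple=16):
  """Rounds size up to the next multiple, e.g. round_up(23) == 32."""
  return (size + multiple - 1) // multiple * multiple

assert round_up(23) == 32
assert round_up(32) == 32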

+17 -4

0 comment

2 changed files

guillaumekln

pr closed time in a day

PR opened OpenNMT/CTranslate2

Ensure vocabulary map produces weights with a size divisible by 16

This is to ensure memory alignment for subsequent operations.

+17 -4

0 comment

2 changed files

pr created time in a day

create branch guillaumekln/CTranslate2

branch : partial-weight-alignment

created branch time in a day

issue closed OpenNMT/OpenNMT-tf

understanding weighted inputs functionality

Hi,

I'm trying to use the weighted inputs functionality that was released recently, in the hope that it reduces over-fitting in domain adaptation. The documentation states we can give multiple training files, which I have (presently two). My question is about the eval files, vocabulary, and alignment files (previously I have used train_alignments). Can I just give two eval files, one for each training file? Do I have to combine the vocabulary sets from both training files for source_words_vocabulary and target_words_vocabulary? For train_alignments, do I have to generate one or two files?

The data portion of the config file looks something like this:

model_dir: Weighted_Attention_Run/model/

data:
  train_features_file:
    - euro_train.en
    - custom_train.en
  train_labels_file:
    - euro_train.es
    - custom_train.es
  eval_features_file:
    - euro_dev.en
    - custom_dev.en
  eval_labels_file:
    - euro_dev.es
    - custom_dev.es
  source_words_vocabulary: combined_src_vocab_150k.txt
  target_words_vocabulary: combined_trg_vocab_150k.txt
  train_alignments:
    - euro_corpus_en_es.gdfa
    - custom_corpus_en_es.gdfa

I'm using --model Transformer

Is this the correct way to test this functionality?

Thanks!

closed time in 2 days

mohammedayub44

push event OpenNMT/opennmt.github.io

Guillaume Klein

commit sha 16807e58350769e3ba4fef70ae399e2f75324f08

Update features table


push time in 3 days

issue comment OpenNMT/nmt-wizard-docker

Training a model

Hi,

Your configuration uses mode_type instead of model_type.

julsal

comment created time in 3 days

push event OpenNMT/OpenNMT-tf

Guillaume Klein

commit sha 2c1d81ccd00ff6abd886c180ff81e9821e0fd572

Synchronize words counters in a function


push time in 3 days

delete branch guillaumekln/OpenNMT-tf

delete branch : trainer-cleanup

delete time in 3 days

push event OpenNMT/OpenNMT-tf

Guillaume Klein

commit sha 5cee6d046afc3d3b9a3d14cde7dfb85c9ff1fbdc

Additional cleanup in training classes (#641)

* Reorder methods in base trainer class
* Remove unnecessary overrides
* Move some initialization to the base class
* Simplify reduction


push time in 3 days

create branch guillaumekln/OpenNMT-tf

branch : trainer-cleanup

created branch time in 3 days

delete branch guillaumekln/OpenNMT-tf

delete branch : horovod-v2

delete time in 3 days

push event OpenNMT/OpenNMT-tf

Guillaume Klein

commit sha 72ddc9de29a7987330720927d82a7769f3a82c5c

Add Horovod trainer (#639)

* Add Horovod trainer
* Fix pylint
* Avoid checkpoint related work in workers
* Remove unnecessary manual call to hvd.shutdown
* Update docs
* Factorize gradient accumulation and application


push time in 3 days

PR merged OpenNMT/OpenNMT-tf

Add Horovod trainer

Closes #581.

+278 -131

0 comment

7 changed files

guillaumekln

pr closed time in 3 days

issue closed OpenNMT/OpenNMT-tf

2.x Horovod status

Opening this issue to express interest in and request priority/status updates for Horovod support in OpenNMT-tf 2.x.

From the 2.0.0 release notes:

Some features available in OpenNMT-tf v1 were removed or are temporarily missing in this v2 release. If you relied on some of them, please open an issue to track future support or find workarounds.

  • Asynchronous distributed training
  • Horovod integration ...

closed time in 3 days

JoshuaPostel

push event guillaumekln/OpenNMT-tf

Guillaume Klein

commit sha 51d55270ad94cf9b8320e6552fa6ee7385def3e5

Update docs


Guillaume Klein

commit sha 3dc1b4263d061d9441d6e9f14de34da25eff0e2e

Factorize gradient accumulation and application


push time in 3 days

push event guillaumekln/OpenNMT-tf

Guillaume Klein

commit sha 9b8b323225cc88f64507ef7b37acf14e9e62a0cb

Remove unnecessary manual call to hvd.shutdown


push time in 6 days

push event guillaumekln/OpenNMT-tf

Guillaume Klein

commit sha 0bf8505dc4aa8fcae315a9384f67a6888a86de44

Avoid checkpoint related work in workers


push time in 6 days

issue comment OpenNMT/OpenNMT-tf

GPT2 training reports on source and target words

Right, good find. A quick fix would be to add a condition when declaring the target counter, for example:

if not self._model.unsupervised:
  self._update_words_counter("target", target)
jsenellart

comment created time in 6 days

push event guillaumekln/OpenNMT-tf

Guillaume Klein

commit sha 10d32d7c3d51317ea84cc78d2cc48b144b1ab8e9

Fix pylint


push time in 6 days

issue comment OpenNMT/OpenNMT-tf

2.x Horovod status

Not sure if any of you are still around, but if someone could try #639 and provide feedback, that would be helpful.

JoshuaPostel

comment created time in 6 days

PR opened OpenNMT/OpenNMT-tf

Add Horovod trainer

Closes #581.

+165 -55

0 comment

4 changed files

pr created time in 6 days

create branch guillaumekln/OpenNMT-tf

branch : horovod-v2

created branch time in 6 days

delete branch guillaumekln/addons

delete branch : sequence-loss-tf2.2-compat

delete time in 6 days

delete branch guillaumekln/CTranslate2

delete branch : disable-auto-fill

delete time in 6 days

push event OpenNMT/CTranslate2

Guillaume Klein

commit sha 8249b1ef364e728feb51fcbce7460d0e2a84db95

Remove fill operation in StorageView(Shape) constructor (#167)

2 reasons:

* initialized memory is not always needed, so this is a small optimization for these cases
* this matches the semantics of resize(Shape), which does not initialize memory


push time in 6 days

PR merged OpenNMT/CTranslate2

Remove fill operation in StorageView(Shape) constructor

2 reasons:

  • initialized memory is not always needed, so this is a small optimization for these cases
  • this matches the semantics of resize(Shape), which does not initialize memory
+9 -4

1 comment

4 changed files

guillaumekln

pr closed time in 6 days

pull request comment tensorflow/addons

Update SequenceLoss for TensorFlow 2.2 compatibility

I'll let you do what you think is best here.

I propose to just make a bug fix in this PR and try not to change the behavior. Maybe @pavithrasv has more recommendations on how to deal with custom loss reduction.

guillaumekln

comment created time in 6 days

push event guillaumekln/addons

Guillaume Klein

commit sha 474d758d23b512eac6f0d532f075fb726a4fb971

Fix format


push time in 6 days

push event guillaumekln/addons

Gabriel de Marmiesse

commit sha 30363eb56325ebfcf87198b84a63ef592a432abe

Test gather tree only in eager mode because it's a custom op. (#1367)


Gabriel de Marmiesse

commit sha bfbd388923336d2a79304e46f10bc26408ca5465

test gather_tree_from_array in eager mode only. (#1368)


Gabriel de Marmiesse

commit sha bebeba786f75c30ad1f7769e8f6f7f3454484fee

Test eos masking in eager mode only. (#1369)


Gabriel de Marmiesse

commit sha 2582dfcefc853ec4131c3ab190159daf119110af

Test gather tree in eager mode only. (#1370)


Gabriel de Marmiesse

commit sha 480a8ee3ba5ac37e205f51fb7662737bee6c5294

Use eager mode for a gather tree test because it's only a custom op. (#1372)


Gabriel de Marmiesse

commit sha 3b41cfe40c92942fb31e6cac01c9d7cc876951c8

run keras compatibility test with v2 behavior. (#1374)


Gabriel de Marmiesse

commit sha 711e7251d973f384a495712fb3a8787251e700a4

Test ambiguous order in eager and tf.function. (#1377)


Gabriel de Marmiesse

commit sha 7aac34c52fc8fa90a2581d3bfb086ee73ef5f61e

Fix the serialization bug of rectified adam. (#1375)

* Fix the serialization bug of rectified adam.
* Better error message.
* Update tensorflow_addons/optimizers/rectified_adam.py


Gabriel de Marmiesse

commit sha c9ccce405039ea0f33130a29432847e9174e5edf

Moved test_sequence_loss outside the run_all_in_graph_and_eager_mode. (#1378)

* Converted to pytest test.
* Moved test sequence loss outside the run_all_in_graph_and_eager_mode.
* More tests.


pkan2

commit sha f0cef018bc659590ce783147639b1797e0f36a01

Adding another maintainer into CODEOWNERS file (#1382)

* Improve with providing Nuclear Norm Constraint
* Update based on feedback
* Update Based on Feedback
* Solve Conflict
* Solve Conflict
* change format and fix conflict
* Update CODEOWNERS with adding another maintainer

Update CODEOWNERS with adding lokande-vishnu as another maintainer of /tensorflow_addons/optimizers/conditional_gradient.py.

Co-authored-by: gabrieldemarmiesse <gabrieldemarmiesse@gmail.com>


Gabriel de Marmiesse

commit sha 591a5976ea179e929bae6bbf2df8763f99ffc4d5

Moved test_sequence_loss outside run_all_in_graph_and_eager_mode. (#1384)


Gabriel de Marmiesse

commit sha 778b9229b19d2f00d28949dbc508f8ad7d41dc54

Migrated some tests out of run_all_in_graph_and_eager_mode in seq2seq (#1385)

* Migrated stuff to pytest.
* Migrated weighted sum reduction test to pytest.


Gabriel de Marmiesse

commit sha 0f680e2751560844906899ac8c20f94b252a5b6f

Moved code out of tf.test.TestCase in skip_gram_ops_test.py (#1347)


Gabriel de Marmiesse

commit sha bbcb6b9ab38d82591a740675429c3053fd1dd856

Moved method out of the run_all_in_graph_and_eager_mode in utils_test.py (#1357)


Gabriel de Marmiesse

commit sha 8f073def90780f6269502bb24a174e244e7e2da9

Used pytest only to test tanshrink in eager mode (#1346)


Gabriel de Marmiesse

commit sha b59f0e89ca09837808331be1eee8ae8df8eb9355

Removed the run_all_in_graph_and_eager_mode from the testsumreduction. (#1386)

* Removed the run_all_in_graph_and_eager_mode from the testsumreduction.
* Unused import.


Michael Reneer

commit sha 272e3797fc09dfabcb962d68cc55cdf8defba4d5

Update `setup.py` to include the Python3 programming language classifier. (#1387)


Gabriel de Marmiesse

commit sha 125d97de3754b44ec4e966e0853e64fd23fc5baf

Moving out of tf.test.TestCase in sparsemax_test.py (#1345)


Gabriel de Marmiesse

commit sha d4c240438abc3af3371d418ba1041d26820ca286

Removed test_utils.run_all_in_graph_and_eager_modes in netvlad_test.py (#1350)

* Use pytest only for netvlad.
* Used pytestmark.


Gabriel de Marmiesse

commit sha b170487848bade2e2856467c6e8f2b6ee3dc96e8

Removed run_all_in_graph_and_eager_mode in r_square_test.py (#1356)


push time in 6 days

issue comment OpenNMT/OpenNMT-tf

low GPU usage

TensorFlow 2.1 requires CUDA 10.1, not 10.2.

See the related TensorFlow documentation: https://www.tensorflow.org/install/gpu
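After installing the matching CUDA version, a quick way to confirm that TensorFlow actually sees the GPU is the following generic TF 2.x check (not taken from the original thread):

import tensorflow as tf

# An empty list here means TensorFlow runs on CPU only, which would
# explain low (or zero) GPU usage.
print(tf.config.list_physical_devices("GPU"))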

eveliao

comment created time in 7 days

pull request comment OpenNMT/CTranslate2

Remove fill operation in StorageView(Shape) constructor

CI is failing because of https://github.com/pypa/manylinux/issues/512

guillaumekln

comment created time in 7 days

PR opened OpenNMT/CTranslate2

Remove fill operation in StorageView(Shape) constructor

2 reasons:

  • initialized memory is not always needed, so this is a small optimization for these cases
  • this matches the semantics of resize(Shape), which does not initialize memory
+9 -4

0 comment

4 changed files

pr created time in 7 days

push event guillaumekln/CTranslate2

Guillaume Klein

commit sha 4ccf44ff6e5d48133964f5edff093669e683b7a2

Remove fill operation in StorageView(Shape) constructor

2 reasons:

* initialized memory is not always needed, so this is a small optimization for these cases
* this matches the semantics of resize(Shape), which does not initialize memory


push time in 7 days

create branch guillaumekln/CTranslate2

branch : disable-auto-fill

created branch time in 7 days

delete branch guillaumekln/CTranslate2

delete branch : fix-invalid-num-hypotheses

delete time in 7 days

push event OpenNMT/papers

Sasha Rush

commit sha 811a486bcf0a9deb26584b5f5413525918b558b1

.


Sasha Rush

commit sha 4bda9b1bb15c23762dbecf38187b754ab334dbb0

.


Sasha Rush

commit sha a663725014791908f16c5178ba566cb73da314ba

.


Sasha Rush

commit sha 9dabce3b94b5d4c0262c93248a15a3fda973b188

.


Sasha Rush

commit sha 2bc5349baa1f7b60392c3c9629f093b92b777de7

.


Sasha Rush

commit sha 431e034848aeeeff77b6414722b9d3f6b15ba508

.


Sasha Rush

commit sha 632bbef07f5cf2fa53bbcf29aaf396938c6fccc7

update report


Sasha Rush

commit sha f495bf37dffafa1fac45b3f3d5303d0b4659ac04

report


Sasha Rush

commit sha 91a9312bd030e1905cd4d127c6d99e424e830f10

report


Sasha Rush

commit sha 152477bab6b57bba6c01f7efb5d1f5a6784d6ffd

report


Jean A. Senellart

commit sha a7b414f8937ae41b0f2ff90c028249b9412d4c07

adding more details on parallel, tokenization and first results on romance language model


Sasha Rush

commit sha 825fd67ad4043b7e415d3db1f3748caa1c94069f

.


Sasha Rush

commit sha f77ddde1d81590e3cc2c592890847d651d8ece9b

.


Jean A. Senellart

commit sha 0d28a396c43a9e51c63f9081c6f64908630dc89d

shorten a bit, complete mw experiments


Jean A. Senellart

commit sha c396358d070b5fdb0774722a6b1fa4a94682596b

edit reference


Jean A. Senellart

commit sha a39b9dfa9771ef3b12270bd2ddffbb069fe03e57

add Nematus results


Jean A. Senellart

commit sha f2e94782e1e67b9180d2af5699fd10cb006e3747

add batch size


Jean A. Senellart

commit sha 7b3d6fa71f0809c6fee40e27f7a851059f38217c

fix reference


Jean A. Senellart

commit sha a26627bd4ee186e51e846fe7bdd5fdc2d394f984

changed sent/sec > words/sec


Sasha Rush

commit sha 80ea3b4cbae6f4b3af885a5883f4719b044e275d

.


push time in 7 days

push event OpenNMT/CTranslate2

Guillaume Klein

commit sha d58b45f58e7de7987fae8379a9438a1d72546672

Fix crash on invalid num_hypotheses value (#166)


push time in 7 days

create branch guillaumekln/CTranslate2

branch : fix-invalid-num-hypotheses

created branch time in 7 days

push event guillaumekln/CTranslate2

Guillaume Klein

commit sha b8070175258b098ae4d09d23681e108de2d6bd88

Add decoding examples


push time in 7 days

create branch guillaumekln/CTranslate2

branch : decoding-examples

created branch time in 7 days

push event tensorflow/addons

Gabriel de Marmiesse

commit sha b59f0e89ca09837808331be1eee8ae8df8eb9355

Removed the run_all_in_graph_and_eager_mode from the testsumreduction. (#1386)

* Removed the run_all_in_graph_and_eager_mode from the testsumreduction.
* Unused import.


push time in 8 days

push event OpenNMT/opennmt.github.io

Guillaume Klein

commit sha b727b3d7f705fe6e34f4094db56cbdb123b190e6

Fix table cell formatting

view details

push time in 8 days

push event OpenNMT/opennmt.github.io

Guillaume Klein

commit sha c42126bc6d9d893a222c9a61ecf260c3b87f4214

Add V2 pretrained SavedModel

view details

push time in 8 days

push event tensorflow/addons

Gabriel de Marmiesse

commit sha 778b9229b19d2f00d28949dbc508f8ad7d41dc54

Migrated some tests out of run_all_in_graph_and_eager_mode in seq2seq (#1385)

* Migrated stuff to pytest.
* Migrated weighted sum reduction test to pytest.


push time in 8 days

push event tensorflow/addons

Gabriel de Marmiesse

commit sha 591a5976ea179e929bae6bbf2df8763f99ffc4d5

Moved test_sequence_loss outside run_all_in_graph_and_eager_mode. (#1384)


push time in 8 days

pull request comment tensorflow/addons

Update SequenceLoss for TensorFlow 2.2 compatibility

I guess the easiest solution is to do a version check before deleting the reduction.

I suppose the project will shortly move to require TensorFlow 2.2, right? Then we could merge this PR as is (after a rebase).

But having a custom reduction prevents us from using a distributed strategy (from what I remember). Would it be possible to use keras' built-in reductions for some use cases (not all)?

I'm not sure it prevents the use of distribution strategies, but it requires the user to carefully scale the loss (or gradients) based on the global batch size.

We could indeed rely on the built-in reduction for some combinations. But these combinations are actually the ones that can be implemented with tf.keras.losses.{Sparse}CategoricalCrossentropy.
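For reference, this is how a loss with a disabled built-in reduction is typically scaled by the global batch size under a distribution strategy (a generic TF 2.x sketch, not code from this PR):

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
global_batch_size = 64

with strategy.scope():
  loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(
      from_logits=True, reduction=tf.keras.losses.Reduction.NONE)

def compute_loss(labels, logits):
  per_example_loss = loss_fn(labels, logits)  # shape: [per_replica_batch]
  # Average by the *global* batch size, not the per-replica size, so that
  # gradients stay correctly scaled when summed across replicas.
  return tf.nn.compute_average_loss(
      per_example_loss, global_batch_size=global_batch_size)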

guillaumekln

comment created time in 8 days

started NVIDIA/DeepLearningExamples

started time in 8 days

created tag OpenNMT/nmt-wizard-docker

tag v2.3.0

Dockerized NMT frameworks for nmt-wizard

created time in 8 days

delete tag OpenNMT/nmt-wizard-docker

delete tag : v1.24.0

delete time in 8 days

created tag OpenNMT/nmt-wizard-docker

tag v1.24.0

Dockerized NMT frameworks for nmt-wizard

created time in 8 days

push event tensorflow/addons

Gabriel de Marmiesse

commit sha c9ccce405039ea0f33130a29432847e9174e5edf

Moved test_sequence_loss outside the run_all_in_graph_and_eager_mode. (#1378)

* Converted to pytest test.
* Moved test sequence loss outside the run_all_in_graph_and_eager_mode.
* More tests.


push time in 8 days

PR merged tensorflow/addons

Moved test_sequence_loss outside the run_all_in_graph_and_eager_mode.

Follow-up on #1374

cc @guillaumekln

+47 -99

0 comment

1 changed file

gabrieldemarmiesse

pr closed time in 8 days

commit comment event

push event tensorflow/addons

Gabriel de Marmiesse

commit sha 711e7251d973f384a495712fb3a8787251e700a4

Test ambiguous order in eager and tf.function. (#1377)


push time in 9 days

PR merged tensorflow/addons

Test ambiguous order in eager mode.

Follow-up on #1374

@guillaumekln

+12 -11

0 comment

1 changed file

gabrieldemarmiesse

pr closed time in 9 days

issue closed OpenNMT/papers

Getting an empty gzipped phrase table

Hi @jsenellart @srush, what should the N value passed to Docker be when creating the phrase table? Both omitting it and passing a huge value such as 128 (assuming a phrase contains 10 words of around 10 characters each) output an empty file after around 40+ minutes on a 16-core machine.

Input: training files containing SentencePiece-tokenized sentences.

While processing: [screenshot]

Ref: https://github.com/OpenNMT/papers/tree/master/WNMT2018/vmap#building-phrase-table

closed time in 9 days

gvskalyan

issue comment OpenNMT/papers

Getting an empty gzipped phrase table

For reference, this was discussed on the forum: https://forum.opennmt.net/t/get-vmap-from-the-corpus-to-be-used-in-ctranslate2/3573

There were some issues with the training data (mostly empty lines). This is fixed by https://github.com/OpenNMT/papers/commit/c44b9ffdb561da7e963f835cb90aa5f56e20c6f4, which adds basic filtering.
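Such a filter can be as simple as dropping sentence pairs where either side is empty (an illustrative sketch with hypothetical file names, not the code from the referenced commit):

with open("train.en") as src, open("train.es") as tgt, \
     open("filtered.en", "w") as src_out, open("filtered.es", "w") as tgt_out:
  for src_line, tgt_line in zip(src, tgt):
    # Keep the pair only if both sides contain at least one token.
    if src_line.strip() and tgt_line.strip():
      src_out.write(src_line)
      tgt_out.write(tgt_line)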

gvskalyan

comment created time in 9 days

release OpenNMT/CTranslate2

v1.9.0

released time in 9 days

push event tensorflow/addons

Gabriel de Marmiesse

commit sha 3b41cfe40c92942fb31e6cac01c9d7cc876951c8

run keras compatibility test with v2 behavior. (#1374)


push time in 9 days

PR merged tensorflow/addons

run keras compatibility test for sequence loss with v2 behavior.

It runs in full eager mode and with tf.function enabled by default for Keras layers.

@guillaumekln that might be useful for #1371

get_test_data is here to replace the setup method. I'll convert the rest of the tests in this file to pytest in another pull request.

np.testing.assert_allclose(calculated_loss, expected_loss, rtol=1e-6, atol=1e-6)

np.testing.assert_allclose is stricter than the TF counterpart self.assertAllClose, so I took the rtol and atol from self.assertAllClose to keep the same behavior. Otherwise the tests were not passing.

+79 -53

0 comment

1 changed file

gabrieldemarmiesse

pr closed time in 9 days

Pull request review comment tensorflow/addons

run keras compatibility test for sequence loss with v2 behavior.

 from tensorflow_addons.utils import test_utils
 
+def get_test_data():

Ok!

gabrieldemarmiesse

comment created time in 9 days

Pull request review comment tensorflow/addons

run keras compatibility test for sequence loss with v2 behavior.

 from tensorflow_addons.utils import test_utils
 
+def get_test_data():

You could already factorize the data preparation with the other class. Do you plan to do that in the next PR instead?

gabrieldemarmiesse

comment created time in 9 days

created tag OpenNMT/CTranslate2

tag v1.9.0

Optimized inference engine for OpenNMT models

created time in 9 days

push event OpenNMT/CTranslate2

Guillaume Klein

commit sha 22d00307142f5a306581f434f638dbb8dda4fe92

Bump version to 1.9.0


push time in 9 days

push event OpenNMT/CTranslate2

Guillaume Klein

commit sha 27b44002362e90e1f176de7f5ea0b7508640b64c

Remove unused include


Guillaume Klein

commit sha 866fd455c80d04c9ffd42546091afd9f6156385d

Update CHANGELOG.md


push time in 9 days

push event OpenNMT/nmt-wizard-docker

Guillaume Klein

commit sha 1ec7ab4abe386bf9cc5b86e94c481add323fd6be

Update OpenNMT-tf to 2.8.1


push time in 9 days

delete branch guillaumekln/CTranslate2

delete branch : file-translation-stats

delete time in 9 days

push event OpenNMT/CTranslate2

Guillaume Klein

commit sha 67f10585084bc904fe8d7c6174d4437e85ef7785

Return more stats from file translation APIs (#164)


push time in 9 days

create branch guillaumekln/CTranslate2

branch : file-translation-stats

created branch time in 9 days

release OpenNMT/OpenNMT-tf

v2.8.1

released time in 9 days

push event OpenNMT/CTranslate2

Guillaume Klein

commit sha d5c2f51864392892ac974710afee0bce0f40f7a5

Harmonize indentation in options list


Guillaume Klein

commit sha 6197f73b8806e67814f2878a917a9a075d9f6949

Silence compilation warning for missing return


push time in 9 days

push event OpenNMT/OpenNMT-tf

Doctr (Travis CI)

commit sha 9353e0604db2a977cd54df36d9239802f97948d8

Update docs after building Travis build 1673 of OpenNMT/OpenNMT-tf

The docs were built from the tag 'v2.8.1' against the commit 5db46ec0d6f1f6b93f422d2f7e7a454620833796.

The Travis build that generated this commit is at https://travis-ci.org/OpenNMT/OpenNMT-tf/jobs/666300158.

The doctr command that was run is /home/travis/virtualenv/python3.6.7/bin/doctr deploy --build-tags --branch-whitelist --built-docs docs/build .


push time in 9 days
