Tzu-Wei Sung (WindQAQ)
Apple, California
https://www.linkedin.com/in/tzu-wei-sung/
SWE Intern @apple | Seeking a 2021 full-time SWE position.

tensorflow/addons 1054

Useful extra functionality for TensorFlow 2.x maintained by SIG-addons

Alexander-H-Liu/End-to-end-ASR-Pytorch 715

This is an open source project (formerly named Listen, Attend and Spell - PyTorch Implementation) for end-to-end ASR implemented with PyTorch, the well-known deep learning toolkit.

WindQAQ/tf-recsys 82

tf-recsys contains collaborative filtering (CF) models based on the famous SVD and SVD++ algorithms. Both are implemented in TensorFlow in order to utilize GPU acceleration.
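For reference, the prediction rule behind the SVD model named above is the standard matrix-factorization one (this is the textbook formulation, not necessarily the exact code in tf-recsys):

$$\hat{r}_{ui} = \mu + b_u + b_i + q_i^\top p_u$$

where $\mu$ is the global rating mean, $b_u$ and $b_i$ are the user and item biases, and $p_u$, $q_i$ are the latent factor vectors. SVD++ additionally folds in implicit feedback from the set $N(u)$ of items the user has interacted with:

$$\hat{r}_{ui} = \mu + b_u + b_i + q_i^\top \Big( p_u + |N(u)|^{-1/2} \sum_{j \in N(u)} y_j \Big)$$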

WindQAQ/listen-attend-and-spell 77

TensorFlow implementation of "Listen, Attend and Spell" by William Chan et al. This project uses TensorFlow's input pipeline and Estimator API, which makes training and evaluation truly end-to-end.

WindQAQ/MPM 50

GPU simulation using the Material Point Method, with rendering.

WindQAQ/ML2017 30

NTUEE Machine Learning, 2017 Spring

yistLin/H264-Encoder 14

Implementation of a subset of the CBP of an H.264 encoder

WindQAQ/tensorflow-wavenet 9

TensorFlow implementation of the WaveNet network.

yistLin/LSTM-pinyin2ch 6

seq2seq Pinyin-to-Chinese translator

push event tensorflow/addons

bhack

commit sha 86e8fe8370985ccaecdf0bb024ce01419063ad70

Add support to releases list (#2200) * Add support to releases list * Add stdout usage

view details

push time in 10 days

PR merged tensorflow/addons

Reviewers
Add support to releases list cla: yes

Add support to update releases list

+24 -5

1 comment

1 changed file

bhack

pr closed time in 10 days

PullRequestReviewEvent

Pull request review comment tensorflow/addons

Add support to releases list

 # limitations under the License.
 # ==============================================================================
-# usage: bash tools/update_release_version.sh <release_number>
+# usage: bash tools/update_release_version.sh <list_of_release_numbers>
+# e.g. bash tools/update_release_version.sh 2.3.0 2.3.1

How about adding a help message here?

if [ $# -lt 1 ]; then
	echo "Usage: bash tools/update_release_version.sh <list_of_release_numbers>"
	echo "e.g. bash tools/update_release_version.sh 2.3.0 2.3.1"
	exit 1
fi
bhack

comment created time in 10 days

PullRequestReviewEvent
PullRequestReviewEvent

push event tensorflow/addons

Tzu-Wei Sung

commit sha 40c3e2a635f8d510a710fe0ee60bbe3b939a73b6

Deprecate set env (#2199) test Deprecate set env

view details

push time in 11 days

PR merged tensorflow/addons

Reviewers
Deprecate set env cla: yes github

Description

Fix warning in https://github.com/tensorflow/addons/actions/runs/299787623. https://github.blog/changelog/2020-10-01-github-actions-deprecating-set-env-and-add-path-commands/

Type of change

Checklist:

  • [ ] I've properly formatted my code according to the guidelines
    • [ ] By running Black + Flake8
    • [ ] By running pre-commit hooks
  • [ ] This PR addresses an already submitted issue for TensorFlow Addons
  • [ ] I have made corresponding changes to the documentation
  • [ ] I have added tests that prove my fix is effective or that my feature works
  • [ ] This PR contains modifications to C++ custom-ops

How Has This Been Tested?

If you're adding a bugfix or new feature, please describe the tests that you ran to verify your changes.

+1 -1

0 comments

1 changed file

WindQAQ

pr closed time in 11 days

push event WindQAQ/addons

Tzu-Wei Sung

commit sha cb93d43c6342593542c72053a72da13393ec0c3d

Deprecate set env test Deprecate set env

view details

push time in 12 days

push event WindQAQ/addons

Tzu-Wei Sung

commit sha 55050912229a514fb2408417f748cb1946640af0

test

view details

push time in 12 days

PR opened tensorflow/addons

Deprecate set env

Description

Fix warning in https://github.com/tensorflow/addons/actions/runs/299787623. https://github.blog/changelog/2020-10-01-github-actions-deprecating-set-env-and-add-path-commands/

Type of change

Checklist:

  • [ ] I've properly formatted my code according to the guidelines
    • [ ] By running Black + Flake8
    • [ ] By running pre-commit hooks
  • [ ] This PR addresses an already submitted issue for TensorFlow Addons
  • [ ] I have made corresponding changes to the documentation
  • [ ] I have added tests that prove my fix is effective or that my feature works
  • [ ] This PR contains modifications to C++ custom-ops

How Has This Been Tested?

If you're adding a bugfix or new feature, please describe the tests that you ran to verify your changes.

+2 -2

0 comments

1 changed file

pr created time in 12 days

create branch WindQAQ/addons

branch : github/deprecate-set-env

created branch time in 12 days

push event tensorflow/addons

nataliyah123

commit sha 927f66727aa593e25370a52403c9fd27f8de4c59

#2066 seq2seq.beamsearch (#2198) * beamsearch with attention wrapper * beamsearch with attention wrapperv1 * flake suggestions * Update attention_wrapper.py * flake suggestions2 * changes * Apply suggestions from code review * Update tensorflow_addons/seq2seq/attention_wrapper.py * Update tensorflow_addons/seq2seq/attention_wrapper.py * Apply suggestions from code review * Apply suggestions from code review * Apply suggestions from code review * Update tensorflow_addons/seq2seq/attention_wrapper.py Co-authored-by: Tzu-Wei Sung <windqaq@gmail.com>

view details

push time in 12 days

PR merged tensorflow/addons

Reviewers
#2066 seq2seq.beamsearch cla: yes seq2seq

part of #2066

+19 -22

0 comment

1 changed file

nataliyah123

pr closed time in 12 days

PullRequestReviewEvent

push event nataliyah123/addons

Tzu-Wei Sung

commit sha feca7862d861b61f68a5f4ac28da2cd03823ded2

Update tensorflow_addons/seq2seq/attention_wrapper.py

view details

push time in 12 days

Pull request review comment tensorflow/addons

#2066 seq2seq.beamsearch

 def __init__(
          An example:

-        ```
-        tiled_encoder_outputs = tfa.seq2seq.tile_batch(
-            encoder_outputs, multiplier=beam_width)
-        tiled_encoder_final_state = tfa.seq2seq.tile_batch(
-            encoder_final_state, multiplier=beam_width)
-        tiled_sequence_length = tfa.seq2seq.tile_batch(
-            sequence_length, multiplier=beam_width)
-        attention_mechanism = MyFavoriteAttentionMechanism(
-            num_units=attention_depth,
-            memory=tiled_inputs,
-            memory_sequence_length=tiled_sequence_length)
-        attention_cell = AttentionWrapper(cell, attention_mechanism, ...)
-        decoder_initial_state = attention_cell.get_initial_state(
-            batch_size=true_batch_size * beam_width, dtype=dtype)
-        decoder_initial_state = decoder_initial_state.clone(
-            cell_state=tiled_encoder_final_state)
-        ```
+        >>> batch_size = 1
+        >>> beam_width = 5
+        >>> sequence_length = [5]
        >>> sequence_length = tf.convert_to_tensor([5])
nataliyah123

comment created time in 12 days

PullRequestReviewEvent

push event nataliyah123/addons

Tzu-Wei Sung

commit sha 8c3266bacd1d2626da46ca36d14ef1dc1bae41b6

Apply suggestions from code review

view details

push time in 12 days

Pull request review comment tensorflow/addons

#2066 seq2seq.beamsearch

 def __init__(
          An example:

-        ```
-        tiled_encoder_outputs = tfa.seq2seq.tile_batch(
-            encoder_outputs, multiplier=beam_width)
-        tiled_encoder_final_state = tfa.seq2seq.tile_batch(
-            encoder_final_state, multiplier=beam_width)
-        tiled_sequence_length = tfa.seq2seq.tile_batch(
-            sequence_length, multiplier=beam_width)
-        attention_mechanism = MyFavoriteAttentionMechanism(
-            num_units=attention_depth,
-            memory=tiled_inputs,
-            memory_sequence_length=tiled_sequence_length)
-        attention_cell = AttentionWrapper(cell, attention_mechanism, ...)
-        decoder_initial_state = attention_cell.get_initial_state(
-            batch_size=true_batch_size * beam_width, dtype=dtype)
-        decoder_initial_state = decoder_initial_state.clone(
-            cell_state=tiled_encoder_final_state)
-        ```
+        >>> batch_size = 1
+        >>> beam_width = 5
+        >>> sequence_length = [5]
+        >>> encoder_outputs = tf.random.uniform(shape=(batch_size, 5, 10))
+        >>> encoder_final_state = [tf.zeros((1, 10)), tf.zeros((1, 10))]
        >>> encoder_final_state = [tf.zeros((batch_size, 10)), tf.zeros((batch_size, 10))]
nataliyah123

comment created time in 12 days

PullRequestReviewEvent

Pull request review comment tensorflow/addons

#2066 seq2seq.beamsearch

 def clone(self, **kwargs):
          Example:

-        ```python
-        initial_state = attention_wrapper.get_initial_state(
-            batch_size=..., dtype=...)
-        initial_state = initial_state.clone(cell_state=encoder_state)
-        ```
+        >>> batch_size = 1
+        >>> memory = tf.random.normal(shape=[batch_size, 3, 100])
+        >>> encoder_state = [tf.zeros((1, 100)), tf.zeros((1, 100))]
        >>> encoder_state = [tf.zeros((batch_size, 100)), tf.zeros((batch_size, 100))]
nataliyah123

comment created time in 12 days

PullRequestReviewEvent

push event nataliyah123/addons

Tzu-Wei Sung

commit sha e4ab5ca36bcb90389f38ac7fa5da35f743ca6ae2

Apply suggestions from code review

view details

push time in 12 days

Pull request review comment tensorflow/addons

#2066 seq2seq.beamsearch

 def __init__(
          An example:

-        ```
-        tiled_encoder_outputs = tfa.seq2seq.tile_batch(
-            encoder_outputs, multiplier=beam_width)
-        tiled_encoder_final_state = tfa.seq2seq.tile_batch(
-            encoder_final_state, multiplier=beam_width)
-        tiled_sequence_length = tfa.seq2seq.tile_batch(
-            sequence_length, multiplier=beam_width)
-        attention_mechanism = MyFavoriteAttentionMechanism(
-            num_units=attention_depth,
-            memory=tiled_inputs,
-            memory_sequence_length=tiled_sequence_length)
-        attention_cell = AttentionWrapper(cell, attention_mechanism, ...)
-        decoder_initial_state = attention_cell.get_initial_state(
-            batch_size=true_batch_size * beam_width, dtype=dtype)
-        decoder_initial_state = decoder_initial_state.clone(
-            cell_state=tiled_encoder_final_state)
-        ```
+        >>> batch_size = 1
+        >>> beam_width = 5
+        >>> sequence_length = [5]
+        >>> encoder_outputs = tf.random.uniform(shape=(batch_size, 5, 10))
+        >>> encoder_final_states = [tf.zeros((1, 10)), tf.zeros((1, 10))]
        >>> encoder_final_state = [tf.zeros((1, 10)), tf.zeros((1, 10))]
nataliyah123

comment created time in 12 days

PullRequestReviewEvent

push event nataliyah123/addons

Tzu-Wei Sung

commit sha 1324efdc7d4f3c71849aaf5d6e86ed83e06953ab

Apply suggestions from code review

view details

push time in 12 days

Pull request review comment tensorflow/addons

#2066 seq2seq.beamsearch

 def clone(self, **kwargs):
          Example:

-        ```python
-        initial_state = attention_wrapper.get_initial_state(
-            batch_size=..., dtype=...)
-        initial_state = initial_state.clone(cell_state=encoder_state)
-        ```
+        >>> batch_size = 1
+        >>> memory = tf.random.normal(shape=[1, 3, 100])
        >>> memory = tf.random.normal(shape=[batch_size, 3, 100])
nataliyah123

comment created time in 12 days

Pull request review comment tensorflow/addons

#2066 seq2seq.beamsearch

 def clone(self, **kwargs):
          Example:

-        ```python
-        initial_state = attention_wrapper.get_initial_state(
-            batch_size=..., dtype=...)
-        initial_state = initial_state.clone(cell_state=encoder_state)
-        ```
+        >>> batch_size = 1
+        >>> memory = tf.random.normal(shape=[1, 3, 100])
+        >>> encoder_state = [tf.zeros((1, 100)), tf.zeros((1, 100))]
+        >>> decoder_rnn_cell = tf.keras.layers.LSTMCell(100)
nataliyah123

comment created time in 12 days

Pull request review comment tensorflow/addons

#2066 seq2seq.beamsearch

 def clone(self, **kwargs):
          Example:

-        ```python
-        initial_state = attention_wrapper.get_initial_state(
-            batch_size=..., dtype=...)
-        initial_state = initial_state.clone(cell_state=encoder_state)
-        ```
+        >>> batch_size = 1
+        >>> memory = tf.random.normal(shape=[1, 3, 100])
+        >>> encoder_state = [tf.zeros((1, 100)), tf.zeros((1, 100))]
+        >>> decoder_rnn_cell = tf.keras.layers.LSTMCell(100)
+        >>> attention_mechanism = tfa.seq2seq.LuongAttention(100, memory=memory, memory_sequence_length=[3] * batch_size)
+        >>> decoder_rnn_cell = tfa.seq2seq.AttentionWrapper(decoder_rnn_cell, attention_mechanism, attention_layer_size=10)
+        >>> decoder_initial_state = decoder_rnn_cell.get_initial_state(batch_size=batch_size, dtype=tf.float32)
        >>> decoder_initial_state = attention_cell.get_initial_state(batch_size=batch_size, dtype=tf.float32)
nataliyah123

comment created time in 12 days

Pull request review comment tensorflow/addons

#2066 seq2seq.beamsearch

 def clone(self, **kwargs):
          Example:

-        ```python
-        initial_state = attention_wrapper.get_initial_state(
-            batch_size=..., dtype=...)
-        initial_state = initial_state.clone(cell_state=encoder_state)
-        ```
+        >>> batch_size = 1
+        >>> memory = tf.random.normal(shape=[1, 3, 100])
+        >>> encoder_state = [tf.zeros((1, 100)), tf.zeros((1, 100))]
+        >>> decoder_rnn_cell = tf.keras.layers.LSTMCell(100)
+        >>> attention_mechanism = tfa.seq2seq.LuongAttention(100, memory=memory, memory_sequence_length=[3] * batch_size)
+        >>> decoder_rnn_cell = tfa.seq2seq.AttentionWrapper(decoder_rnn_cell, attention_mechanism, attention_layer_size=10)
        >>> attention_cell = tfa.seq2seq.AttentionWrapper(tf.keras.layers.LSTMCell(100), attention_mechanism, attention_layer_size=10)
nataliyah123

comment created time in 12 days

PullRequestReviewEvent
PullRequestReviewEvent

push event nataliyah123/addons

Tzu-Wei Sung

commit sha 6881791e4036fa43004f1e8475577feb4b99b262

Update tensorflow_addons/seq2seq/attention_wrapper.py

view details

push time in 12 days

Pull request review comment tensorflow/addons

#2066 seq2seq.beamsearch

 def __init__(
          An example:

-        ```
-        tiled_encoder_outputs = tfa.seq2seq.tile_batch(
-            encoder_outputs, multiplier=beam_width)
-        tiled_encoder_final_state = tfa.seq2seq.tile_batch(
-            encoder_final_state, multiplier=beam_width)
-        tiled_sequence_length = tfa.seq2seq.tile_batch(
-            sequence_length, multiplier=beam_width)
-        attention_mechanism = MyFavoriteAttentionMechanism(
-            num_units=attention_depth,
-            memory=tiled_inputs,
-            memory_sequence_length=tiled_sequence_length)
-        attention_cell = AttentionWrapper(cell, attention_mechanism, ...)
-        decoder_initial_state = attention_cell.get_initial_state(
-            batch_size=true_batch_size * beam_width, dtype=dtype)
-        decoder_initial_state = decoder_initial_state.clone(
-            cell_state=tiled_encoder_final_state)
-        ```
+        >>> batch_size = 1
+        >>> beam_width = 5
+        >>> sequence_length = [5]
+        >>> encoder_outputs = tf.random.uniform(shape=(batch_size, 5, 10))
+        >>> encoder_final_states = [tf.zeros((1, 10)), tf.zeros((1, 10))]
+        >>> tiled_encoder_outputs = tfa.seq2seq.tile_batch(encoder_outputs, multiplier=beam_width)
+        >>> tiled_encoder_final_state = tfa.seq2seq.tile_batch(encoder_final_state, multiplier=beam_width)
+        >>> tiled_sequence_length = tfa.seq2seq.tile_batch(sequence_length, multiplier=beam_width)
+        >>> attention_mechanism = tfa.seq2seq.BahdanauAttention(10, memory=tiled_encoder_outputs, memory_sequence_length=tiled_sequence_length)
+        >>> attention_cell = tfa.seq2seq.AttentionWrapper(tf.keras.layers.LSTMCell(10), attention_mechanism)
+        >>> decoder_initial_state = attention_cell.get_initial_state(batch_size=true_batch_size * beam_width, dtype=tf.float32)
        >>> decoder_initial_state = attention_cell.get_initial_state(batch_size=batch_size * beam_width, dtype=tf.float32)
nataliyah123

comment created time in 12 days

PullRequestReviewEvent

push event nataliyah123/addons

Tzu-Wei Sung

commit sha 782c145cec591a806aa0d5c52ddc22a49dc4355d

Update tensorflow_addons/seq2seq/attention_wrapper.py

view details

push time in 12 days

Pull request review comment tensorflow/addons

#2066 seq2seq.beamsearch

 def __init__(
          An example:

-        ```
-        tiled_encoder_outputs = tfa.seq2seq.tile_batch(
-            encoder_outputs, multiplier=beam_width)
-        tiled_encoder_final_state = tfa.seq2seq.tile_batch(
-            encoder_final_state, multiplier=beam_width)
-        tiled_sequence_length = tfa.seq2seq.tile_batch(
-            sequence_length, multiplier=beam_width)
-        attention_mechanism = MyFavoriteAttentionMechanism(
-            num_units=attention_depth,
-            memory=tiled_inputs,
-            memory_sequence_length=tiled_sequence_length)
-        attention_cell = AttentionWrapper(cell, attention_mechanism, ...)
-        decoder_initial_state = attention_cell.get_initial_state(
-            batch_size=true_batch_size * beam_width, dtype=dtype)
-        decoder_initial_state = decoder_initial_state.clone(
-            cell_state=tiled_encoder_final_state)
-        ```
+        >>> vocab_size = 10
+        >>> max_time = 16
+        >>> batch_size = 2
+        >>> emb_dim = 20
+        >>> cell_dim = 5
+        >>> attention_dim = cell_dim
+        >>> beam_width = 3
+        >>> hidden_size = 7
+
+        >>> inputs = tf.random.uniform([batch_size, max_time, emb_dim], maxval=1., dtype=tf.float32)
+        >>> embedding = tf.random.uniform([vocab_size, emb_dim], maxval=1., dtype=tf.float32)
+
+        # make encoder
+
+        >>> lstm = tf.keras.layers.LSTMCell(hidden_size)
+        >>> lstmW = tf.keras.layers.RNN(lstm, return_sequences=True, return_state=True)
+        >>> whole_encoder_seq_output, final_encoder_state, final_carry_state = lstmW(inputs)
+        >>> #print("final_state_ibia", final_encoder_state)
+        >>> #when beamsearch is used
+        >>> tiled_encoder_output = tfa.seq2seq.tile_batch(whole_encoder_seq_output, multiplier=beam_width)
+        >>> tiled_encoder_final_state = tfa.seq2seq.tile_batch(final_encoder_state, multiplier=beam_width)
+        >>> encoder_initial_state = lstmW.get_initial_state(inputs)
+        >>> tiled_encoder_initial_state = tfa.seq2seq.tile_batch(encoder_initial_state, multiplier=beam_width)
+
+        #make decoder
+
+        >>> memory = tiled_encoder_output
+
+        #attention wrapper
+
+        >>> attn_cells = tfa.seq2seq.AttentionWrapper(
+        ... lstm,
+        ... attention_mechanism=tfa.seq2seq.BahdanauAttention(units=hidden_size, memory=memory, memory_sequence_length=batch_size*beam_width),
+        ... attention_layer_size=hidden_size,
+        ... initial_cell_state=tiled_encoder_final_state
+        ... )
+        >>> decoder_initial_state= attn_cells.get_initial_state(batch_size=batch_size*beam_width, dtype= tf.float32)
+        >>> decoder_initial_state = decoder_initial_state.clone(cell_state=tiled_encoder_final_state)
+
+        #make predictions
+
+        >>> decoder = tfa.seq2seq.BeamSearchDecoder(
+        ... cell=attn_cells,
+        ... beam_width=batch_size*beam_width,
+        ... output_layer=tf.keras.layers.Dense(hidden_size, name='output_proj')
+        ...   ) #second structure decoder
+
+        >>> start_tokens = tf.zeros((batch_size,), dtype=tf.int32)
+        >>> decoder.initialize(embedding=embedding, start_tokens= start_tokens ,end_token= 1, initial_state=decoder_initial_state)#first structure decoder_initial_state
+        >>> #final_outputs, final_state, final_sequence_lengths = tfa.seq2seq.dynamic_decode(decoder=decoder, impute_finished=False, maximum_iterations= 100)
        >>> batch_size = 1
        >>> beam_width = 5
        >>> sequence_length = [5]
        >>> encoder_outputs = tf.random.uniform(shape=(batch_size, 5, 10))
        >>> encoder_final_states = [tf.zeros((1, 10)), tf.zeros((1, 10))]
        >>> tiled_encoder_outputs = tfa.seq2seq.tile_batch(encoder_outputs, multiplier=beam_width)
        >>> tiled_encoder_final_state = tfa.seq2seq.tile_batch(encoder_final_state, multiplier=beam_width)
        >>> tiled_sequence_length = tfa.seq2seq.tile_batch(sequence_length, multiplier=beam_width)
        >>> attention_mechanism = tfa.seq2seq.BahdanauAttention(10, memory=tiled_encoder_outputs, memory_sequence_length=tiled_sequence_length)
        >>> attention_cell = tfa.seq2seq.AttentionWrapper(tf.keras.layers.LSTMCell(10), attention_mechanism)
        >>> decoder_initial_state = attention_cell.get_initial_state(batch_size=true_batch_size * beam_width, dtype=tf.float32)
        >>> decoder_initial_state = decoder_initial_state.clone(cell_state=tiled_encoder_final_state)
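A quick illustration of what tfa.seq2seq.tile_batch does in the suggested example above: it repeats each batch entry multiplier times along the batch dimension, turning tensors shaped [batch_size, ...] into [batch_size * beam_width, ...] so the encoder outputs match the beam-expanded decoder. A toy sketch (numbers are illustrative only):

import tensorflow as tf
import tensorflow_addons as tfa

x = tf.constant([[1.0], [2.0]])                  # shape [batch_size=2, 1]
tiled = tfa.seq2seq.tile_batch(x, multiplier=3)  # shape [6, 1]
print(tiled.shape)                               # (6, 1)
print(tiled.numpy().ravel())                     # [1. 1. 1. 2. 2. 2.] -- each entry repeated 3 times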
nataliyah123

comment created time in 12 days

PullRequestReviewEvent
PullRequestReviewEvent

push event nataliyah123/addons

Tzu-Wei Sung

commit sha 1041be126033bfb85d075879d156b2f9cdf6977c

Apply suggestions from code review

view details

push time in 12 days

Pull request review comment tensorflow/addons

#2066 seq2seq.beamsearch

 def clone(self, **kwargs):
          Example:

-        ```python
-        initial_state = attention_wrapper.get_initial_state(
-            batch_size=..., dtype=...)
-        initial_state = initial_state.clone(cell_state=encoder_state)
-        ```
+        >>> BATCH_SIZE = 1
        >>> batch_size = 1
nataliyah123

comment created time in 12 days

Pull request review comment tensorflow/addons

#2066 seq2seq.beamsearch

 def clone(self, **kwargs):
          Example:

-        ```python
-        initial_state = attention_wrapper.get_initial_state(
-            batch_size=..., dtype=...)
-        initial_state = initial_state.clone(cell_state=encoder_state)
-        ```
+        >>> BATCH_SIZE = 1
+        >>> memory = tf.random.normal(shape =[1,3, 100])
+        >>> encoder_state = [tf.zeros((1, 100)), tf.zeros((1, 100))]
+        >>> decoder_rnn_cell = tf.keras.layers.LSTMCell(100)
+        >>> attention_mechanism = tfa.seq2seq.LuongAttention(100,memory=memory, memory_sequence_length=BATCH_SIZE*[3])
        >>> attention_mechanism = tfa.seq2seq.LuongAttention(100, memory=memory, memory_sequence_length=[3] * batch_size)
nataliyah123

comment created time in 12 days

Pull request review comment tensorflow/addons

#2066 seq2seq.beamsearch

 def clone(self, **kwargs):
          Example:

-        ```python
-        initial_state = attention_wrapper.get_initial_state(
-            batch_size=..., dtype=...)
-        initial_state = initial_state.clone(cell_state=encoder_state)
-        ```
+        >>> BATCH_SIZE = 1
+        >>> memory = tf.random.normal(shape =[1,3, 100])
+        >>> encoder_state = [tf.zeros((1, 100)), tf.zeros((1, 100))]
+        >>> decoder_rnn_cell = tf.keras.layers.LSTMCell(100)
+        >>> attention_mechanism = tfa.seq2seq.LuongAttention(100,memory=memory, memory_sequence_length=BATCH_SIZE*[3])
+        >>> rnn_cell = tfa.seq2seq.AttentionWrapper(decoder_rnn_cell, attention_mechanism, attention_layer_size=10)
        >>> decoder_rnn_cell = tfa.seq2seq.AttentionWrapper(decoder_rnn_cell, attention_mechanism, attention_layer_size=10)
nataliyah123

comment created time in 12 days

Pull request review comment tensorflow/addons

#2066 seq2seq.beamsearch

 def clone(self, **kwargs):
          Example:

-        ```python
-        initial_state = attention_wrapper.get_initial_state(
-            batch_size=..., dtype=...)
-        initial_state = initial_state.clone(cell_state=encoder_state)
-        ```
+        >>> BATCH_SIZE = 1
+        >>> memory = tf.random.normal(shape =[1,3, 100])
+        >>> encoder_state = [tf.zeros((1, 100)), tf.zeros((1, 100))]
+        >>> decoder_rnn_cell = tf.keras.layers.LSTMCell(100)
+        >>> attention_mechanism = tfa.seq2seq.LuongAttention(100,memory=memory, memory_sequence_length=BATCH_SIZE*[3])
+        >>> rnn_cell = tfa.seq2seq.AttentionWrapper(decoder_rnn_cell, attention_mechanism, attention_layer_size=10)
+        >>> decoder_initial_state = rnn_cell.get_initial_state(batch_size=BATCH_SIZE, dtype=tf.float32)
        >>> decoder_initial_state = decoder_rnn_cell.get_initial_state(batch_size=batch_size, dtype=tf.float32)
nataliyah123

comment created time in 12 days

Pull request review comment tensorflow/addons

#2066 seq2seq.beamsearch

 def clone(self, **kwargs):
          Example:

-        ```python
-        initial_state = attention_wrapper.get_initial_state(
-            batch_size=..., dtype=...)
-        initial_state = initial_state.clone(cell_state=encoder_state)
-        ```
+        >>> BATCH_SIZE = 1
+        >>> memory = tf.random.normal(shape =[1,3, 100])
        >>> memory = tf.random.normal(shape=[1, 3, 100])
nataliyah123

comment created time in 12 days

PullRequestReviewEvent
PullRequestReviewEvent

pull request comment tensorflow/addons

Bump TF 2.3.1

Only in tf-version, right? Are the single entries just the first element of this list?

Yes, I think so.

WindQAQ

comment created time in 12 days

pull request comment tensorflow/addons

Bump TF 2.3.1

Do we need to handle an arbitrary list of releases, e.g. 2.3.0, 2.3.1, 2.4.0rc, etc.?

Yes

WindQAQ

comment created time in 12 days

pull request comment tensorflow/addons

Bump TF 2.3.1

We usually pin to the RC version first.

WindQAQ

comment created time in 12 days

PR closed tensorflow/addons

Bump TF 2.3.1 cla: yes github

Bump TF 2.3.1

+7 -7

7 comments

2 changed files

WindQAQ

pr closed time in 12 days

pull request comment tensorflow/addons

Bump TF 2.3.1

Let me close it first, as I don't have a good implementation of that script in mind.

WindQAQ

comment created time in 12 days

pull request comment tensorflow/addons

Bump TF 2.3.1

So, should we modify that script to make it compatible with multiple versions?

WindQAQ

comment created time in 13 days

push event tensorflow/addons

Mark Sandler

commit sha 0cb4674436376505c58bc91c13d51e1089e666fb

Makes create_slots automatically setup weights for swap_weights (#2195) * Enables moving_average optimizer to allow calling swap_weights without the need to call shadow_copy first. * Update moving_average.py

view details

push time in 14 days

PR merged tensorflow/addons

Makes create_slots automatically setup weights for swap_weights cla: yes optimizers

Description

Brief Description of the PR:

Fixes # (issue) https://github.com/tensorflow/addons/issues/2107

Type of change

  • [x] Bug fix

Checklist:

How Has This Been Tested?

Run unit-tests

+36 -3

0 comments

2 changed files

marksandler2

pr closed time in 14 days

PullRequestReviewEvent

pull request comment tensorflow/addons

Expose tfa.types doc

@WindQAQ It is the risk of https://en.m.wikipedia.org/wiki/Bystander_effect 😄

Haha, it's not a good effect actually 😆

Can you quickly explain what this is?

Okay, so the problem originated in https://github.com/tensorflow/addons/issues/2132.

In short, the public API was decided in https://github.com/tensorflow/addons/pull/2162#issuecomment-696876373

TL;DR

Changes:

  • Fixes #2132. The exposed namespace aligns with that of core TF: tf.types.experimental.* vs tfa.types.*.
  • Use explicit_package_contents_filter to filter out meaningless endpoints. This also means that only modules/functions/classes imported in __init__.py will be documented publicly (a short sketch of the docs-generator invocation follows this list). https://github.com/tensorflow/addons/pull/2162#issuecomment-696876373 lists all modules that I think should be dropped/exposed after this PR. I will expose them according to the review in this PR as well.
  • Pin to the newest tensorflow_docs version.
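For context, a minimal sketch of how such a docs build might invoke tensorflow_docs with that filter. The parameter values below are illustrative assumptions, not the exact build_docs.py in the repo:

from tensorflow_docs.api_generator import generate_lib, public_api

import tensorflow_addons as tfa

doc_generator = generate_lib.DocGenerator(
    root_title="TensorFlow Addons",
    py_modules=[("tfa", tfa)],
    code_url_prefix="https://github.com/tensorflow/addons/tree/master/tensorflow_addons",
    # Only symbols explicitly imported in each __init__.py are treated as public API.
    callbacks=[public_api.explicit_package_contents_filter],
)
doc_generator.build(output_dir="/tmp/tfa_api_docs")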
WindQAQ

comment created time in 14 days

pull request comment tensorflow/addons

Expose tfa.types doc

Hmm, team tagging is not so useful. @seanpmorgan @bhack Can you review the comments above when time allows? I would like to move this forward. Thank you!

WindQAQ

comment created time in 14 days

Pull request review comment tensorflow/addons

Makes create_slots automatically setup weights for swap_weights

 class MovingAverage(AveragedOptimizerWrapper):
     Empirically it has been found that using the moving average of the trained
     parameters of a deep network is better than using its trained parameters
     directly. This optimizer allows you to compute this moving average and swap
+    raise app.UsageError('Too many command-line arguments.')

Was this added accidentally?

marksandler2

comment created time in 14 days

PullRequestReviewEvent
PullRequestReviewEvent

Pull request review comment tensorflow/addons

Makes create_slots automatically setup weights for swap_weights

 def get_config(self):
         return {**base_config, **config}

     def _create_slots(self, var_list):
-        self._optimizer._create_slots(
-            var_list=var_list
-        )  # pylint: disable=protected-access
+        self._optimizer._create_slots(var_list=var_list)  # pylint: disable=protected-access

We don't use pylint anymore, so can you remove the # pylint: xxx comment? Thanks!

marksandler2

comment created time in 15 days

Pull request review comment tensorflow/addons

Makes create_slots automatically setup weights for swap_weights

 def test_dynamic_decay():
     np.testing.assert_allclose(ema_var0.read_value(), [0.64, 1.64])

+@pytest.mark.usefixtures("maybe_run_functions_eagerly")
+@pytest.mark.with_device([tf.distribute.MirroredStrategy])
+def test_swap_weight_no_shadow_copy():
def test_swap_weight_no_shadow_copy(device):
marksandler2

comment created time in 15 days

PullRequestReviewEvent
PullRequestReviewEvent

issue comment tensorflow/addons

Make API for custom optimizer wrappers more consistent

Hi @reedwm, thanks for bringing this up. I would prefer option 2. BTW, do you have a timeline for when the new loss scale optimizer API will land in OSS TensorFlow? We usually pin to the 2.4 RC a few days after it releases, so we want to have some time to prepare in advance. If you don't mind, we can discuss or even reveal the designated API here. Thank you!

Gentle ping to @tensorflow/sig-addons-maintainers because of the API change. We should update the related documentation as well.

Also, I want to note that because this is a huge API change, combined with https://github.com/tensorflow/addons/issues/2122, we can discuss whether we should bump our major version to 1.0.0 to inform users that there is no more backward compatibility.

reedwm

comment created time in 15 days

Pull request review comment tensorflow/addons

Update ABI compatibility version

 import tensorflow as tf

-MIN_TF_VERSION_FOR_ABI_COMPATIBILITY = "2.2.0"
-MAX_TF_VERSION_FOR_ABI_COMPATIBILITY = "2.3.0"
+MIN_TF_VERSION_FOR_ABI_COMPATIBILITY = "2.3.0"
+MAX_TF_VERSION_FOR_ABI_COMPATIBILITY = "2.4.0"

Actually, I don't know too much about that. I guess it is because

>>> LooseVersion("2.3.1") <= LooseVersion("2.3")
False
WindQAQ

comment created time in 16 days

PullRequestReviewEvent

pull request comment tensorflow/addons

Bump TF 2.3.1

It's not so easy... But it seems that we have never published the wheel against a patch-versioned TensorFlow, so we should discuss whether we should let this PR go forward.

WindQAQ

comment created time in 16 days

Pull request review comment tensorflow/addons

Update ABI compatibility version

 import tensorflow as tf

-MIN_TF_VERSION_FOR_ABI_COMPATIBILITY = "2.2.0"
-MAX_TF_VERSION_FOR_ABI_COMPATIBILITY = "2.3.0"
+MIN_TF_VERSION_FOR_ABI_COMPATIBILITY = "2.3.0"
+MAX_TF_VERSION_FOR_ABI_COMPATIBILITY = "2.4.0"

Not okay, so we check the version via min_version <= version < max_version.

https://github.com/tensorflow/addons/blob/master/tensorflow_addons/utils/resource_loader.py#L115
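Putting the two comments above together, a minimal sketch of the check being described (the real logic lives in resource_loader.py; this is only an illustration of the half-open range and the LooseVersion pitfall):

from distutils.version import LooseVersion

MIN_TF_VERSION_FOR_ABI_COMPATIBILITY = "2.3.0"
MAX_TF_VERSION_FOR_ABI_COMPATIBILITY = "2.4.0"

def abi_is_compatible(tf_version):
    # Half-open range: the max bound is exclusive, so 2.4.0 itself is rejected.
    return (
        LooseVersion(MIN_TF_VERSION_FOR_ABI_COMPATIBILITY)
        <= LooseVersion(tf_version)
        < LooseVersion(MAX_TF_VERSION_FOR_ABI_COMPATIBILITY)
    )

print(LooseVersion("2.3.1") <= LooseVersion("2.3"))  # False -- why a bare "2.3" bound misses patch releases
print(abi_is_compatible("2.3.1"))  # True
print(abi_is_compatible("2.4.0"))  # False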

WindQAQ

comment created time in 16 days

PullRequestReviewEvent

issue comment tensorflow/addons

Deprecate all functions/arguments that should be deprecated in Addons 0.12

Another thought is that we can bump our major version to 1.0.0. We can finalize our optimizer API consistency in the next major version as well.

https://github.com/tensorflow/addons/issues/2187

WindQAQ

comment created time in 16 days

pull request comment tensorflow/addons

Bump TF 2.3.1

Is this going to break https://github.com/tensorflow/addons/blob/master/tools/update_release_version.sh?

Oops, I didn't even notice that we have this amazing tool. And yes, it will break it.

WindQAQ

comment created time in 16 days

PR opened tensorflow/addons

Update ABI compatibility version

Fix https://colab.research.google.com/drive/19AFVhLDcFa1OHMkaZm5K3alytnxODaGh?usp=sharing. This is not the desired warning; we built the custom ops against TF 2.3.*.

+2 -2

0 comments

1 changed file

pr created time in 16 days

create branch WindQAQ/addons

branch : update-abi-compatibility-version

created branch time in 16 days

PR opened tensorflow/addons

Bump TF 2.3.1

Bump TF 2.3.1

+7 -7

0 comments

2 changed files

pr created time in 16 days

create branch WindQAQ/addons

branch : bump-tf-2.3.1

created branch time in 16 days

push event WindQAQ/addons

nataliyah123

commit sha 5ba7e214553d73183c3f56f2ff61efd2f10272b5

#2066-doc-update-layers-v1 (#2177) * layers * fixed flake8 errors * different got in local and github * requested changes * requested_Changes 2 * Update tensorflow_addons/layers/polynomial.py * Update tensorflow_addons/layers/polynomial.py * Update tensorflow_addons/layers/polynomial.py * Update tensorflow_addons/layers/multihead_attention.py * Update tensorflow_addons/layers/multihead_attention.py * Update tensorflow_addons/layers/multihead_attention.py * Update tensorflow_addons/layers/multihead_attention.py * Update tensorflow_addons/layers/multihead_attention.py * Update tensorflow_addons/layers/spectral_normalization.py * Update tensorflow_addons/layers/spectral_normalization.py * Update tensorflow_addons/layers/wrappers.py * Update tensorflow_addons/layers/wrappers.py Co-authored-by: Tzu-Wei Sung <windqaq@gmail.com>

view details

Tzu-Wei Sung

commit sha 392f36c9e0c32b42567668b8811e55c7b90e0141

Fix MultiOptimizer list of layers (#2180) * Fix MultiOptimizer list of layers * Fix name * Remove unused tests * Change list to iterable * Update doc * Update code snippet * Update doc * Back to list * Update error message * Update doc * Fix tmpdir fixture * Fix tmpdir * Update doc * Add test on tf.keras.Model * Add nested model tests * Better naming * Add custom subclass model tests * Inherit from Layer * Move assert_not_allclose to test_utils * Change input to ones * Inherit from Model * Test all weights instead of first one * Update doc

view details

Sean Morgan

commit sha 3b3402bdea5b626c44088065c1c91c74c2babdb9

Add information on ecosystem review for new contributions (#2157) * Add information on grace period feature requests * Update label

view details

Sean Morgan

commit sha 07017131091433642643fdeac5988adb071605cd

* Remove subpackage maintainers (#2185)

view details

Sean Morgan

commit sha 6552d5771d4a1ebffd4b2cf374a5a2315e2f8f3b

Publish dev container (#1888) * Add dev container build * Add docker upload on commit push * Modify pre-commit to work in dev container

view details

Sean Morgan

commit sha f0e00154e2c937aae19dc271cc0768c829504166

Update release.yml

view details

Sean Morgan

commit sha 9dd2389124a52e6cc298eba7d16c6732ec484d30

Update release.yml Add git checkout

view details

Sean Morgan

commit sha 88a3526a301def529ba379b5fee2837f0bc9e26d

Update release.yml Use password-stdin

view details

Sean Morgan

commit sha 07ed7acb047c3c2161f5ed70eae90350c4d995ba

* Add support for ARM architecture build from source (#2182)

view details

rybakov

commit sha cc621dcfee0f7652df7e5f456a88f73dbf242b69

make cutout op compatible with non eager mode (#2190) cutout op is not compatible with non eager mode, this is a fix

view details

Sean Morgan

commit sha 7f7c97d219a7d6ef026121a1c0e0ad9301aea504

Update build_dev_container.sh (#2189)

view details

push time in 16 days

PR closed tensorflow/tensorflow

Reviewers
Add ImageProjectiveTransform XLA kernel cla: yes size:L

As per comment https://github.com/tensorflow/tensorflow/pull/41365#issuecomment-658818273, cc @tanzhenyu for visibility.

+428 -0

6 comments

5 changed files

WindQAQ

pr closed time in 16 days

pull request comment tensorflow/tensorflow

Add ImageProjectiveTransform XLA kernel

Yep, sure. As it doesn't improve the performance, let me close it.

WindQAQ

comment created time in 16 days

pull request comment tensorflow/tensorflow

Add ImageProjectiveTransform XLA kernel

@tanzhenyu Do I have to do something specific on the implementation? Thank you!

WindQAQ

comment created time in 16 days

push event tensorflow/addons

Sean Morgan

commit sha 07ed7acb047c3c2161f5ed70eae90350c4d995ba

* Add support for ARM architecture build from source (#2182)

view details

push time in 22 days

PR merged tensorflow/addons

Add support for ARM architecture build from source build cla: yes

Description

Add support for building from source for ARM64 architectures. Requests for this can be seen in https://github.com/tensorflow/addons/issues/1982

Type of change

  • [x] Build improvement

Checklist:

+1 -0

4 comments

1 changed file

seanpmorgan

pr closed time in 22 days

PullRequestReviewEvent

pull request comment tensorflow/tensorflow

Add training call argument for MultiHeadAttention

Is the training argument required? I thought Keras will automatically inject training from models. Would something not work if it is inside a tf.Module?

Hi @saberkun, it will be injected when the layer is used inside a tf.keras.Model or tf.keras.layers.Layer, but it will not work inside a tf.Module. I think this is not a common case, actually.
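To make the distinction concrete, here is a small sketch with a toy layer (illustrative only): Keras propagates the training flag down to sub-layers when they live inside a Keras model, whereas inside a plain tf.Module the caller has to thread it through explicitly.

import tensorflow as tf

class ToggleLayer(tf.keras.layers.Layer):
    # Behaves differently in training vs. inference mode.
    def call(self, inputs, training=None):
        return inputs * 0.0 if training else inputs

# Inside a Keras model, the training argument is propagated automatically
# from the outer call (and set implicitly during fit()/predict()).
model = tf.keras.Sequential([ToggleLayer()])
print(model(tf.ones((1, 2)), training=True).numpy())   # [[0. 0.]]
print(model(tf.ones((1, 2)), training=False).numpy())  # [[1. 1.]]

# Inside a bare tf.Module, nothing fills in training for you,
# so it has to be passed by hand.
class Wrapper(tf.Module):
    def __init__(self):
        super().__init__()
        self.layer = ToggleLayer()

    def __call__(self, x, training=False):
        return self.layer(x, training=training)

print(Wrapper()(tf.ones((1, 2)), training=True).numpy())  # [[0. 0.]]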

WindQAQ

comment created time in 23 days

pull request comment tensorflow/addons

Add support for ARM architecture build from source

Is there any way to test the ARM build on CI?

seanpmorgan

comment created time in 24 days

push event tensorflow/addons

Sean Morgan

commit sha 07017131091433642643fdeac5988adb071605cd

* Remove subpackage maintainers (#2185)

view details

push time in 24 days

PR merged tensorflow/addons

Remove subpackage maintainer concept cla: yes github

Description

The repo maintainers have discussed this, and we feel that the subpackage maintainer concept has grown stale. Originally we wanted to split subpackages amongst ourselves so that small groups of maintainers would serve as the failsafe for given subpackages. In practice this meant every subpackage had a very limited number of people who were ultimately responsible. This created unnecessary stress on maintainers, who felt they were required to keep working on the project in order for the subpackage to remain viable.

Ultimately all write-access maintainers can help across the repo and have been doing so for an extended period of time. However, there is an added need for reliable submodule owners, which is what we've discovered over the past few months. We'll be working on improving that system through https://github.com/tensorflow/addons/pull/2024

Type of change

  • [x] Updated or additional documentation

Checklist:

  • [x] I've properly formatted my code according to the guidelines
+6 -24

0 comments

2 changed files

seanpmorgan

pr closed time in 24 days

PullRequestReviewEvent

push event tensorflow/addons

Tzu-Wei Sung

commit sha 392f36c9e0c32b42567668b8811e55c7b90e0141

Fix MultiOptimizer list of layers (#2180) * Fix MultiOptimizer list of layers * Fix name * Remove unused tests * Change list to iterable * Update doc * Update code snippet * Update doc * Back to list * Update error message * Update doc * Fix tmpdir fixture * Fix tmpdir * Update doc * Add test on tf.keras.Model * Add nested model tests * Better naming * Add custom subclass model tests * Inherit from Layer * Move assert_not_allclose to test_utils * Change input to ones * Inherit from Model * Test all weights instead of first one * Update doc

view details

push time in 25 days

PR merged tensorflow/addons

Fix MultiOptimizer list of layers cla: yes optimizers

Description

Fixes #2178

Type of change

Checklist:

  • [x] I've properly formatted my code according to the guidelines
    • [ ] By running Black + Flake8
    • [ ] By running pre-commit hooks
  • [ ] This PR addresses an already submitted issue for TensorFlow Addons
  • [ ] I have made corresponding changes to the documentation
  • [ ] I have added tests that prove my fix is effective or that my feature works
  • [ ] This PR contains modifications to C++ custom-ops

How Has This Been Tested?

Additional test.

+282 -101

6 comments

3 changed files

WindQAQ

pr closed time in 25 days

issue closed tensorflow/addons

Error in MultiOptimizer when layers list are used

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04
  • TensorFlow version and how it was installed (source or binary): docker tensorflow
  • TensorFlow-Addons version and how it was installed (source or binary): binary
  • Is GPU used? (yes/no):

Describe the bug

The documentation of MultiOptimizer explains that a list of layers can be used, but doing so results in an error.

https://github.com/tensorflow/addons/blob/c7b867ae6c01e9ac3dbb8c3408f1308b41acfc9b/tensorflow_addons/optimizers/discriminative_layer_training.py#L38

https://github.com/tensorflow/addons/blob/c7b867ae6c01e9ac3dbb8c3408f1308b41acfc9b/tensorflow_addons/optimizers/discriminative_layer_training.py#L52

Code to reproduce the issue


import tensorflow as tf
import tensorflow_addons as tfa
import numpy as np


model = tf.keras.Sequential(
        [tf.keras.Input(shape=[1]), tf.keras.layers.Dense(1), tf.keras.layers.Dense(1), tf.keras.layers.Dense(1)]
    )

x = np.array(np.ones([100]))
y = np.array(np.ones([100]))

weights_before_train = (
    model.layers[0].weights[0].numpy(),
    model.layers[1].weights[0].numpy(),
)

opt1 = tf.keras.optimizers.Adam(learning_rate=1e-3)
opt2 = tf.keras.optimizers.SGD(learning_rate=0)

opt_layer_pairs = [(opt1, model.layers[0]), (opt2, model.layers[1:])]

loss = tf.keras.losses.MSE
optimizer = tfa.optimizers.MultiOptimizer(opt_layer_pairs)

model.compile(optimizer=optimizer, loss=loss)

Error produced

Traceback (most recent call last):
  File "test_multi_optimizer.py", line 24, in <module>
    optimizer = tfa.optimizers.MultiOptimizer(opt_layer_pairs)
  File "/usr/local/lib/python3.6/dist-packages/typeguard/__init__.py", line 840, in wrapper
    retval = func(*args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_addons/optimizers/discriminative_layer_training.py", line 92, in __init__
    for opt, layer in optimizers_and_layers
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_addons/optimizers/discriminative_layer_training.py", line 92, in <listcomp>
    for opt, layer in optimizers_and_layers
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_addons/optimizers/discriminative_layer_training.py", line 142, in create_optimizer_spec
    ), "Object passed is not an instance of tf.keras.layers.Layer nor tf.keras.Model"
AssertionError: Object passed is not an instance of tf.keras.layers.Layer nor tf.keras.Model

closed time in 25 days

guillaumelorre28

Pull request review comment tensorflow/addons

Fix MultiOptimizer list of layers

 class MultiOptimizer(tf.keras.optimizers.Optimizer):
     """Multi Optimizer Wrapper for Discriminative Layer Training.

-    Creates a wrapper around a set of instantiated optimizer layer pairs. Generally useful for transfer learning
-    of deep networks.
+    Creates a wrapper around a set of instantiated optimizer layer pairs.
+    Generally useful for transfer learning of deep networks.

-    Each optimizer will optimize only the weights associated with its paired layer. This can be used
-    to implement discriminative layer training by assigning different learning rates to each optimizer
-    layer pair. (Optimizer, list(Layers)) pairs are also supported. Please note that the layers must be
-    instantiated before instantiating the optimizer.
+    Each optimizer will optimize only the weights associated with its paired layer.
+    This can be used to implement discriminative layer training by assigning
+    different learning rates to each optimizer layer pair.
+    `(tf.keras.optimizers.Optimizer, List[tf.keras.layers.Layer])` pairs are also supported.

It's (optimizer, List[layer]), where () stands for a tuple.
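For reference, a sketch of the intended usage being discussed, adapted from the reproduction in the issue above (values are illustrative): each entry pairs an optimizer with either a single layer or a list of layers.

import tensorflow as tf
import tensorflow_addons as tfa

model = tf.keras.Sequential(
    [tf.keras.Input(shape=[1]), tf.keras.layers.Dense(1), tf.keras.layers.Dense(1), tf.keras.layers.Dense(1)]
)

optimizers_and_layers = [
    (tf.keras.optimizers.Adam(learning_rate=1e-3), model.layers[0]),  # (optimizer, layer)
    (tf.keras.optimizers.SGD(learning_rate=1e-4), model.layers[1:]),  # (optimizer, list of layers)
]

optimizer = tfa.optimizers.MultiOptimizer(optimizers_and_layers)
model.compile(optimizer=optimizer, loss="mse")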

WindQAQ

comment created time in 25 days

PullRequestReviewEvent

issue comment tensorflow/addons

Testable docstrings

@tacho090 sure, feel free to take any module you want to deal with and request my review :-)

WindQAQ

comment created time in a month

push event WindQAQ/addons

Tzu-Wei Sung

commit sha 7db5362509ddb43cd1a92832deca0b10c118a633

Update doc

view details

push time in a month

push event WindQAQ/addons

Tzu-Wei Sung

commit sha f6443080280033de66440f66d9d1a66164506958

Test all weights instead of first one

view details

push time in a month

push event tensorflow/addons

nataliyah123

commit sha 5ba7e214553d73183c3f56f2ff61efd2f10272b5

#2066-doc-update-layers-v1 (#2177) * layers * fixed flake8 errors * different got in local and github * requested changes * requested_Changes 2 * Update tensorflow_addons/layers/polynomial.py * Update tensorflow_addons/layers/polynomial.py * Update tensorflow_addons/layers/polynomial.py * Update tensorflow_addons/layers/multihead_attention.py * Update tensorflow_addons/layers/multihead_attention.py * Update tensorflow_addons/layers/multihead_attention.py * Update tensorflow_addons/layers/multihead_attention.py * Update tensorflow_addons/layers/multihead_attention.py * Update tensorflow_addons/layers/spectral_normalization.py * Update tensorflow_addons/layers/spectral_normalization.py * Update tensorflow_addons/layers/wrappers.py * Update tensorflow_addons/layers/wrappers.py Co-authored-by: Tzu-Wei Sung <windqaq@gmail.com>

view details

push time in a month

PR merged tensorflow/addons

Reviewers
#2066-doc-update-layers-v1 cla: yes layers

part of #2066

+49 -52

2 comments

4 changed files

nataliyah123

pr closed time in a month
