Yan Facai (颜发才), facaiy. Software Engineer. Beijing, China. linkedin.com/in/facaiy

facaiy/math-expression-parser 11

A Scala library for parsing mathematical expressions, with support for parentheses and variables.

facaiy/book_notes 4

Organized notes on the books I have read

facaiy/DAG-lite 4

An experimental DAG library built with functional programming techniques.

facaiy/facaiy.github.io 2

Personal blog

facaiy/facaiy-scala-with-java-quickstart 1

An archetype which creates a mixed Java/Scala project.

facaiy/scikit-learn 1

scikit-learn: machine learning in Python

dynamicwebpaige/community-1 0

Stores documents used by the TensorFlow developer community

facaiy/addons 0

Useful extra functionality for TensorFlow maintained by SIG-addons

facaiy/angel 0

A Flexible and Powerful Parameter Server for large-scale machine learning

facaiy/business-card 0

A business card in LaTeX.

push event facaiy/facaiy.github.io

Yan Facai (颜发才)

commit sha 4067d1b2ec9b941d56bc70bad11784a1729fadc2

Wishlist: August

view details

push time in 7 days

push event facaiy/facaiy.github.io

Yan Facai (颜发才)

commit sha 8e57d31b8a82abc37adeded6185e05a11ee26bed

Fix

view details

push time in 14 days

push event facaiy/facaiy.github.io

Yan Facai (颜发才)

commit sha cdea2512f1069753a3d7678ef861db3eea2b5187

Finish the first subsection

view details

Yan Facai (颜发才)

commit sha 39d1e898ee0c811c9fcb03a07c660801d70d746a

Partial draft

view details

Yan Facai (颜发才)

commit sha 1d54af79986406c68d22e1c7e3ff020ff28844e4

update

view details

Yan Facai (颜发才)

commit sha cb117374f8c13ced06fcc737b1e8a67730478cf0

Merge branch 'let_me_eat_your_pancreas'

view details

Yan Facai (颜发才)

commit sha d3e7296f3e9e18fc7ec55d98121e68adf30dd462

Publish comic review

view details

push time in 14 days

push event facaiy/facaiy.github.io

Yan Facai (颜发才)

commit sha 1dc5ae90301e92311c8bf30879483ba7505b825d

Wishlist: July update

view details

push time in 2 months

pull request comment tensorflow/addons

Use tf.keras.backend.epsilon() as dtype

Hi, Tzu-Wei, I know there is one way to serialize tf.dtypes.DType:

In [21]: tf.float32.as_datatype_enum
Out[21]: 1

In [22]: tf.as_dtype(tf.float32.as_datatype_enum)
Out[22]: tf.float32
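
For instance, a minimal sketch of how that round-trip could be used to keep a dtype in a plain config dict (the helper names below are made up, just for illustration):

    import tensorflow as tf

    def dtype_to_config(dtype):
        # Store the dtype as its plain-int enum value, which is JSON-friendly.
        return {"dtype": tf.as_dtype(dtype).as_datatype_enum}

    def dtype_from_config(config):
        # Recover the tf.dtypes.DType from the stored enum value.
        return tf.as_dtype(config["dtype"])

    assert dtype_from_config(dtype_to_config(tf.float32)) == tf.float32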
WindQAQ

comment created time in 2 months

pull request comment tensorflow/addons

Install setuptools first

@gabrieldemarmiesse @seanpmorgan Hi, the PR makes sense to me; could you double-check it? Thanks


WindQAQ

comment created time in 2 months

pull request comment tensorflow/addons

Set shape for dense image warp

Hi, Tzu-Wei, sorry for misleading you. I meant to use the helper function to replace tf.shape here. Does it work? https://github.com/tensorflow/addons/blob/9cdc1855af5dcbfa61584203328775a6b733ceff/tensorflow_addons/image/dense_image_warp.py#L244-L248

WindQAQ

comment created time in 2 months

pull request comment tensorflow/addons

Set shape for dense image warp

@WindQAQ Tzu-Wei, is it possible to create a private helper function which attempts to get the static shape?

    def _get_shape(x):
        # Prefer static dimensions; fall back to the dynamic shape where unknown.
        static_shape = x.shape.as_list()
        dynamic_shape = tf.shape(x)
        shape = []
        for idx, dim in enumerate(static_shape):
            if dim is None:
                shape.append(dynamic_shape[idx])
            else:
                shape.append(dim)
        return shape
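
For illustration, a possible call site (the shapes here are made up, not taken from the PR), where only the batch dimension is unknown until runtime:

    import tensorflow as tf

    @tf.function(input_signature=[tf.TensorSpec([None, 28, 28, 3], tf.float32)])
    def flatten(images):
        # batch comes back as a scalar tensor; the other dims are Python ints.
        batch, height, width, channels = _get_shape(images)
        return tf.reshape(images, [batch, height * width * channels])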
WindQAQ

comment created time in 2 months

Pull request review comment tensorflow/addons

Clean up .assign in optimizers

 def _resource_apply_dense(self, grad, var, apply_state=None):
             )
             s = top_singular_vector
-        var_update_tensor = tf.math.multiply(var, lr) - (1 - lr) * lambda_ * s
-        var_update_kwargs = {
-            "resource": var.handle,
-            "value": var_update_tensor,
-        }
-        var_update_op = tf.raw_ops.AssignVariableOp(**var_update_kwargs)
-        return tf.group(var_update_op)
+        return var.assign(
+            var * lr - (1 - lr) * lambda_ * s, use_locking=self._use_locking

How about creating a temp variable for the expression?

var_update = xxxx
var.assign(xxx)
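
Concretely, something along these lines inside _resource_apply_dense (just a sketch based on the expression in the diff; the variable name is illustrative):

        # Sketch: name the update expression before assigning it.
        var_update = var * lr - (1 - lr) * lambda_ * s
        return var.assign(var_update, use_locking=self._use_locking)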
WindQAQ

comment created time in 2 months

pull request comment tensorflow/addons

Add deprecation warning

Sounds good to me. Should we revert the change and add the warning?

Tzu-Wei, sorry, I think you're right. Let's revert #1980 and remove the argument after the next release. What do you think?

WindQAQ

comment created time in 2 months

pull request comment tensorflow/addons

Drop data_format argument

Thanks, I'm fine with it

WindQAQ

comment created time in 2 months

pull request comment tensorflow/addons

Drop data_format argument

Which package should we use, Tzu-Wei? e.g. deprecation, etc.

WindQAQ

comment created time in 2 months

pull request comment tensorflow/addons

Drop data_format argument

Great, it's not necessary to revert the change; let's just add the warning :-)

WindQAQ

comment created time in 2 months

Pull request review comment tensorflow/addons

Port ProximalAdagrad

+# Copyright 2020 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Proximal Adagrad optimizer."""
+
+import tensorflow as tf
+
+from tensorflow_addons.utils.types import FloatTensorLike
+
+from typing import Union, Callable
+from typeguard import typechecked
+
+
+@tf.keras.utils.register_keras_serializable(package="Addons")
+class ProximalAdagrad(tf.keras.optimizers.Optimizer):
+    """Optimizer that implements the Proximal Adagrad algorithm.
+
+    References:
+        - [Efficient Learning using Forward-Backward Splitting](
+          http://papers.nips.cc/paper/3793-efficient-learning-using-forward-backward-splitting.pdf).
+    """
+
+    @typechecked
+    def __init__(
+        self,
+        learning_rate: Union[FloatTensorLike, Callable] = 0.001,
+        initial_accumulator_value: float = 0.1,
+        l1_regularization_strength: float = 0.0,
+        l2_regularization_strength: float = 0.0,
+        name: str = "ProximalAdagrad",
+        **kwargs
+    ):
+        """Construct a new Proximal Adagrad optimizer.
+
+        Args:
+            learning_rate: A Tensor or a floating point value, or a schedule
+                that is a `tf.keras.optimizers.schedules.LearningRateSchedule`.
+                The learning rate.
+            initial_accumulator_value: A floating point value.
+                Starting value for the accumulators, must be positive.
+            l1_regularization_strength: A floating point value.
+                The l1 regularization term, must be greater than or
+                equal to zero.
+            l2_regularization_strength: A floating point value.
+                The l2 regularization term, must be greater than or
+                equal to zero.
+            name: Optional name for the operations created when applying
+                gradients. Defaults to "ProximalAdagrad".
+            **kwargs: keyword arguments. Allowed to be {`clipnorm`,
+                `clipvalue`, `lr`, `decay`}. `clipnorm` is clip gradients
+                by norm; `clipvalue` is clip gradients by value, `decay` is
+                included for backward compatibility to allow time inverse
+                decay of learning rate. `lr` is included for backward
+                compatibility, recommended to use `learning_rate` instead.
+        Raises:
+            ValueError: If the `initial_accumulator_value`,
+                `l1_regularization_strength` or `l2_regularization_strength`
+                is invalid.
+        """
+        if initial_accumulator_value < 0.0:
+            raise ValueError("`initial_accumulator_value` must be non-negative.")
+        if l1_regularization_strength < 0.0:
+            raise ValueError("`l1_regularization_strength` must be non-negative.")
+        if l2_regularization_strength < 0.0:
+            raise ValueError("`l2_regularization_strength` must be non-negative.")
+        super().__init__(name, **kwargs)
+        self._set_hyper("learning_rate", kwargs.get("lr", learning_rate))
+        self.l1_regularization_strength = l1_regularization_strength
+        self.l2_regularization_strength = l2_regularization_strength

Tzu-Wei, perhaps we could use self._set_hyper for l1 and l2, what do you think?
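
For example, roughly (a sketch that follows the existing self._set_hyper("learning_rate", ...) call in the diff):

        # Sketch: register the strengths as hyper-parameters instead of plain attributes.
        self._set_hyper("l1_regularization_strength", l1_regularization_strength)
        self._set_hyper("l2_regularization_strength", l2_regularization_strength)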

WindQAQ

comment created time in 2 months

Pull request review comment tensorflow/addons

Port ProximalAdagrad


I agree, l1 and l2 are more concise :-)

WindQAQ

comment created time in 2 months

pull request comment tensorflow/addons

Drop data_format argument

Should we have some warning on deprecated arguments?

I think it's a good idea to warn users that the argument no longer has any effect, and we can safely remove those warnings after 2-3 releases :-)
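
As a rough sketch, using the standard-library warnings module (the class name below is hypothetical, just to show the pattern):

    import warnings

    class SomeImageLayer:  # hypothetical class, for illustration only
        def __init__(self, data_format=None, **kwargs):
            if data_format is not None:
                warnings.warn(
                    "`data_format` is deprecated and has no effect; "
                    "it will be removed in a future release.",
                    DeprecationWarning,
                )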

WindQAQ

comment created time in 2 months

issue comment tensorflow/addons

MovingAverage num_updates support Variable

Could you explain the pros and cons of the change?

fsx950223

comment created time in 3 months

Pull request review comment tensorflow/addons

Port ProximalAdagrad

+# Copyright 2020 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Proximal Adagrad optimizer."""
+
+import tensorflow as tf
from typing import Callable, Union

import tensorflow as tf
from typeguard import typechecked

from tensorflow_addons.utils.types import FloatTensorLike
WindQAQ

comment created time in 3 months

Pull request review comment tensorflow/addons

Port ProximalAdagrad


How about creating l1_regularxx and l2_regularxx as hyper-parameters?

WindQAQ

comment created time in 3 months

push event tensorflow/addons

Tzu-Wei Sung

commit sha 5f746971d0d9491716f2f13206299a2c45941b0c

Use tf.raw_ops instead of private API (#1975)

view details

push time in 3 months

PR merged tensorflow/addons

Use tf.raw_ops instead of private API for novograd

Labels: cla: yes, optimizers, test-cases

Finally get rid of one :-)

+13 -17

1 comment

2 changed files

WindQAQ

pr closed time in 3 months

issue comment tensorflow/addons

Add Mixout module

BTW, I would say we can place it in layers, but I'd like to see other members' opinions

+1, it would be better if we created a new subclass for it, e.g. Dropout

crystina-z

comment created time in 3 months

push event facaiy/facaiy.github.io

Yan Facai (颜发才)

commit sha 9393cdfed5bb1540ff928ac29d2812910ddcbf5d

Update: first half of 2020

view details

push time in 3 months

push event facaiy/facaiy.github.io

Yan Facai (颜发才)

commit sha 1d54af79986406c68d22e1c7e3ff020ff28844e4

update

view details

push time in 3 months

pull request comment tensorflow/tensorflow

add assert_element_shape method for tf.contrib.data

@dhdaines Hi, David, would you please file a new issue for it? Thanks :-)

facaiy

comment created time in 3 months

issue closed tensorflow/addons

Does tfa.metrics.F1Score calculate f1 score on each batch (or) on full eval set?

Please update the documentation to clarify whether the metric calculates F1 on each batch and then averages the results, or calculates it on the full validation set.

closed time in 3 months

John-8704

issue comment tensorflow/addons

Does tfa.metrics.F1Score calculate f1 score on each batch (or) on full eval set?

I'll close the issue, and feel free to reopen it :-)

John-8704

comment created time in 3 months

pull request comment tensorflow/addons

CRF: Add scores for decoded tags to crf_decode

Thanks, Tanja. I'm wondering if we could add a test case for the confidence scores. What do you think, Dheeraj @Squadrick?

tabergma

comment created time in 3 months
