Zhiyong (neilteng), Georgia Institute of Technology

neilteng/CS7643-DeepLearning 3

CS7643-DeepLearning

neilteng/A-practice-on-Association-Rule-Mining 0

A practice on association rule mining with mlxtend and Python

neilteng/A-practice-on-binary-Classification 0

A practice on binary classification with scikit-learn

neilteng/Adaboost-python 0

Python AdaBoost for continuous/discrete features

neilteng/Apriori 0

C++ implementation of the Apriori algorithm

neilteng/backtobackswe 0

Legacy Code Examples For Back To Back SWE Lessons

neilteng/BFS 0

python

neilteng/coursera 0

Data sets and scripts for Coursera Big Data Specialization.

push event neilteng/CppLibrary

Zhiyong

commit sha fc32358a870b9b92554ff8eab8385ffcad7b36b2

Update KMP.java add comments


push time in a month

push event neilteng/CppLibrary

Zhiyong

commit sha 4fb61475d6e4e54678aa9c8a4f24e0d114a0ebf2

Update and rename KMP.cpp to KMP.java


push time in a month

push event neilteng/CppLibrary

Zhiyong

commit sha 2240b5f234344693edb6793f22f714c6d0bb0751

Delete KMP


push time in a month

push event neilteng/CppLibrary

Zhiyong

commit sha 932f05c6cf387a04d17357938fdce43660f1e004

Update rolling_hash_aka_Rabin_Karp.cpp


push time in a month

push event neilteng/CppLibrary

Zhiyong

commit sha b01ab6f01b67da78571616ea8e0a7706d42da607

Create Modulo computaion


push time in a month

issue comment tensorflow/tensorflow

Documentation instructions on installing tensorflow with CUDA support don't work

facing the exact same issue:

sudo apt --fix-broken install
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Correcting dependencies... Done
The following packages were automatically installed and are no longer required:
  cuda-command-line-tools-10-1 cuda-compiler-10-1 cuda-cudart-10-1 cuda-cudart-dev-10-1 cuda-cufft-10-1 cuda-cufft-dev-10-1 cuda-cuobjdump-10-1 cuda-cupti-10-1 cuda-curand-10-1 cuda-curand-dev-10-1
  cuda-cusolver-10-1 cuda-cusolver-dev-10-1 cuda-cusparse-10-1 cuda-cusparse-dev-10-1 cuda-demo-suite-10-1 cuda-documentation-10-1 cuda-driver-dev-10-1 cuda-drivers cuda-drivers-450 cuda-gdb-10-1
  cuda-gpu-library-advisor-10-1 cuda-libraries-10-1 cuda-libraries-dev-10-1 cuda-license-10-1 cuda-license-10-2 cuda-memcheck-10-1 cuda-misc-headers-10-1 cuda-npp-10-1 cuda-npp-dev-10-1 cuda-nsight-10-1
  cuda-nsight-compute-10-1 cuda-nsight-systems-10-1 cuda-nvcc-10-1 cuda-nvdisasm-10-1 cuda-nvgraph-10-1 cuda-nvgraph-dev-10-1 cuda-nvjpeg-10-1 cuda-nvjpeg-dev-10-1 cuda-nvml-dev-10-1 cuda-nvprof-10-1
  cuda-nvprune-10-1 cuda-nvrtc-10-1 cuda-nvrtc-dev-10-1 cuda-nvtx-10-1 cuda-nvvp-10-1 cuda-runtime-10-1 cuda-samples-10-1 cuda-sanitizer-api-10-1 cuda-toolkit-10-1 cuda-tools-10-1 cuda-visual-tools-10-1
  dkms freeglut3 freeglut3-dev libatomic1:i386 libbsd0:i386 libcublas-dev libcublas10 libdrm-amdgpu1:i386 libdrm-dev libdrm-intel1:i386 libdrm-nouveau2:i386 libdrm-radeon1:i386 libdrm2:i386
  libedit2:i386 libelf1:i386 libexpat1:i386 libffi6:i386 libgl1:i386 libgl1-mesa-dev libgl1-mesa-dri:i386 libglapi-mesa:i386 libgles1 libglu1-mesa-dev libglvnd-core-dev libglvnd-dev libglvnd0:i386
  libglx-mesa0:i386 libglx0:i386 libice-dev libllvm9:i386 libnvidia-cfg1-450 libnvidia-common-440 libnvidia-common-450 libnvidia-compute-450 libnvidia-decode-450 libnvidia-encode-450 libnvidia-extra-440
  libnvidia-fbc1-450 libnvidia-gl-450 libnvidia-ifr1-450 libopengl0 libpciaccess0:i386 libpthread-stubs0-dev libsensors4:i386 libsm-dev libstdc++6:i386 libx11-6:i386 libx11-dev libx11-xcb-dev
  libx11-xcb1:i386 libxau-dev libxau6:i386 libxcb-dri2-0:i386 libxcb-dri2-0-dev libxcb-dri3-0:i386 libxcb-dri3-dev libxcb-glx0:i386 libxcb-glx0-dev libxcb-present-dev libxcb-present0:i386
  libxcb-randr0-dev libxcb-render0-dev libxcb-shape0-dev libxcb-sync-dev libxcb-sync1:i386 libxcb-xfixes0-dev libxcb1:i386 libxcb1-dev libxdamage-dev libxdamage1:i386 libxdmcp-dev libxdmcp6:i386
  libxext-dev libxext6:i386 libxfixes-dev libxfixes3:i386 libxi-dev libxmu-dev libxmu-headers libxnvctrl0 libxshmfence-dev libxshmfence1:i386 libxt-dev libxxf86vm-dev libxxf86vm1:i386 mesa-common-dev
  nsight-compute-2020.1.0 nsight-systems-2019.5.2 nvidia-compute-utils-450 nvidia-dkms-450 nvidia-driver-450 nvidia-kernel-common-450 nvidia-kernel-source-450 nvidia-modprobe nvidia-prime
  nvidia-settings nvidia-utils-450 pkg-config screen-resolution-extra x11proto-core-dev x11proto-damage-dev x11proto-dev x11proto-fixes-dev x11proto-input-dev x11proto-xext-dev x11proto-xf86vidmode-dev
  xorg-sgml-doctools xserver-xorg-video-nvidia-450 xtrans-dev
Use 'sudo apt autoremove' to remove them.
The following additional packages will be installed:
  libnvidia-compute-450
The following NEW packages will be installed:
  libnvidia-compute-450
0 upgraded, 1 newly installed, 0 to remove and 201 not upgraded.
124 not fully installed or removed.
Need to get 0 B/21.8 MB of archives.
After this operation, 115 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
(Reading database ... 201507 files and directories currently installed.)
Preparing to unpack .../libnvidia-compute-450_450.36.06-0ubuntu1_amd64.deb ...
Unpacking libnvidia-compute-450:amd64 (450.36.06-0ubuntu1) ...
dpkg: error processing archive /var/cache/apt/archives/libnvidia-compute-450_450.36.06-0ubuntu1_amd64.deb (--unpack):
 trying to overwrite '/usr/lib/x86_64-linux-gnu/libnvidia-allocator.so', which is also in package libnvidia-extra-440:amd64 440.100-0ubuntu0.18.04.1
Errors were encountered while processing:
 /var/cache/apt/archives/libnvidia-compute-450_450.36.06-0ubuntu1_amd64.deb

dhiegomaga

comment created time in a month

issue comment tensorflow/tensorflow

2 errors while building NodeDef 'tf_op_layer_Maximum_2/Maximum_2' using Op<name=Maximum; signature=x:T, y:T -> z:T ...Inconsistent values for attr 'T' DT_INT32 vs. DT_INT64

I think I accidentally deleted the notebook, and I cannot recover it. If we are lucky and you guys have a copy, I will work on it again. Otherwise, I can only close this issue for now.

neilteng

comment created time in a month

issue closed tensorflow/tensorflow

2 errors while building NodeDef 'tf_op_layer_Maximum_2/Maximum_2' using Op<name=Maximum; signature=x:T, y:T -> z:T ...Inconsistent values for attr 'T' DT_INT32 vs. DT_INT64

System information: I am using Colab to reproduce the issue; the ipynb is attached below.

You can collect some of this information using our environment capture. tf.version.GIT_VERSION: v1.12.1-32511-g2cc80a74f2; tf.version.VERSION: 2.3.0-dev20200525

Describe the current behavior: cannot load the saved TF model.

Describe the expected behavior: successfully save the model and serve it like this example: https://github.com/tensorflow/transform/blob/master/examples/census_example_v2_test.py

Standalone code to reproduce the issue: https://colab.research.google.com/drive/1h2QIX_QZetIzSuG0J6lNWkHoSa2nnIyS?usp=sharing

Other info / logs: Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.

The error is shown in the last cell of the Colab notebook.

WARNING:tensorflow:Layer LSTM_1 will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU
WARNING:tensorflow:Layer LSTM_1 will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU
WARNING:tensorflow:Layer LSTM_1 will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU
WARNING:tensorflow:Layer LSTM_1 will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU
WARNING:tensorflow:Layer LSTM_1 will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU
WARNING:tensorflow:Layer LSTM_1 will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU
WARNING:tensorflow:Layer LSTM_2 will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU
WARNING:tensorflow:Layer LSTM_2 will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU
WARNING:tensorflow:Layer LSTM_2 will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU
WARNING:tensorflow:Layer LSTM_2 will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU
WARNING:tensorflow:Layer LSTM_2 will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU
WARNING:tensorflow:Layer LSTM_2 will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU
---------------------------------------------------------------------------
InvalidArgumentError                      Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in _create_c_op(graph, node_def, inputs, control_inputs, op_def)
   1819   try:
-> 1820     c_op = pywrap_tf_session.TF_FinishOperation(op_desc)
   1821   except errors.InvalidArgumentError as e:

InvalidArgumentError: 2 errors while building NodeDef 'tf_op_layer_Maximum_2/Maximum_2' using Op<name=Maximum; signature=x:T, y:T -> z:T; attr=T:type,allowed=[DT_BFLOAT16, DT_HALF, DT_FLOAT, DT_DOUBLE, DT_UINT8, DT_INT16, DT_INT32, DT_INT64]>:
Inconsistent values for attr 'T' DT_INT32 vs. DT_INT64
Inconsistent values for attr 'T' DT_INT32 vs. DT_INT64

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
15 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in _create_c_op(graph, node_def, inputs, control_inputs, op_def)
   1821   except errors.InvalidArgumentError as e:
   1822     # Convert to ValueError for backwards compatibility.
-> 1823     raise ValueError(str(e))
   1824 
   1825   return c_op

ValueError: 2 errors while building NodeDef 'tf_op_layer_Maximum_2/Maximum_2' using Op<name=Maximum; signature=x:T, y:T -> z:T; attr=T:type,allowed=[DT_BFLOAT16, DT_HALF, DT_FLOAT, DT_DOUBLE, DT_UINT8, DT_INT16, DT_INT32, DT_INT64]>:
Inconsistent values for attr 'T' DT_INT32 vs. DT_INT64
Inconsistent values for attr 'T' DT_INT32 vs. DT_INT64
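As a hedged illustration (not the original notebook's code, which is unavailable): the Maximum op requires both inputs to share a single dtype attr 'T', so mixing int32 and int64 tensors produces this kind of failure. A minimal sketch with hypothetical tensors:

import tensorflow as tf

# Hypothetical example (assumption): two integer tensors with different dtypes.
a = tf.constant([1, 2, 3], dtype=tf.int32)
b = tf.constant([1, 2, 3], dtype=tf.int64)

# tf.maximum(a, b)  # fails: Maximum requires both inputs to have the same dtype
c = tf.maximum(tf.cast(a, tf.int64), b)  # casting to one dtype avoids the mismatch
print(c.dtype)  # int64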

closed time in a month

neilteng

issue opened keras-team/autokeras

Support for dictionary-like tf tensor input from tf.data

Feature Description

People may use TFRecords for distributed training or training on large datasets. To this end, it would be nice to allow dictionary-like tensor input from the tf.data API.

Also note that StructuredDataClassifier doesn't support tf.data yet.

Code Example

Like the input format here: https://www.tensorflow.org/api_docs/python/tf/data/experimental/make_batched_features_dataset

{
  "age": [[0], [-1]],
  "gender": [["f"], ["f"]],
  "kws": SparseTensor(
    indices=[[0, 0], [0, 1], [1, 0]],
    values=["code", "art", "sports"]
    dense_shape=[2, 2]),
}

Solution

<!--- This should be similar to reading from csv or pandas which has column name and types. -->

created time in 2 months

issue comment tensorflow/tensorflow

2 errors while building NodeDef 'tf_op_layer_Maximum_2/Maximum_2' using Op<name=Maximum; signature=x:T, y:T -> z:T ...Inconsistent values for attr 'T' DT_INT32 vs. DT_INT64

@neilteng Were you able to reproduce the error with a simpler example?

I think I accidentally deleted the notebook, and I cannot recover it. If we are lucky and you guys have a copy, I will work on it again. Otherwise, I can only close this issue for now.

neilteng

comment created time in 2 months

PR closed lyhue1991/eat_tensorflow2_in_30_days

NaN indicator is actually a category column

The NaN indicator column is actually an indicator_column instead of a numeric column.
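For context on the change, a hedged sketch (the column name is hypothetical, not taken from the book's notebook): a NaN-indicator feature is a 0/1 flag, so it belongs in an indicator_column rather than a numeric_column:

import tensorflow as tf

# Hypothetical NaN-indicator feature named "age_isnan" taking values 0 or 1.
# Treating it as numeric would be:
#   age_isnan_numeric = tf.feature_column.numeric_column("age_isnan")

# The PR's point: it is categorical, so wrap it as an indicator column instead.
age_isnan_cat = tf.feature_column.categorical_column_with_identity(
    "age_isnan", num_buckets=2)
age_isnan = tf.feature_column.indicator_column(age_isnan_cat)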

+9 -2

0 comment

1 changed file

neilteng

pr closed time in 2 months

PR opened lyhue1991/eat_tensorflow2_in_30_days

Fix the input of curl method

Guys, you forgot to change the curl command to the right input. This is the same input as in the official TF tutorial.

+2 -2

0 comment

1 changed file

pr created time in 2 months

push event neilteng/eat_tensorflow2_in_30_days

Zhiyong

commit sha fbb8a20469553ef50f47881364b05198eb4710b5

Fix the input of curl method Guys, you forgot to change the curl command to the right input. This is the same input as in the official TF tutorial.


push time in 2 months

PR opened lyhue1991/eat_tensorflow2_in_30_days

Rephrase

"increase process" is ambiguous.

+2 -2

0 comment

1 changed file

pr created time in 2 months

push event neilteng/eat_tensorflow2_in_30_days

Zhiyong

commit sha a99a53de8d7fdd1cef5925878f2786936996187c

Rephrase "increase process" is ambiguous.


push time in 2 months

push event neilteng/eat_tensorflow2_in_30_days

Zhiyong

commit sha 50257d449b22257ee0068e82e2126f5b949ac40d

typo


push time in 2 months

issue opened lyhue1991/eat_tensorflow2_in_30_days

Why can we use the parameter "input_shape = (2,)", which is undefined in __init__?

In 4-3, this is not a bug but just a question: I don't understand why we can call model.add(Linear(units = 1, input_shape = (2,))) when "input_shape" is not a parameter of the __init__ method.


import tensorflow as tf
from tensorflow.keras import layers, models

class Linear(layers.Layer):
    def __init__(self, units=32, **kwargs):
#         super(Linear, self).__init__(**kwargs)
        super().__init__(**kwargs)
        self.units = units
    
    # The trainable parameters are defined in the build method.
    # Since we do not need input_shape outside of the build method,
    # we do not need to store it in the __init__ method.
    def build(self, input_shape): 
        self.w = self.add_weight("w",shape=(input_shape[-1], self.units),
                                 initializer='random_normal',
                                 trainable=True) # Parameter named "w" is compulsory or an error will be thrown out
        self.b = self.add_weight("b",shape=(self.units,),
                                 initializer='random_normal',
                                 trainable=True)
        super().build(input_shape) # Identical to self.built = True

    # The logic of forward propagation is defined in call method, and is called by __call__ method
    @tf.function
    def call(self, inputs): 
        return tf.matmul(inputs, self.w) + self.b
    
    # Use a customized get_config method to save the model in h5 format, specifically for models composed through the functional API with customized layers
    def get_config(self):  
        config = super().get_config()
        config.update({'units': self.units})
        return config

tf.keras.backend.clear_session()

model = models.Sequential()
# Note: the input_shape here will be modified by the model, so we don't have to fill None in the dimension representing the number of samples.
model.add(Linear(units = 1,input_shape = (2,)))  
print("model.input_shape: ",model.input_shape)
print("model.output_shape: ",model.output_shape)
model.summary()

created time in 2 months

push event neilteng/eat_tensorflow2_in_30_days

Zhiyong

commit sha c2c5cdcf430d6f63df06782c4aaf98232c4df1f0

typo


push time in 2 months

PR opened lyhue1991/eat_tensorflow2_in_30_days

NaN indicator col is a categorical col instead.

The NaN indicator column is a categorical column instead. Same change as I made on the English version.

+9 -2

0 comment

1 changed file

pr created time in 2 months

push event neilteng/eat_tensorflow2_in_30_days

Zhiyong

commit sha 0492ef8bf84a4bfd6fe5957bd8a2385ae86e7264

nan indicator col is categorical col instead. The NaN indicator column is a categorical column instead. Same change as I did on the English version.


push time in 2 months

PR opened lyhue1991/eat_tensorflow2_in_30_days

NaN indicator is actually a category column

The NaN indicator column is actually an indicator_column instead of a numeric column.

+9 -2

0 comment

1 changed file

pr created time in 2 months

push event neilteng/eat_tensorflow2_in_30_days

Zhiyong

commit sha 7d9b87041923b4f0aedbbd5ccd3e3f7d242649b7

nan indicator is actually Category column: the NaN indicator col is actually an indicator_column instead of a numeric column.


push time in 2 months

issue opened lyhue1991/eat_tensorflow2_in_30_days

Suggest a virtual environment.

I would suggest a virtual environment for this tutorial.

For one thing, it decouples changes in new releases of TF from the development environment we use. It also saves readers the effort of figuring out missing packages; e.g., when I run 5-1, it tells me I am missing the package pillow.

Best, Neil

created time in 2 months

pull request comment lyhue1991/eat_tensorflow2_in_30_days

Inform that there is an explanation in the next section

Agreed, I think adding it to the Chinese version would also help!

neilteng

comment created time in 2 months

PR opened lyhue1991/eat_tensorflow2_in_30_days

Inform that there is an explanation in the next section

Inform readers that there is an explanation in the next section.

When I was reading this section without looking into the next section, I googled for the mechanisms behind these rules, which turned out to be wasted effort.

This is probably because we do not have a title for each English section, and I am not sure whether people reading the Chinese version find it clear. This modification is just to better chain the chapters.

+1 -1

0 comment

1 changed file

pr created time in 2 months

push event neilteng/eat_tensorflow2_in_30_days

Zhiyong

commit sha 31f72e5c2328e0d27728c8a5b0379a53e451ac9f

Inform that there is an explanation in the next section. Inform the readers that there is an explanation in the next section. When I was reading this section without looking into the next section, I googled for the mechanisms behind these rules and it cost me wasted effort. This is probably because we do not have a title for each English section and I am not sure whether people reading the Chinese version find it clear.


push time in 2 months

issue closed lyhue1991/eat_tensorflow2_in_30_days

Would the author consider writing about the tft data pipeline?

TF2 has added extension support for TFX, and I think tft is the part most likely to be used in engineering practice. Would the author consider adding a tutorial for this part?

A typical scenario: its integration with Apache Beam lets us unify the data pipelines for offline training and online serving, so our model only needs to consume the data the pipeline provides.

closed time in 2 months

neilteng

pull request comment lyhue1991/eat_tensorflow2_in_30_days

Typo fix, and add an equivalence for what Ellipsis represents

@lyhue1991 @nbwuzhe Sorry about that, just look at the last commit. I edited directly in the web UI, so there may be a few extra commits. I think cherry-picking the last one should be enough?

neilteng

comment created time in 2 months

pull request comment lyhue1991/eat_tensorflow2_in_30_days

Typo fix, and add an equivalence for what Ellipsis represents

Add an equivalence for what Ellipsis represents, to better understand the behavior of Ellipsis ...

neilteng

comment created time in 2 months

PR opened lyhue1991/eat_tensorflow2_in_30_days

Typo fix, and add an equivalence for what Ellipsis represents

Add an equivalence for what Ellipsis represents, to better understand the behavior of Ellipsis ...
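As a small sketch of the kind of equivalence being added (my own illustration, not the exact text of the PR): for a 3-D tensor, ... stands in for all the omitted leading slices:

import tensorflow as tf

t = tf.reshape(tf.range(24), (2, 3, 4))

# For this 3-D tensor, the Ellipsis form and the fully written form are equivalent.
a = t[..., 1]    # shorthand
b = t[:, :, 1]   # explicit
print(bool(tf.reduce_all(a == b)))  # True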

+6 -2

0 comment

1 changed file

pr created time in 2 months

push event neilteng/eat_tensorflow2_in_30_days

Zhiyong

commit sha 9b129ce7567df629d440e2d062f71805a265e972

Typo fix, and add an equivalence for what Ellipsis represents. Add an equivalence for what Ellipsis represents, to better understand the behavior of Ellipsis ...


push time in 2 months

push event neilteng/eat_tensorflow2_in_30_days

Zhiyong

commit sha 4286d4a6646128cd88e1bb2788360d8238ee7359

Typo fix, and add an equivalence for what Ellipsis represents. Add an equivalence for what Ellipsis represents, to better understand the behavior of Ellipsis ...


push time in 2 months

push event neilteng/eat_tensorflow2_in_30_days

Zhiyong

commit sha dcd6fbc0fa2911f0f9f4ca610a858f275d4579de

Typo fix, and add an equivalence for what Ellipsis represents. Add an equivalence for what Ellipsis represents, to better understand the behavior of ...


push time in 2 months

fork neilteng/ML-CNN-Verification-Code-Recognition

This repo is about verification code recognition based on CNN. It introduces a new way to obtain a training dataset and a recognition algorithm based on CNN.

fork in 2 months

push event neilteng/eat_tensorflow2_in_30_days

Zhiyong

commit sha 5b07b2ba836209c01a09d3cb1f35526a47830892

typo on chapter3.md


push time in 2 months

fork neilteng/eat_tensorflow2_in_30_days

Tensorflow2.0 🍎🍊 is delicious, just eat it! 😋😋

fork in 2 months

fork neilteng/backtobackswe

Legacy Code Examples For Back To Back SWE Lessons

fork in 2 months

started bephrem1/backtobackswe

started time in 2 months

fork neilteng/InceptionTime

InceptionTime: Finding AlexNet for Time Series Classification

fork in 2 months

started hfawaz/InceptionTime

started time in 2 months

fork neilteng/dl-4-tsc

Deep Learning for Time Series Classification

fork in 2 months

issue comment tensorflow/tensorflow

tf.estimator.BoostedTreesClassifier does not support multi-class: AttributeError: 'NoneType' object has no attribute 'is_compatible_with'

@jvishnuvardhan, the standalone code linked above already changes the label.

Yes, the code linked has been changed. Thank you rsk7.

neilteng

comment created time in 2 months

issue opened tensorflow/tensorflow

Design a generic type Python API for the hparams plugin

System information

  • TensorFlow version (you are using): tf 2.2.0
  • Are you willing to contribute it (Yes/No): No

Describe the feature and the current behavior/state. While following this tutorial to do hyperparameter tuning, I cannot choose from a list-type object: https://www.tensorflow.org/tensorboard/hyperparameter_tuning_with_hparams

HP_NUM_UNITS = hp.HParam('num_units', hp.Discrete([[256, 512, 256, 128,64],[256, 512, 1024, 512, 256, 128],[256, 512, 1024, 512, 256, 128, 64, 32]]))

ValueError: Unknown dtype: <class 'list'>

I am playing with the canned estimator and tuning the 'dnn_hidden_units' hyperparameter with 'from tensorboard.plugins.hparams import api as hp', but it seems that I cannot tune 'dnn_hidden_units' with this library.

    estimator = tf.estimator.DNNLinearCombinedClassifier(
        # wide settings
        linear_feature_columns=feature_columns,
        linear_optimizer=FtrlwithParams,
        # deep settings
        dnn_feature_columns=feature_columns,
        dnn_hidden_units=[256, 512, 256, 128, 64],
#         dnn_hidden_units=[1000, 500,100],
        dnn_optimizer=AdamWithParams,
        batch_norm=True,
        dnn_dropout=0.5,
        n_classes=NUM_LABEL,
        config=RUN_CONFIG,
        # warm-start settings
        warm_start_from=MODEL_DIR
    )

Will this change the current API? How? No. Maybe we can add a new API or a generic API.

Who will benefit from this feature? Anyone who needs to do hyperparameter tuning.

I expect that people build models with list-like parameters, so I think this is a common feature.
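As a possible workaround until such an API exists (a sketch under my own assumptions, not an official hparams feature): encode each candidate list as a string key in hp.Discrete and map it back to the actual list when building the estimator:

from tensorboard.plugins.hparams import api as hp

# Hypothetical encoding: candidate hidden-unit layouts keyed by strings,
# since hp.Discrete only accepts bool/int/float/str values.
HIDDEN_UNIT_CHOICES = {
    "small": [256, 512, 256, 128, 64],
    "medium": [256, 512, 1024, 512, 256, 128],
    "large": [256, 512, 1024, 512, 256, 128, 64, 32],
}
HP_HIDDEN_UNITS = hp.HParam("dnn_hidden_units",
                            hp.Discrete(sorted(HIDDEN_UNIT_CHOICES)))

# At trial time, look the actual list back up before constructing the estimator:
# dnn_hidden_units = HIDDEN_UNIT_CHOICES[hparams[HP_HIDDEN_UNITS]]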

created time in 2 months

started hfawaz/dl-4-tsc

started time in 2 months

issue opened tensorflow/tensorflow

tf.estimator.BoostedTreesClassifier does not support multi-class: AttributeError: 'NoneType' object has no attribute 'is_compatible_with'

System information: I am using Colab to reproduce the issue; the ipynb is attached below.

You can collect some of this information using our environment capture. tf.version.GIT_VERSION: v2.2.0-0-g2b96f3662b; tf.version.VERSION: 2.2.0

Describe the current behavior: cannot train tf.estimator.BoostedTreesClassifier on multi-class data.

Describe the expected behavior: change the last 100 samples' label to a third class in the following tutorial: https://www.tensorflow.org/tutorials/estimator/boosted_trees#train_and_evaluate_the_model

Standalone code to reproduce the issue https://colab.research.google.com/drive/13vl2mV7C_62HxKCw7-uuMp5WMm_OxYGL?usp=sharing
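A hedged, self-contained sketch approximating the modification described above (following the linked boosted_trees tutorial; the exact preprocessing below is a simplified assumption, not the notebook's code):

import pandas as pd
import tensorflow as tf

# Load the titanic data used by the linked tutorial.
dftrain = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/train.csv')
y_train = dftrain.pop('survived')
y_train.iloc[-100:] = 2  # relabel the last 100 samples as a third class

fc = tf.feature_column
CATEGORICAL = ['sex', 'n_siblings_spouses', 'parch', 'class', 'deck', 'embark_town', 'alone']
NUMERIC = ['age', 'fare']
feature_columns = (
    [fc.indicator_column(fc.categorical_column_with_vocabulary_list(
        name, dftrain[name].unique())) for name in CATEGORICAL] +
    [fc.numeric_column(name, dtype=tf.float32) for name in NUMERIC])

def train_input_fn():
    ds = tf.data.Dataset.from_tensor_slices((dict(dftrain), y_train))
    return ds.batch(len(y_train)).repeat()

est = tf.estimator.BoostedTreesClassifier(
    feature_columns, n_batches_per_layer=1, n_classes=3)
est.train(train_input_fn, max_steps=100)  # fails with the AttributeError logged below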

Other info / logs Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.

The error is shown in the last cell of the Colab notebook.

INFO:tensorflow:Using default config.
WARNING:tensorflow:Using temporary folder as model directory: /tmp/tmpf3g1hc6c
INFO:tensorflow:Using config: {'_model_dir': '/tmp/tmpf3g1hc6c', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': allow_soft_placement: true
graph_options {
  rewrite_options {
    meta_optimizer_iterations: ONE
  }
}
, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_service': None, '_cluster_spec': ClusterSpec({}), '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/canned/boosted_trees.py:398: VocabularyListCategoricalColumn._num_buckets (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.
Instructions for updating:
The old _FeatureColumn APIs are being deprecated. Please use the new FeatureColumn APIs instead.
INFO:tensorflow:Calling model_fn.
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-35-828fa5064808> in <module>()
      8 # The model will stop training once the specified number of trees is built, not
      9 # based on the number of steps.
---> 10 est.train(train_input_fn, max_steps=100)
     11 
     12 # Eval.

13 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/parallel_for/gradients.py in batch_jacobian(output, inp, use_pfor, parallel_iterations)
    111   """
    112   output_shape = output.shape
--> 113   if not output_shape[0].is_compatible_with(inp.shape[0]):
    114     raise ValueError("Need first dimension of output shape (%s) and inp shape "
    115                      "(%s) to match." % (output.shape, inp.shape))

AttributeError: 'NoneType' object has no attribute 'is_compatible_with'

created time in 2 months

started ray-project/ray

started time in 2 months

started modin-project/modin

started time in 2 months

started dask/dask

started time in 2 months

started VowpalWabbit/vowpal_wabbit

started time in 2 months

issue opened tensorflow/tensorflow

2 errors while building NodeDef 'tf_op_layer_Maximum_2/Maximum_2' using Op<name=Maximum; signature=x:T, y:T -> z:T ...Inconsistent values for attr 'T' DT_INT32 vs. DT_INT64

System information: I am using Colab to reproduce the issue; the ipynb is attached below.

You can collect some of this information using our environment capture. tf.version.GIT_VERSION: v1.12.1-32511-g2cc80a74f2; tf.version.VERSION: 2.3.0-dev20200525

Describe the current behavior: cannot load the saved TF model.

Describe the expected behavior: successfully save the model and serve it like this example: https://github.com/tensorflow/transform/blob/master/examples/census_example_v2_test.py

Standalone code to reproduce the issue: https://drive.google.com/open?id=1h2QIX_QZetIzSuG0J6lNWkHoSa2nnIyS

Other info / logs: Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.

The error is shown in the last cell of the Colab notebook.

WARNING:tensorflow:Layer LSTM_1 will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU
WARNING:tensorflow:Layer LSTM_1 will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU
WARNING:tensorflow:Layer LSTM_1 will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU
WARNING:tensorflow:Layer LSTM_1 will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU
WARNING:tensorflow:Layer LSTM_1 will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU
WARNING:tensorflow:Layer LSTM_1 will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU
WARNING:tensorflow:Layer LSTM_2 will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU
WARNING:tensorflow:Layer LSTM_2 will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU
WARNING:tensorflow:Layer LSTM_2 will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU
WARNING:tensorflow:Layer LSTM_2 will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU
WARNING:tensorflow:Layer LSTM_2 will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU
WARNING:tensorflow:Layer LSTM_2 will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU
---------------------------------------------------------------------------
InvalidArgumentError                      Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in _create_c_op(graph, node_def, inputs, control_inputs, op_def)
   1819   try:
-> 1820     c_op = pywrap_tf_session.TF_FinishOperation(op_desc)
   1821   except errors.InvalidArgumentError as e:

InvalidArgumentError: 2 errors while building NodeDef 'tf_op_layer_Maximum_2/Maximum_2' using Op<name=Maximum; signature=x:T, y:T -> z:T; attr=T:type,allowed=[DT_BFLOAT16, DT_HALF, DT_FLOAT, DT_DOUBLE, DT_UINT8, DT_INT16, DT_INT32, DT_INT64]>:
Inconsistent values for attr 'T' DT_INT32 vs. DT_INT64
Inconsistent values for attr 'T' DT_INT32 vs. DT_INT64

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
15 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in _create_c_op(graph, node_def, inputs, control_inputs, op_def)
   1821   except errors.InvalidArgumentError as e:
   1822     # Convert to ValueError for backwards compatibility.
-> 1823     raise ValueError(str(e))
   1824 
   1825   return c_op

ValueError: 2 errors while building NodeDef 'tf_op_layer_Maximum_2/Maximum_2' using Op<name=Maximum; signature=x:T, y:T -> z:T; attr=T:type,allowed=[DT_BFLOAT16, DT_HALF, DT_FLOAT, DT_DOUBLE, DT_UINT8, DT_INT16, DT_INT32, DT_INT64]>:
Inconsistent values for attr 'T' DT_INT32 vs. DT_INT64
Inconsistent values for attr 'T' DT_INT32 vs. DT_INT64

created time in 3 months
