TFLite: C++/Java: experimental kernel ctc_beam_search_decoder always returns a buffer of length = expected length + 1

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 20.04
  • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: Pixel 2
  • TensorFlow installed from (source or binary): source
  • TensorFlow version: master
  • Python version: 3.8
  • Installed using virtualenv? pip? conda?: pip
  • Bazel version (if compiling from source): 3.1.0
  • GCC/Compiler version (if compiling from source): 9.3.0
  • CUDA/cuDNN version: -
  • GPU model and memory: -
  • NDK: android-ndk-r20

Describe the current behavior

Every Java IntBuffer (TFLite Android) returned from the concrete function (decoder.tflite) contains an extra byte = 0 appended to the end of the returned dense_decoded, e.g. [11,11,4,7,8,0]. => This happens only in TFLite with the experimental kernel ctc_beam_search_decoder.cc, when the result is read from Java.

Describe the expected behavior

The dense_decoded returned from the concrete function (decoder.tflite) should be e.g. [11,11,4,7,8]. => If I use the concrete function directly in Python, it works as expected. => If I export the concrete function as decoder.tflite and load it directly in Python, it also works as expected.
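
For reference, a minimal sketch of the direct-Python check mentioned above. The input shape, the random test data, and the assumption that dense_decoded is the model's first output are placeholders, not taken from the report:

import numpy as np
import tensorflow as tf

# Hypothetical input of shape (batch, timesteps, num_classes); adjust to the real model.
logits = np.random.rand(1, 10, 30).astype(np.float32)

interpreter = tf.lite.Interpreter(model_path="decoder.tflite")
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Resize in case the converted model kept dynamic dimensions, then allocate.
interpreter.resize_tensor_input(input_details[0]["index"], logits.shape)
interpreter.allocate_tensors()

interpreter.set_tensor(input_details[0]["index"], logits)
interpreter.invoke()

dense_decoded = interpreter.get_tensor(output_details[0]["index"])
print(dense_decoded)  # expected: no trailing 0, e.g. [[11 11 4 7 8]]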

Standalone code to reproduce the issue

git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow
git checkout 6e9d916229b5aefbdcfd33cbc4b34c9f48b5e6e1
nano .tf_configure.bazelrc

Bazel config .tf_configure.bazelrc:

build --action_env PYTHON_BIN_PATH="/usr/bin/python"
build --action_env PYTHON_LIB_PATH="/usr/lib/python3/dist-packages"
build --python_path="/usr/bin/python"
build:xla --define with_xla_support=true
build:opt --copt=-march=native
build:opt --copt=-Wno-sign-compare
build:opt --host_copt=-march=native
build:opt --define with_default_optimizations=true
build --action_env ANDROID_NDK_HOME="CHANGE_TO_YOUR_ANDROID_NDK_HOME"
build --action_env ANDROID_NDK_API_LEVEL="21"
build --action_env ANDROID_BUILD_TOOLS_VERSION="28.0.0"
build --action_env ANDROID_SDK_API_LEVEL="23"
build --action_env ANDROID_SDK_HOME="CHANGE_TO_YOUR_ANDROID_SDK_HOME"
test --flaky_test_attempts=3
test --test_size_filters=small,medium
test --test_tag_filters=-benchmark-test,-no_oss,-oss_serial
test --build_tag_filters=-benchmark-test,-no_oss
test --test_tag_filters=-gpu
test --build_tag_filters=-gpu
build --action_env TF_CONFIGURE_IOS="0"

And compile with

bazel build --cxxopt='--std=c++14' -c opt --fat_apk_cpu=arm64-v8a,armeabi-v7a --config=monolithic \
  --host_crosstool_top=@bazel_tools//tools/cpp:toolchain \
  //tensorflow/lite/java:tensorflow-lite \
  //tensorflow/lite/java:tensorflow-lite-gpu \
  //tensorflow/lite/delegates/flex:delegate \
  //tensorflow/lite/experimental/kernels:ctc_beam_search_decoder_op \
  //tmp:tensorflow-lite-select-tf-ops

Concrete function exported to decoder.tflite:

import tensorflow as tf

@tf.function
def decode(logits, top_paths=3, beam_width=3):
    batch_size_current, timesteps, _ = tf.shape(input=logits)
    seq_len = tf.fill([batch_size_current], timesteps)
    # ctc_beam_search_decoder expects time-major logits: (timesteps, batch, num_classes)
    logits = tf.transpose(a=logits, perm=(1, 0, 2))

    decoded, log_probabilities = tf.nn.ctc_beam_search_decoder(
        inputs=logits, top_paths=top_paths, beam_width=beam_width, sequence_length=seq_len)

    dense_decoded = tf.sparse.to_dense(decoded[0], default_value=-1)
    return dense_decoded
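
The conversion step itself is not included in the report; below is a minimal sketch of how such a concrete function is typically exported, assuming a fixed (batch, timesteps, num_classes) signature and TF select ops. The shape, file name, and converter flags are assumptions, not the reporter's actual settings:

concrete_fn = decode.get_concrete_function(
    tf.TensorSpec([1, None, 30], tf.float32))  # placeholder input signature

converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_fn])
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,  # CTCBeamSearchDecoder is not a TFLite builtin
]
converter.allow_custom_ops = True  # may be needed when relying on the experimental custom kernel

with open("decoder.tflite", "wb") as f:
    f.write(converter.convert())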

Answer from renjie-liu

Hi,

Thanks for filing the bug. I wonder, do you have the tflite model?

Also, I wonder whether this only occurs with Java usage. Have you tried the Python tflite API?
