
[RNN] Invoking the TFLite model for inference with a dynamic batch size

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
  • TensorFlow installed from (source or binary): source
  • TensorFlow version (use command below): tf-nightly(2.4.0.dev20200826)
  • Python version: 3.7.7
  • Bazel version (if compiling from source): no
  • GCC/Compiler version (if compiling from source): no
  • CUDA/cuDNN version: 7.6.5
  • GPU model and memory: GTX1050/2G

Describe the current behavior

When I invoke the TFLite model for inference with a dynamic batch size, an error occurs:

RuntimeError: tensorflow/lite/kernels/concatenation.cc:76 t->dims->data[d] != t0->dims->data[d] (13 != 1)Node number 23 (CONCATENATION) failed to prepare.
Node number 10 (WHILE) failed to invoke.

Describe the expected behavior

Inference should work with a dynamic batch size.

Standalone code to reproduce the issue

Here is the link to my Colab to reproduce the issue:

https://colab.research.google.com/drive/13fr-C53JjRxIKFC9d96H9iwwexkGeH2O?usp=sharing

Also here is the code segment where the issue occurred:

# Run the model with TensorFlow Lite
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_content=tflite_model)

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

PREDICT_BATCH_SIZE = 13

# Resize the input tensor to the desired batch size, then reallocate.
interpreter.resize_tensor_input(input_details[0]['index'], (PREDICT_BATCH_SIZE, 28, 28))
interpreter.allocate_tensors()

# x_test is defined earlier in the notebook; its dtype must match
# input_details[0]['dtype'].
interpreter.set_tensor(input_details[0]["index"], x_test[0:PREDICT_BATCH_SIZE, :, :])
interpreter.invoke()
result = interpreter.get_tensor(output_details[0]["index"])

print(result)

interpreter.reset_all_variables()

Here is the error that occurred:

RuntimeError: tensorflow/lite/kernels/concatenation.cc:76 t->dims->data[d] != t0->dims->data[d] (13 != 1)Node number 23 (CONCATENATION) failed to prepare.
Node number 10 (WHILE) failed to invoke.

Other info / logs (the full traceback):

RuntimeError                              Traceback (most recent call last)
<ipython-input-28-6198c0bfcdb3> in <module>()
     13 
     14 interpreter.set_tensor(input_details[0]["index"], x_test[0:PREDICT_BATCH_SIZE, :, :])
---> 15 interpreter.invoke()
     16 result = interpreter.get_tensor(output_details[0]["index"])
     17 

/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/interpreter.py in invoke(self)
    537     """
    538     self._ensure_safe()
--> 539     self._interpreter.Invoke()
    540 
    541   def reset_all_variables(self):

RuntimeError: tensorflow/lite/kernels/concatenation.cc:76 t->dims->data[d] != t0->dims->data[d] (13 != 1)Node number 23 (CONCATENATION) failed to prepare.
Node number 10 (WHILE) failed to invoke.

Answer from renjie-liu

Can you try setting the batch size to 1, then running inference one sample at a time?
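The batch-size-1 workaround suggested above can be sketched as a loop that feeds one sample at a time and stacks the per-sample outputs. Here, `run_one` is a hypothetical stand-in for a single-sample interpreter call (the `set_tensor` / `invoke` / `get_tensor` sequence from the repro code); only the looping pattern is the point, not a definitive implementation.

```python
import numpy as np

def infer_one_by_one(run_one, batch):
    """Run a model fixed at batch size 1 over each sample and stack results.

    run_one: callable mapping one (1, 28, 28) input to one output row.
    batch:   array of shape (N, 28, 28).
    """
    # Add a leading batch axis of 1 to each sample, invoke, then stack.
    outputs = [run_one(sample[np.newaxis, ...]) for sample in batch]
    return np.concatenate(outputs, axis=0)

# With a real TFLite interpreter, run_one would look roughly like:
# def run_one(sample):
#     interpreter.set_tensor(input_details[0]["index"], sample.astype(np.float32))
#     interpreter.invoke()
#     return interpreter.get_tensor(output_details[0]["index"])
```

As the discussion below notes, this trades correctness for latency: each sample pays a full invoke, which is exactly the serial cost the reporter wants to avoid.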

Thank you for your answer. I also tried this method, but calling inference in a loop causes serious latency. Mobile devices have strict speed requirements, and we hope samples can be placed in one batch for parallel computation to reduce inference time.

The TFLite rnn/lstm kernel is stateful (the states are maintained internally), so it's hard to change batch_size at inference time.

If you're fine with the binary size, maybe it's possible to ship multiple models with different batch sizes.
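The multiple-models idea can be sketched as keeping one converted model per supported batch size and, for a request of n samples, picking the smallest supported size that fits and zero-padding the batch up to it. The padding step and both helper names are my assumptions, not part of the answer above; this is a minimal, model-free sketch of the selection and padding logic only.

```python
import numpy as np

def pick_batch_size(supported_sizes, n):
    """Return the smallest supported batch size that can hold n samples."""
    fitting = [s for s in sorted(supported_sizes) if s >= n]
    if not fitting:
        raise ValueError(f"no supported batch size >= {n}")
    return fitting[0]

def pad_batch(batch, target_size):
    """Zero-pad a batch of shape (n, ...) up to (target_size, ...)."""
    n = batch.shape[0]
    pad = np.zeros((target_size - n,) + batch.shape[1:], dtype=batch.dtype)
    return np.concatenate([batch, pad], axis=0)
```

In use, you would convert the same model several times with fixed batch sizes (e.g. 1, 8, 32), keep one interpreter per size, run the padded batch through the chosen interpreter, and slice the first n output rows; the padded rows are discarded.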
