
shendaw/CRNN-Keras 0

CRNN (CNN+RNN) for OCR using Keras / License Plate Recognition

shendaw/examples 0

TensorFlow examples

shendaw/xiaoshen 0

flask-study

started PaddlePaddle/PaddleSeg

started time in a day

issue closed tensorflow/tensorflow

[RNN] Invoke the tflite model for inference with dynamic batchsize

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows10
  • TensorFlow installed from (source or binary): source
  • TensorFlow version (use command below): tf-nightly(2.4.0.dev20200826)
  • Python version: 3.7.7
  • Bazel version (if compiling from source): no
  • GCC/Compiler version (if compiling from source): no
  • CUDA/cuDNN version: 7.6.5
  • GPU model and memory: GTX1050/2G

Describe the current behavior

When I invoke the tflite model for inference with a dynamic batch size, the following error occurs:

RuntimeError: tensorflow/lite/kernels/concatenation.cc:76 t->dims->data[d] != t0->dims->data[d] (13 != 1)Node number 23 (CONCATENATION) failed to prepare.
Node number 10 (WHILE) failed to invoke.

Describe the expected behavior

Inference succeeds with a dynamic batch size.

Standalone code to reproduce the issue

Here is the link to my Colab to reproduce the issue:

https://colab.research.google.com/drive/13fr-C53JjRxIKFC9d96H9iwwexkGeH2O?usp=sharing

Here is also the code segment where the issue occurred:

# Run the model with TensorFlow Lite
# (tflite_model and x_test are defined in earlier cells of the Colab)
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_content=tflite_model)

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

PREDICT_BATCH_SIZE = 13

# Resize the input tensor to the desired batch size before allocating tensors
interpreter.resize_tensor_input(input_details[0]['index'], (PREDICT_BATCH_SIZE,28,28))
interpreter.allocate_tensors()

interpreter.set_tensor(input_details[0]["index"], x_test[0:PREDICT_BATCH_SIZE, :, :])
interpreter.invoke()
result = interpreter.get_tensor(output_details[0]["index"])

print(result)

interpreter.reset_all_variables()
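A quick sanity check (not part of the original snippet, and assuming a TF 2.x interpreter): get_input_details() also reports a shape_signature field, and a -1 in the batch position shows whether the converter actually kept that dimension dynamic.

# Inspect the converted model's input: "shape" is the currently allocated shape,
# while "shape_signature" keeps -1 for dimensions that are still dynamic.
details = interpreter.get_input_details()[0]
print("shape:          ", details["shape"])
print("shape_signature:", details["shape_signature"])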

Here is the error that occurred:

RuntimeError: tensorflow/lite/kernels/concatenation.cc:76 t->dims->data[d] != t0->dims->data[d] (13 != 1)Node number 23 (CONCATENATION) failed to prepare.
Node number 10 (WHILE) failed to invoke.

Other info / logs (the full error):

RuntimeError                              Traceback (most recent call last)
<ipython-input-28-6198c0bfcdb3> in <module>()
     13 
     14 interpreter.set_tensor(input_details[0]["index"], x_test[0:PREDICT_BATCH_SIZE, :, :])
---> 15 interpreter.invoke()
     16 result = interpreter.get_tensor(output_details[0]["index"])
     17 

/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/interpreter.py in invoke(self)
    537     """
    538     self._ensure_safe()
--> 539     self._interpreter.Invoke()
    540 
    541   def reset_all_variables(self):

RuntimeError: tensorflow/lite/kernels/concatenation.cc:76 t->dims->data[d] != t0->dims->data[d] (13 != 1)Node number 23 (CONCATENATION) failed to prepare.
Node number 10 (WHILE) failed to invoke.

closed time in a month

shendaw

issue comment tensorflow/tensorflow

[RNN] Invoke the tflite model for inference with dynamic batchsize

Can you try setting the batch size to 1, then running inference one by one?
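A minimal sketch of that suggestion (assuming the model is left at its original batch dimension of 1; interpreter, input_details, output_details, x_test and PREDICT_BATCH_SIZE are the names from the snippet in the report above):

import numpy as np

# Put the input back to a single-sample batch, then run the samples one by one.
interpreter.resize_tensor_input(input_details[0]["index"], (1, 28, 28))
interpreter.allocate_tensors()

results = []
for i in range(PREDICT_BATCH_SIZE):
    sample = x_test[i:i + 1].astype(np.float32)  # shape (1, 28, 28)
    interpreter.set_tensor(input_details[0]["index"], sample)
    interpreter.invoke()
    results.append(interpreter.get_tensor(output_details[0]["index"]))
    interpreter.reset_all_variables()  # clear the LSTM state between independent samples
results = np.concatenate(results, axis=0)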

Thank you for your answer. I have also tried this method, but calling inference in a loop like that causes serious latency. Mobile deployments are highly speed-sensitive, and we would like to place the items in one batch so they can be computed in parallel and reduce inference time.

The TFLite RNN/LSTM kernel is stateful (the states are maintained internally), so it's hard to change batch_size at inference time. If you're fine with the extra binary size, it may be possible to ship multiple models with different batch_size values.
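A rough sketch of the multiple-models idea (illustrative only: it assumes a Keras model like the one in the Colab, and the helper name convert_with_fixed_batch is made up here). Each conversion pins the batch size through a tf.function input signature, and at runtime you load the blob whose batch size matches:

import tensorflow as tf

def convert_with_fixed_batch(model, batch_size, time_steps=28, features=28):
    # Wrap the Keras model in a tf.function whose input signature fixes the batch size.
    @tf.function(input_signature=[tf.TensorSpec([batch_size, time_steps, features], tf.float32)])
    def serve(x):
        return model(x)

    converter = tf.lite.TFLiteConverter.from_concrete_functions([serve.get_concrete_function()])
    return converter.convert()

# One .tflite blob per batch size expected at runtime; pick the matching one before invoking.
tflite_models = {bs: convert_with_fixed_batch(model, bs) for bs in (1, 4, 13)}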

I see, thank you for your answer. Since TF supports a dynamic batch size at inference time, does TFLite have any plans to support a dynamic batch size for inference as well?

TFLite hasn't modeled resource variables yet, so we don't have any near-term plan. Sorry about that.

OK, I see. Thank you very much for your answers and help!

shendaw

comment created time in a month

issue comment tensorflow/tensorflow

[RNN] Invoke the tflite model for inference with dynamic batchsize

Can you try setting the batch size to 1, then running inference one by one?

Thank you for your answer. I have also tried this method, but calling inference in a loop like that causes serious latency. Mobile deployments are highly speed-sensitive, and we would like to place the items in one batch so they can be computed in parallel and reduce inference time.

The TFLite RNN/LSTM kernel is stateful (the states are maintained internally), so it's hard to change batch_size at inference time.

If you're fine with the extra binary size, it may be possible to ship multiple models with different batch_size values.

I see, thank you for your answer. Since TF supports a dynamic batch size at inference time, does TFLite have any plans to support a dynamic batch size for inference as well?

shendaw

comment created time in a month

issue comment tensorflow/tensorflow

[RNN] Invoke the tflite model for inference with dynamic batchsize

Can you try setting the batch size to 1?

Then run inference one by one?

Thank you for your answer. I have also tried this method, but calling inference in a loop like that causes serious latency. Mobile deployments are highly speed-sensitive, and we would like to place the items in one batch so they can be computed in parallel and reduce inference time.

shendaw

comment created time in a month

issue comment tensorflow/tensorflow

[RNN] Invoke the tflite model for inference with dynamic batchsize

How about specifying the batch size when you create a TF model?

Thank you for your reply. I have tried this: when I specify the batch size as 13 (for example) while creating my TF model, the batch size for inference must also be 13, or the same error occurs. Since the number of items in each batch is dynamic in my project, I hope the batch size can be set dynamically before inference.
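If the model has to stay at a fixed batch size, one possible workaround (not suggested in the thread; pad_to_fixed_batch is just an illustrative helper) is to zero-pad smaller batches up to that size and discard the padded rows of the output:

import numpy as np

FIXED_BATCH = 13  # the batch size the model was converted with

def pad_to_fixed_batch(batch, fixed=FIXED_BATCH):
    # Zero-pad the leading (batch) dimension up to the size the model expects.
    n = batch.shape[0]
    if n > fixed:
        raise ValueError(f"got {n} items, more than the fixed batch of {fixed}")
    pad = np.zeros((fixed - n,) + batch.shape[1:], dtype=batch.dtype)
    return np.concatenate([batch, pad], axis=0), n

padded, n = pad_to_fixed_batch(x_test[0:5].astype(np.float32))
interpreter.set_tensor(input_details[0]["index"], padded)
interpreter.invoke()
result = interpreter.get_tensor(output_details[0]["index"])[:n]  # keep only the real rows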

shendaw

comment created time in a month

issue comment tensorflow/tensorflow

[RNN] Invoke the tflite model for inference with dynamic batchsize

How about specifying the batch size when you create a TF model?

Thank you for your reply. I have tried this: when I specify the batch size as 13 (for example) while creating my TF model, the batch size for inference must also be 13, or the same error occurs. Since the number of items in each batch is dynamic in my project, I hope the batch size can be set dynamically before inference.

shendaw

comment created time in a month

issue opened tensorflow/tensorflow

[RNN] Invoke the tflite model for inference with dynamic batchsize

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows10
  • TensorFlow installed from (source or binary): source
  • TensorFlow version (use command below): tf-nightly(2.4.0.dev20200826)
  • Python version: 3.7.7
  • Bazel version (if compiling from source): no
  • GCC/Compiler version (if compiling from source): no
  • CUDA/cuDNN version: 7.6.5
  • GPU model and memory: GTX1050/2G

Describe the current behavior

When I invoke the tflite model for inference with a dynamic batch size, the following error occurs:

RuntimeError: tensorflow/lite/kernels/concatenation.cc:76 t->dims->data[d] != t0->dims->data[d] (13 != 1)Node number 23 (CONCATENATION) failed to prepare.
Node number 10 (WHILE) failed to invoke.

Describe the expected behavior

Inference succeeds with a dynamic batch size.

Standalone code to reproduce the issue

Here is the link to my Colab to reproduce the issue:

https://colab.research.google.com/drive/13fr-C53JjRxIKFC9d96H9iwwexkGeH2O?usp=sharing

Here is also the code segment where the issue occurred:

# Run the model with TensorFlow Lite
# (tflite_model and x_test are defined in earlier cells of the Colab)
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_content=tflite_model)

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

PREDICT_BATCH_SIZE = 13

# Resize the input tensor to the desired batch size before allocating tensors
interpreter.resize_tensor_input(input_details[0]['index'], (PREDICT_BATCH_SIZE,28,28))
interpreter.allocate_tensors()

interpreter.set_tensor(input_details[0]["index"], x_test[0:PREDICT_BATCH_SIZE, :, :])
interpreter.invoke()
result = interpreter.get_tensor(output_details[0]["index"])

print(result)

interpreter.reset_all_variables()

Here is the error that occurred:

RuntimeError: tensorflow/lite/kernels/concatenation.cc:76 t->dims->data[d] != t0->dims->data[d] (13 != 1)Node number 23 (CONCATENATION) failed to prepare.
Node number 10 (WHILE) failed to invoke.

Other info / logs (the full error):

RuntimeError                              Traceback (most recent call last)
<ipython-input-28-6198c0bfcdb3> in <module>()
     13 
     14 interpreter.set_tensor(input_details[0]["index"], x_test[0:PREDICT_BATCH_SIZE, :, :])
---> 15 interpreter.invoke()
     16 result = interpreter.get_tensor(output_details[0]["index"])
     17 

/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/interpreter.py in invoke(self)
    537     """
    538     self._ensure_safe()
--> 539     self._interpreter.Invoke()
    540 
    541   def reset_all_variables(self):

RuntimeError: tensorflow/lite/kernels/concatenation.cc:76 t->dims->data[d] != t0->dims->data[d] (13 != 1)Node number 23 (CONCATENATION) failed to prepare.
Node number 10 (WHILE) failed to invoke.

created time in a month

fork shendaw/CRNN-Keras

CRNN (CNN+RNN) for OCR using Keras / License Plate Recognition

fork in 2 months

push event shendaw/xiaoshen

xiao tang

commit sha bc7e17018e1d80cc597f46b74467fe6f7243507e

the 1 commit


xiao tang

commit sha 7aa98578ac8306f55263b3eed4607c26a81726f3

Merge branch 'master' of https://github.com/shendaw/xiaoshen


push time in 3 months

create branch shendaw/xiaoshen

branch: master

created branch time in 3 months

created repository shendaw/xiaoshen

flask-study

created time in 3 months
