
issue opened JaidedAI/EasyOCR

Any tutorial to train / fine-tune the model for more fonts (new dataset)?

Thanks for publishing this great EasyOCR model! I am wondering whether there is a tutorial for training EasyOCR or fine-tuning it on a custom dataset (I need to support new fonts and text rendered over complex backgrounds).

What do you think? Is there a link for that?
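For reference, here is a minimal, unofficial sketch of the kind of data generation I have in mind: rendering text with new fonts over complex backgrounds using Pillow. The font, background, and output paths are placeholders, and EasyOCR's actual training pipeline may expect a different dataset layout.

# Unofficial sketch: synthesize text-line images with custom fonts over
# real background textures. All paths below are placeholders.
import os
import random
from PIL import Image, ImageDraw, ImageFont

def make_sample(text, font_path, background_path, out_path):
    bg = Image.open(background_path).convert('RGB').resize((400, 64))
    font = ImageFont.truetype(font_path, size=32)
    draw = ImageDraw.Draw(bg)
    # Jitter the position so the model sees varied placements.
    x, y = random.randint(0, 20), random.randint(0, 10)
    draw.text((x, y), text, font=font, fill=(0, 0, 0))
    bg.save(out_path)

os.makedirs('dataset', exist_ok=True)
for i, word in enumerate(['example', 'fonts', 'training']):
    make_sample(word, 'fonts/MyNewFont.ttf', 'backgrounds/texture.jpg',
                os.path.join('dataset', f'{i}.png'))

Each generated image would still need a ground-truth label in whatever format the trainer expects.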

created time in 10 days

issue opened JaidedAI/EasyOCR

EasyOCR VS Tesseract

EasyOCR is not limited to scanned images, is it? I ask because Tesseract needs preprocessing to make non-scanned images look like scanned documents before it performs well.
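For what it's worth, EasyOCR can be pointed at an unprocessed photo directly; a minimal sketch ('photo.jpg' is a placeholder):

# Minimal sketch: EasyOCR on a natural, non-scanned photo.
# readtext returns (bounding box, text, confidence) triples.
import easyocr

reader = easyocr.Reader(['en'])        # downloads detection/recognition models on first run
results = reader.readtext('photo.jpg')
for bbox, text, confidence in results:
    print(f'{text!r} ({confidence:.2f})')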

created time in 10 days

issue comment JaidedAI/EasyOCR

Arabic Language

Hi @rkcosmos, thank you for sharing such a great OCR engine, which really does well on Arabic. I have a question about the separator (newline), though. I expected to receive lines of words matching what is extracted from the image (for English, EasyOCR returns lines very well).

For Arabic, however, I receive each word as an individual item in the result list. I checked the paragraph=True option, but that is not what I am looking for.

Is there any way to receive whole lines, as they appear in the image, for Arabic OCR?
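In the meantime, the rough grouping I am experimenting with looks like this (a sketch only; the line-matching threshold is a guess, and the bbox handling assumes readtext's four-corner box format):

# Workaround sketch: group EasyOCR word results into lines by the vertical
# midpoint of each box, then emit each line right-to-left for Arabic.
import easyocr

reader = easyocr.Reader(['ar'])
results = reader.readtext('arabic_page.jpg')   # [(bbox, text, conf), ...]

lines = []
for bbox, text, conf in sorted(results, key=lambda r: r[0][0][1]):
    y_mid = (bbox[0][1] + bbox[2][1]) / 2
    height = abs(bbox[2][1] - bbox[0][1])
    for line in lines:
        if abs(line['y'] - y_mid) < height / 2:    # same text line
            line['words'].append((bbox[0][0], text))
            break
    else:
        lines.append({'y': y_mid, 'words': [(bbox[0][0], text)]})

for line in lines:
    # Arabic reads right to left, so sort words by descending x.
    print(' '.join(t for _, t in sorted(line['words'], reverse=True)))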

Thanks

MohamedAliRashad

comment created time in a month

issue comment tensorflow/tensorflow

Android - Drawback of Fixing Error (Regular TensorFlow ops) is increasing the app size to 195MB!

@srjoglekar246 But why does the model not behave naturally? I mean the confidence scores look off: when an object the model was never trained on is visible, it still detects it as one of the trained classes with a high score :( -_-

hahmad2008

comment created time in a month

issue closed tensorflow/tensorflow

Android - Drawback of Fixing Error (Regular TensorFlow ops) is increasing the app size to 195MB!

Object detection Android TF LITE EXAMPLE

I am running the object detection Android example, which needs about 14 MB of storage on a physical phone.

My trained Object detection Android TF LITE

However, when I used my own trained object detection model (converted to TF Lite using this link), I got the following exception when the app started:

org.tensorflow.lite.examples.detection E/AndroidRuntime: FATAL EXCEPTION: inference
    Process: org.tensorflow.lite.examples.detection, PID: 5086
    java.lang.IllegalArgumentException: Internal error: Failed to run on the given Interpreter: Regular TensorFlow ops are not supported by this interpreter. Make sure you apply/link the Flex delegate before inference.
    Node number 223 (FlexSize) failed to prepare.
    
        at org.tensorflow.lite.NativeInterpreterWrapper.run(Native Method)
        at org.tensorflow.lite.NativeInterpreterWrapper.run(NativeInterpreterWrapper.java:158)
        at org.tensorflow.lite.Interpreter.runForMultipleInputsOutputs(Interpreter.java:347)
        at org.tensorflow.lite.examples.detection.tflite.TFLiteObjectDetectionAPIModel.recognizeImage(TFLiteObjectDetectionAPIModel.java:196)
        at org.tensorflow.lite.examples.detection.DetectorActivity$2.run(DetectorActivity.java:181)
        at android.os.Handler.handleCallback(Handler.java:938)
        at android.os.Handler.dispatchMessage(Handler.java:99)
        at android.os.Looper.loop(Looper.java:223)
        at android.os.HandlerThread.run(HandlerThread.java:67)
  • My tf-Lite model size: 5MB
  • The app size on the mobile is about 15 MB

To Solve This Error:

  • I included the following dependency in build.gradle (Module: app):
    // This dependency adds the necessary TF op support.
    implementation 'org.tensorflow:tensorflow-lite-select-tf-ops:0.0.0-nightly'

(screenshot of the Gradle dependency)

That solves the problem, and my object detection app works very well.

However, the app now takes 195 MB of storage on the physical phone! Does this (-tf-ops) dependency really add that much size? If so, how can I fix the previous exception without including this dependency in the Android app?

(screenshot of the app size)
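Update (resolution sketch): what eventually worked for me, as suggested later in this thread, was exporting with export_tflite_graph_tf2.py and converting with builtin ops only, so the Flex delegate and the large select-tf-ops dependency are never needed. Roughly (paths are placeholders):

# Sketch: convert the TFLite-friendly SavedModel (the output of
# export_tflite_graph_tf2.py) using builtin ops only.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model(
    'exported_model_directory/saved_model')
converter.optimizations = [tf.lite.Optimize.DEFAULT]
# No tf.lite.OpsSet.SELECT_TF_OPS here: conversion fails loudly if any op
# still needs Flex, instead of silently pulling in the big delegate.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]
tflite_model = converter.convert()

with open('detect.tflite', 'wb') as f:
    f.write(tflite_model)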

closed time in a month

hahmad2008

issue comment tensorflow/tensorflow

Android - Drawback of Fixing Error (Regular TensorFlow ops) is increasing the app size to 195MB!

@srjoglekar246 Thank you so much. It works now; the app is 15 MB 👍

hahmad2008

comment created time in a month

issue comment tensorflow/tensorflow

Android - Drawback of Fixing Error (Regular TensorFlow ops) is increasing the app size to 195MB!

@srjoglekar246

I have successfully converted the TF model, but the number of output detections, which used to be 100, is now only 10, and the results are much worse! When I previously converted the model using supported_ops the performance was really good, but now it is very bad :(
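One thing worth checking (an assumption on my part; confirm against the script's --help): export_tflite_graph_tf2.py caps the post-processing output with a --max_detections flag that defaults to 10, which would explain 100 detections dropping to 10. Something like:

# Assumed flag: --max_detections (default 10) on the TF2 export script.
python object_detection/export_tflite_graph_tf2.py \
    --pipeline_config_path path/to/ssd_model/pipeline.config \
    --trained_checkpoint_dir path/to/ssd_model/checkpoint \
    --output_directory path/to/exported_model_directory \
    --max_detections 100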

hahmad2008

comment created time in 2 months

issue comment tensorflow/models

convert TF2 ssd_mobilenet_v2 to tflite

@srjoglekar246 @sajjadaemmi I have successfully converted the TF model, but the number of output detections, which used to be 100, is now only 10, and the results are much worse! When I previously converted the model using supported_ops the performance was really good, but now it is very bad :(

sajjadaemmi

comment created time in 2 months

issue comment tensorflow/models

convert TF2 ssd_mobilenet_v2 to tflite

@srjoglekar246 The model.tflite that you sent me works in my inference code, but I still can't convert to tflite myself; my output is 500 bytes.

This is my command:

tflite_convert \
  --saved_model_dir='/inference_graph_tflite_tf2/saved_model' \
  --output_file='/tflite_tf2/model.tflite' \
  --experimental_new_converter

@sajjadaemmi I used the same script to export the inference graph (to be fed into the TF Lite converter), but after converting I got an almost empty, 500-byte TF model, just like you did! Could you please help me figure out what is wrong with this?

By the way, I used the latest version of 'nightly'.

import tensorflow as tf

saved_model_dir = 'path/saved_model'
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_quant_model = converter.convert()

sajjadaemmi

comment created time in 2 months

issue comment tensorflow/models

Using TF2.4 TFLite Model in Android App - Error

@srjoglekar246 After converting with the latest script you mentioned, I got this error

mihir-chauhan

comment created time in 2 months

issue comment tensorflow/models

convert TF2 ssd_mobilenet_v2 to tflite

Hi @srjoglekar246, I got this error after converting; check this

sajjadaemmi

comment created time in 2 months

issue comment tensorflow/tensorflow

Android - Drawback of Fixing Error (Regular TensorFlow ops) is increasing the app size to 195MB!

@srjoglekar246 I used the same script.

# From the tensorflow/models/research/ directory
python object_detection/export_tflite_graph_tf2.py \
    --pipeline_config_path path/to/ssd_model/pipeline.config \
    --trained_checkpoint_dir path/to/ssd_model/checkpoint \
    --output_directory path/to/exported_model_directory

I got a 544-byte lite model, but when I tried to test it, I got this error:

interpreter = tf.lite.Interpreter(model_content=tflite_quant_model)
interpreter.allocate_tensors()

# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Test the model on random input data.
input_shape = input_details[0]['shape']
print(input_shape)
  • Error at this line: interpreter = tf.lite.Interpreter(model_content=tflite_quant_model)
/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/interpreter.py in __init__(self, model_path, model_content, experimental_delegates, num_threads)
    206               custom_op_registerers_by_func))
    207       if not self._interpreter:
--> 208         raise ValueError('Failed to open {}'.format(model_path))
    209     elif model_content and not model_path:
    210       custom_op_registerers_by_name = [

ValueError: Did not get operators, tensors, or buffers in subgraph 0.

Any idea how to solve this?
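For context, a 544-byte .tflite is essentially an empty flatbuffer, which is exactly what the "Did not get operators, tensors, or buffers in subgraph 0" message complains about. A quick sanity-check sketch (paths are placeholders) to run before testing further:

# Sanity check: an empty flatbuffer fails immediately, while a valid
# detection model reports its input/output tensors.
import os
import tensorflow as tf

path = 'path/to/model.tflite'
print('size on disk:', os.path.getsize(path), 'bytes')   # expect megabytes, not 544

interpreter = tf.lite.Interpreter(model_path=path)
interpreter.allocate_tensors()
print('inputs:', interpreter.get_input_details())
print('outputs:', interpreter.get_output_details())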

hahmad2008

comment created time in 2 months

issue comment tensorflow/models

convert TF2 ssd_mobilenet_v2 to tflite

@mihir-chauhan, @sajjadaemmi Hi guys, could you please point me to where to start? I need to convert the model to tflite without using TF-Select ops.

sajjadaemmi

comment created time in 2 months

issue comment tensorflow/models

convert TF2 ssd_mobilenet_v2 to tflite

And use the latest exporting script in the README.

What do you mean by the latest exporting script? Do I need to update the TensorFlow source and build it myself, or did you mean the following?

# From the tensorflow/models/research/ directory
python object_detection/export_tflite_graph_tf2.py \
    --pipeline_config_path path/to/ssd_model/pipeline.config \
    --trained_checkpoint_dir path/to/ssd_model/checkpoint \
    --output_directory path/to/exported_model_directory
sajjadaemmi

comment created time in 2 months

issue comment tensorflow/models

convert TF2 ssd_mobilenet_v2 to tflite

And use the latest exporting script in the README.

This one? Or is there anything else?

# From the tensorflow/models/research/ directory
python object_detection/export_tflite_graph_tf2.py \
    --pipeline_config_path path/to/ssd_model/pipeline.config \
    --trained_checkpoint_dir path/to/ssd_model/checkpoint \
    --output_directory path/to/exported_model_directory
sajjadaemmi

comment created time in 2 months

issue comment tensorflow/models

convert TF2 ssd_mobilenet_v2 to tflite

@hahmad2008 Please take a look at the README. You will need to use the latest nightly for tflite_convert.

So it has been pushed now? Is it this nightly (tb-nightly-2.4) version? Do I only need to install the latest version of nightly?

sajjadaemmi

comment created time in 2 months

issue comment tensorflow/models

convert TF2 ssd_mobilenet_v2 to tflite

@mihir-chauhan Your post-export SavedModel seems to be correct, so it looks like there is some inconsistency with the converter.

@sajjadaemmi @hahmad2008

We had to land a change to our converter, which might not be present in your downloaded version. Could you update it to the latest nightly and check?

@srjoglekar246, as in my issue, I can convert the model to TF Lite using TF-Select ops. Can we now convert the SSD model without using TF-Select ops?

If so, could you please provide a script to convert the model, such as the one I used previously? What do I need to download or update?

sajjadaemmi

comment created time in 2 months

issue comment tensorflow/models

convert TF2 ssd_mobilenet_v2 to tflite

@srjoglekar246 Can I use the same script now?

sajjadaemmi

comment created time in 2 months

issue comment tensorflow/models

convert TF2 ssd_mobilenet_v2 to tflite

The code is in internal review; it might take a week to land.

Hello @srjoglekar246, any update?

sajjadaemmi

comment created time in 2 months

issue comment tensorflow/tensorflow

Android - Drawback of Fixing Error (Regular TensorFlow ops) is increasing the app size to 195MB!

Thanks @srjoglekar246, but where can I find that script?

hahmad2008

comment created time in 2 months

issue comment tensorflow/tensorflow

Android - Drawback of Fixing Error (Regular TensorFlow ops) is increasing the app size to 195MB!

@srjoglekar246 This is the TF-Select ops converting script I used to convert the model to TF Lite.

hahmad2008

comment created time in 2 months

issue comment tensorflow/tensorflow

Android - Drawback of Fixing Error (Regular TensorFlow ops) is increasing the app size to 195MB!

@srjoglekar246 I followed this tutorial to train object detection on a custom dataset.

  • In the tutorial, the following model is used as the backbone:

wget http://download.tensorflow.org/models/object_detection/tf2/20200711/ssd_mobilenet_v2_320x320_coco17_tpu-8.tar.gz

hahmad2008

comment created time in 2 months

issue opened tensorflow/tensorflow

Android - Drawback of Fixing Error (Regular TensorFlow ops) is increasing the app size to 195MB!

Object detection Android TF LITE EXAMPLE

I am running the object detection Android example, which needs about 14 MB of storage on a physical phone.

My trained Object detection Android TF LITE

However, when I used my own trained object detection model (converted to TF Lite using this link), I got the following exception when the app started:

org.tensorflow.lite.examples.detection E/AndroidRuntime: FATAL EXCEPTION: inference
    Process: org.tensorflow.lite.examples.detection, PID: 5086
    java.lang.IllegalArgumentException: Internal error: Failed to run on the given Interpreter: Regular TensorFlow ops are not supported by this interpreter. Make sure you apply/link the Flex delegate before inference.
    Node number 223 (FlexSize) failed to prepare.
    
        at org.tensorflow.lite.NativeInterpreterWrapper.run(Native Method)
        at org.tensorflow.lite.NativeInterpreterWrapper.run(NativeInterpreterWrapper.java:158)
        at org.tensorflow.lite.Interpreter.runForMultipleInputsOutputs(Interpreter.java:347)
        at org.tensorflow.lite.examples.detection.tflite.TFLiteObjectDetectionAPIModel.recognizeImage(TFLiteObjectDetectionAPIModel.java:196)
        at org.tensorflow.lite.examples.detection.DetectorActivity$2.run(DetectorActivity.java:181)
        at android.os.Handler.handleCallback(Handler.java:938)
        at android.os.Handler.dispatchMessage(Handler.java:99)
        at android.os.Looper.loop(Looper.java:223)
        at android.os.HandlerThread.run(HandlerThread.java:67)
  • My tf-Lite model size: 5MB
  • The app size on the mobile is about 15 MB

To Solve This Error:

  • I included the following dependency in build.gradle (Module: app):
    // This dependency adds the necessary TF op support.
    implementation 'org.tensorflow:tensorflow-lite-select-tf-ops:0.0.0-nightly'

(screenshot of the Gradle dependency)

That solves the problem, and my object detection app works very well.

However, the app now takes 195 MB of storage on the physical phone! Does this (-tf-ops) dependency really add that much size? If so, how can I fix the previous exception without including this dependency in the Android app?

(screenshot of the app size)

created time in 2 months

issue opened argman/EAST

EAST detects some textures as text!

I am running the demo using the pre-trained model:

python eval.py --test_data_path=/tmp/images/ --gpu_list=0 \
    --checkpoint_path=/tmp/east_icdar2015_resnet_v1_50_rbox/ \
    --output_dir=/tmp/

However, EAST detects some textures as text. Is there a reason for that? Is there any configuration to tune or change?
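One knob worth trying (an assumption; check the detect() function in eval.py for the exact names, something like score_map_thresh / box_thresh): raise the score threshold so low-confidence texture hits are dropped. A generic filtering sketch, assuming EAST's usual (N, 9) box array with the score in the last column:

# Hedged sketch: drop low-confidence boxes. Assumes each row is
# 8 quadrilateral coordinates followed by a score.
import numpy as np

def filter_boxes(boxes, score_thresh=0.9):
    if boxes is None or len(boxes) == 0:
        return boxes
    boxes = np.asarray(boxes)
    return boxes[boxes[:, 8] > score_thresh]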

Samples:

(three screenshots of textures detected as text)

created time in 2 months

issue closed abdelrahman-gaber/tf2-object-detection-api-tutorial

Error while converting model to tensorflow lite

Thank you so much, Abdelrahman, for the well-organized tutorial; I really spent time searching for a good, up-to-date object detection tutorial.

I successfully trained and tested the model as you explained, but when I tried to convert it to tensorflow lite, I got the following error. Could you please help me?

tflite_convert \
    --saved_model_dir=models/ssd_mobilenet_v2_raccoon/exported_model/saved_model \
    --output_file=models/ssd_mobilenet_v2_raccoon/ssd_mobilenet_v2_raccoon.tflite

I got this error:

2020-08-05 10:19:30.217661: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:799] constant_folding: Graph size after: 1394 nodes (0), 2116 edges (0), time = 77.785ms.
Traceback (most recent call last):
  File "/opt/anaconda3/envs/v_python3.6/bin/tflite_convert", line 8, in <module>
    sys.exit(main())
  File "/opt/anaconda3/envs/v_python3.6/lib/python3.6/site-packages/tensorflow/lite/python/tflite_convert.py", line 638, in main
    app.run(main=run_main, argv=sys.argv[:1])
  File "/opt/anaconda3/envs/v_python3.6/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/opt/anaconda3/envs/v_python3.6/lib/python3.6/site-packages/absl/app.py", line 299, in run
    _run_main(main, args)
  File "/opt/anaconda3/envs/v_python3.6/lib/python3.6/site-packages/absl/app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "/opt/anaconda3/envs/v_python3.6/lib/python3.6/site-packages/tensorflow/lite/python/tflite_convert.py", line 621, in run_main
    _convert_tf2_model(tflite_flags)
  File "/opt/anaconda3/envs/v_python3.6/lib/python3.6/site-packages/tensorflow/lite/python/tflite_convert.py", line 237, in _convert_tf2_model
    tflite_model = converter.convert()
  File "/opt/anaconda3/envs/v_python3.6/lib/python3.6/site-packages/tensorflow/lite/python/lite.py", line 483, in convert
    _get_tensor_name(tensor), shape_list))
ValueError: None is only supported in the 1st dimension. Tensor 'input_tensor' has invalid shape '[1, None, None, 3]'.

closed time in 2 months

hahmad2008

issue comment abdelrahman-gaber/tf2-object-detection-api-tutorial

Error while converting model to tensorflow lite

For testing a TFLITE model on images, check this code: LINK
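The linked code is not reproduced here, but a minimal stand-in sketch for testing a TFLite detection model on a single image looks like this (the 300x300 size and the paths are assumptions; check get_input_details() for your model):

# Sketch: run a TFLite model on one image and print the output shapes.
import numpy as np
import tensorflow as tf
from PIL import Image

interpreter = tf.lite.Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

img = Image.open('test.jpg').convert('RGB').resize((300, 300))
batch = np.expand_dims(np.array(img, dtype=input_details[0]['dtype']), 0)

interpreter.set_tensor(input_details[0]['index'], batch)
interpreter.invoke()

for detail in output_details:
    print(detail['name'], interpreter.get_tensor(detail['index']).shape)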

hahmad2008

comment created time in 2 months

issue comment abdelrahman-gaber/tf2-object-detection-api-tutorial

Error while converting model to tensorflow lite

@abdelrahman-gaber Yes, sure!

hahmad2008

comment created time in 2 months

issue comment abdelrahman-gaber/tf2-object-detection-api-tutorial

Error while converting model to tensorflow lite

Thanks, @abdelrahman-gaber. No need; I tried the following and it works :D

(screenshot of the working code)

hahmad2008

comment created time in 2 months

issue comment abdelrahman-gaber/tf2-object-detection-api-tutorial

Error while converting model to tensorflow lite

Thank you @abdelrahman-gaber, I also tried this solution and it works as well.

But I am trying to test the tflite model; could you please check that?

hahmad2008

comment created time in 2 months

issue comment tensorflow/tensorflow

Input size of converted lite model doesn't match the original model input size

@MeghnaNatraj Thank you so much :) I really appreciate it. It works now 👯

hahmad2008

comment created time in 2 months

issue comment tensorflow/tensorflow

Convert saved model - issue generated shapes

@amahendrakar So the problem is the mismatch between the input and output shapes, right?

hahmad2008

comment created time in 3 months

issue opened tensorflow/tensorflow

Convert saved model - issue generated shapes

System information

  • OS: MAC
  • TensorFlow version 2.4.0-dev20200805

Command used to run the converter, or code if you're using the Python API:

!pip install tf-nightly

import tensorflow as tf
converter = tf.lite.TFLiteConverter.from_saved_model('ssd_mobilenet_v2_320x320_coco17_tpu-8/saved_model')
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.experimental_new_converter = True

converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()

open("m.tflite", "wb").write(tflite_model)

Original Model Input & Output shape

(screenshot of the original model's input/output shapes)

Converted Model Input & Output shape

(screenshot of the converted model's input/output shapes)

Also, please include a link to the saved model {{LINK MODEL}}

Please let me know if I have missed anything here. I suspect something is wrong with the model (the shapes do not match): when I replaced the model in the Android object detection example, it gave me errors.

created time in 3 months

issue comment tensorflow/tensorflow

tensorflow.lite.python.convert.ConverterError: <unknown>:0: error: loc("Func/StatefulPartitionedCall/input/_0"): requires all operands and results to have compatible element types

@Saduf2019 Hi, I tried the same code you shared, but I got this exception:


!pip install tf-nightly

import tensorflow as tf
saved_model_dir='ssd_mobilenet_v2_320x320_coco17_tpu-8/saved_model/'

model = tf.saved_model.load(saved_model_dir)
concrete_func = model.signatures[tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY]
concrete_func.inputs[0].set_shape([1, 300, 300, 3])
converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])


# These lines are necessary for the issue fix https://github.com/tensorflow/tensorflow/issues/41877
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.experimental_new_converter = True
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]

tflite_model = converter.convert()
open('tflite_model_coco17o.tflite','wb').write(tflite_model)

ConverterError: <unknown>:0: error: loc("Func/StatefulPartitionedCall/input/_0"): requires all operands and results to have compatible element types
<unknown>:0: note: loc("Func/StatefulPartitionedCall/input/_0"): see current operation: %1 = "tf.Identity"(%arg0) {device = ""} : (tensor<1x300x300x3x!tf.quint8>) -> tensor<1x300x300x3xui8>

Exception                                 Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/convert.py in toco_convert_protos(model_flags_str, toco_flags_str, input_data_str, debug_info_str, enable_mlir_converter)
    198                                                  debug_info_str,
--> 199                                                  enable_mlir_converter)
    200       return model_str

(6 frames omitted)
/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/wrap_toco.py in wrapped_toco_convert(model_flags_str, toco_flags_str, input_data_str, debug_info_str, enable_mlir_converter)
     37       debug_info_str,
---> 38       enable_mlir_converter)
     39 

Exception: <unknown>:0: error: loc("Func/StatefulPartitionedCall/input/_0"): requires all operands and results to have compatible element types
<unknown>:0: note: loc("Func/StatefulPartitionedCall/input/_0"): see current operation: %1 = "tf.Identity"(%arg0) {device = ""} : (tensor<1x300x300x3x!tf.quint8>) -> tensor<1x300x300x3xui8>


During handling of the above exception, another exception occurred:

ConverterError                            Traceback (most recent call last)
<ipython-input-9-2ee2db71f5c7> in <module>()
----> 1 tflite_model = converter.convert()
      2 open('tflite_model_coco17o.tflite','wb').write(tflite_model)

/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/lite.py in convert(self)
   1116         Invalid quantization parameters.
   1117     """
-> 1118     return super(TFLiteConverterV2, self).convert()
   1119 
   1120 

/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/lite.py in convert(self)
    940 
    941     return super(TFLiteFrozenGraphConverterV2,
--> 942                  self).convert(graph_def, input_tensors, output_tensors)
    943 
    944 

/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/lite.py in convert(self, graph_def, input_tensors, output_tensors)
    667         input_tensors=input_tensors,
    668         output_tensors=output_tensors,
--> 669         **converter_kwargs)
    670 
    671     calibrate_and_quantize, flags = quant_mode.quantizer_flags(

/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/convert.py in toco_convert_impl(input_data, input_tensors, output_tensors, enable_mlir_converter, *args, **kwargs)
    572       input_data.SerializeToString(),
    573       debug_info_str=debug_info_str,
--> 574       enable_mlir_converter=enable_mlir_converter)
    575   return data
    576 

/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/convert.py in toco_convert_protos(model_flags_str, toco_flags_str, input_data_str, debug_info_str, enable_mlir_converter)
    200       return model_str
    201     except Exception as e:
--> 202       raise ConverterError(str(e))
    203 
    204   if distutils.spawn.find_executable(_toco_from_proto_bin) is None:

ConverterError: <unknown>:0: error: loc("Func/StatefulPartitionedCall/input/_0"): requires all operands and results to have compatible element types
<unknown>:0: note: loc("Func/StatefulPartitionedCall/input/_0"): see current operation: %1 = "tf.Identity"(%arg0) {device = ""} : (tensor<1x300x300x3x!tf.quint8>) -> tensor<1x300x300x3xui8>
clgilbe

comment created time in 3 months

issue comment abdelrahman-gaber/tf2-object-detection-api-tutorial

Error while converting model to tensorflow lite

Thank you, @abdelrahman-gaber. Have you had a chance to convert the model to TensorFlow Lite?

hahmad2008

comment created time in 3 months

issue opened tensorflow/tensorflow

Input size of converted lite model doesn't match the original model input size

System information

  • Platform: Colab
  • TensorFlow version: 2.4.0-dev20200805

Command used to run the converter, or code if you're using the Python API (this link is the code I used to convert the saved model to TensorFlow Lite):

!pip install tf-nightly

import tensorflow as tf

model_dir = 'saved_model'
converter = tf.lite.TFLiteConverter.from_saved_model(model_dir, signature_keys=['serving_default'])

converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.experimental_new_converter = True
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]

tflite_model = converter.convert()

#open("saved_model/converted_model.tflite", "wb").write(tflite_model)
open('tflite_model.tflite', 'wb').write(tflite_model)

The output from the converter invocation

  • It successfully generates model.tflite.
  • But when I checked the input of both the original and the converted model, I found a mismatch:
    • the converted model's input is [1, 1, 1, 3] instead of [1, 300, 300, 3]

The original model is a SavedModel, the same as all models in the model zoo.

I trained with the Object Detection API and ran it successfully with the original .pb saved model (using this backbone model).

Any recommendations? Thanks.
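One workaround sketch (my assumption, not a confirmed fix): when the SavedModel's input is dynamic, the converter records a placeholder [1, 1, 1, 3] shape, and the interpreter can often be resized to the real shape before tensors are allocated:

# Sketch: resize the recorded placeholder input shape at load time.
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='tflite_model.tflite')
input_index = interpreter.get_input_details()[0]['index']
interpreter.resize_tensor_input(input_index, [1, 300, 300, 3])
interpreter.allocate_tensors()
print(interpreter.get_input_details()[0]['shape'])   # now [1, 300, 300, 3]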

created time in 3 months

issue comment abdelrahman-gaber/tf2-object-detection-api-tutorial

Error while converting model to tensorflow lite

What type of model did you generate in exported_model/saved_model: is it a saved model or a frozen graph?

hahmad2008

comment created time in 3 months

issue comment tensorflow/tensorflow

ValueError: None is only supported in the 1st dimension. Tensor 'image_tensor' has invalid shape '[None, None, None, 3]'.

@Ringhu Sorry for the delay in my response. I tried converting the mobilenet model and was successful. Can you please check the gist here?

Please verify once and close the issue if this is resolved for you. Thanks.

Hi @jvishnuvardhan, I tried to convert the checkpoint (saved_model) using this solution. It converts successfully, but I think the result is incorrect: the model's input comes out as [1, 1, 1, 3] instead of [1, 300, 300, 3].

The saved model directory contains:

(screenshot of the saved_model directory contents)

Ringhu

comment created time in 3 months

issue comment abdelrahman-gaber/tf2-object-detection-api-tutorial

Error while converting model to tensorflow lite

@abdelrahman-gaber Thank you! I tried to convert the checkpoint (saved_model) using this solution. It converts successfully, but I think the result is incorrect: the model's input comes out as [1, 1, 1, 3] instead of [1, 300, 300, 3].

hahmad2008

comment created time in 3 months

issue comment tensorflow/tensorflow

Toco/TFLite_Convert for TFLite Problem

So, as a follow-up, I was able to deal with this issue. For anyone wondering, here's what I did:

  1. I checked out r1.10 for both the models and tensorflow repos.
  2. Used bazel to clean the tensorflow repo, so that whenever I use bazel commands we use the r1.10 binaries. This by far is most likely what solved my problem, since I was trying different versions of tensorflow while dealing with this issue.
  3. I trained using a command similar to this:
python ~/tensorflow/models/research/object_detection/model_main.py \
       --pipeline_config_path=${PIPELINE_CONFIG_PATH} \
       --model_dir=${MODEL_DIR} \
       --num_train_steps=${NUM_TRAIN_STEPS} \
       --num_eval_steps=${NUM_EVAL_STEPS} \
       --alsologtostderr
  4. For tflite specifically, I needed to use export_tflite_ssd_graph.py, not export_inference_graph. So the next command was something like:
python ~/tensorflow/models/research/object_detection/export_tflite_ssd_graph.py \
--pipeline_config_path=$CONFIG_FILE \
--trained_checkpoint_prefix=$CHECKPOINT_PATH \
--output_directory=$EXPORT_OUTPUT_DIR \
--add_postprocessing_op=true
  5. Then we have the toco command. Similar to the blog, but I needed to add a few parameters:
./bazel-bin/tensorflow/contrib/lite/toco/toco \
  --input_file=$INPUT_PB_GRAPH \
  --output_file=$OUTPUT_TFLITE_FILE \
  --input_format=TENSORFLOW_GRAPHDEF --output_format=TFLITE \
  --inference_type=QUANTIZED_UINT8 \
  --input_shapes="1,300,300,3" \
  --input_arrays=normalized_input_image_tensor \
--output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \
  --std_values=128.0 --mean_values=128.0 \
  --allow_custom_ops --default_ranges_min=0 --default_ranges_max=6
  6. Then, when loading into the tflite example (tensorflow/tensorflow/contrib/lite/examples/android), I needed some changes to compile and to get past runtime errors and other behaviour:
git diff tensorflow/contrib/lite/examples/android/app/src/main/java/org/tensorflow/demo/TFLiteObjectDetectionAPIModel.java
diff --git a/tensorflow/contrib/lite/examples/android/app/src/main/java/org/tensorflow/demo/TFLiteObjectDetectionAPIModel.java b/tensorflow/contrib/lite/examples/android/app/src/main/java/org/tensorflow/demo/TFLiteObjectDetectionAPIModel.java
index 9eb21de..2cfa7e0 100644
--- a/tensorflow/contrib/lite/examples/android/app/src/main/java/org/tensorflow/demo/TFLiteObjectDetectionAPIModel.java
+++ b/tensorflow/contrib/lite/examples/android/app/src/main/java/org/tensorflow/demo/TFLiteObjectDetectionAPIModel.java
@@ -208,17 +208,24 @@ public class TFLiteObjectDetectionAPIModel implements Classifier {
       // in label file and class labels start from 1 to number_of_classes+1,
       // while outputClasses correspond to class index from 0 to number_of_classes
       int labelOffset = 1;
-      recognitions.add(
-          new Recognition(
-              "" + i,
-              labels.get((int) outputClasses[0][i] + labelOffset),
-              outputScores[0][i],
-              detection));
+        final int classLabel = (int) outputClasses[0][i] + labelOffset;
+        if (inRange(classLabel, labels.size(), 0) && inRange(outputScores[0][i], 1, 0)) {
+            recognitions.add(
+                    new Recognition(
+                            "" + i,
+                            labels.get(classLabel),
+                            outputScores[0][i],
+                            detection));
+        }
     }
     Trace.endSection(); // "recognizeImage"
     return recognitions;
   }
 
+  private boolean inRange(float number, float max, float min) {
+    return number < max && number >= min;
+  }
+

And then I was able to run the tflite example on my phone! Thanks to @achowdhery and @jdduke for responding and the help!

Hi @BryanRansil, I followed step 6 and changed TFLiteObjectDetectionAPIModel.java, but I still got the following error:

2020-08-05 16:09:30.781 28332-28442/org.tensorflow.lite.examples.detection E/AndroidRuntime: FATAL EXCEPTION: inference
    Process: org.tensorflow.lite.examples.detection, PID: 28332
    java.lang.IllegalArgumentException: Cannot copy to a TensorFlowLite tensor (serving_default_input_tensor:0) with 3 bytes from a Java Buffer with 1080000 bytes.
        at org.tensorflow.lite.Tensor.throwIfSrcShapeIsIncompatible(Tensor.java:444)
        at org.tensorflow.lite.Tensor.setTo(Tensor.java:189)
        at org.tensorflow.lite.NativeInterpreterWrapper.run(NativeInterpreterWrapper.java:154)
        at org.tensorflow.lite.Interpreter.runForMultipleInputsOutputs(Interpreter.java:347)
        at org.tensorflow.lite.examples.detection.tflite.TFLiteObjectDetectionAPIModel.recognizeImage(TFLiteObjectDetectionAPIModel.java:196)
        at org.tensorflow.lite.examples.detection.DetectorActivity$2.run(DetectorActivity.java:181)
        at android.os.Handler.handleCallback(Handler.java:883)
        at android.os.Handler.dispatchMessage(Handler.java:100)
        at android.os.Looper.loop(Looper.java:237)
        at android.os.HandlerThread.run(HandlerThread.java:67)

Any help is appreciated.

BryanRansil

comment created time in 3 months

issue opened abdelrahman-gaber/tf2-object-detection-api-tutorial

Error while converting model to tensorflow lite

Thank you so much, Abdelrahman, for the well-organized tutorial; I really spent time searching for a good, up-to-date object detection tutorial.

I successfully trained and tested the model as you explained, but when I tried to convert it to tensorflow lite, I got the following error. Could you please help me?

tflite_convert \
    --saved_model_dir=models/ssd_mobilenet_v2_raccoon/exported_model/saved_model \
    --output_file=models/ssd_mobilenet_v2_raccoon/ssd_mobilenet_v2_raccoon.tflite

I got this error:

2020-08-05 10:19:30.217661: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:799] constant_folding: Graph size after: 1394 nodes (0), 2116 edges (0), time = 77.785ms.
Traceback (most recent call last):
  File "/opt/anaconda3/envs/v_python3.6/bin/tflite_convert", line 8, in <module>
    sys.exit(main())
  File "/opt/anaconda3/envs/v_python3.6/lib/python3.6/site-packages/tensorflow/lite/python/tflite_convert.py", line 638, in main
    app.run(main=run_main, argv=sys.argv[:1])
  File "/opt/anaconda3/envs/v_python3.6/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/opt/anaconda3/envs/v_python3.6/lib/python3.6/site-packages/absl/app.py", line 299, in run
    _run_main(main, args)
  File "/opt/anaconda3/envs/v_python3.6/lib/python3.6/site-packages/absl/app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "/opt/anaconda3/envs/v_python3.6/lib/python3.6/site-packages/tensorflow/lite/python/tflite_convert.py", line 621, in run_main
    _convert_tf2_model(tflite_flags)
  File "/opt/anaconda3/envs/v_python3.6/lib/python3.6/site-packages/tensorflow/lite/python/tflite_convert.py", line 237, in _convert_tf2_model
    tflite_model = converter.convert()
  File "/opt/anaconda3/envs/v_python3.6/lib/python3.6/site-packages/tensorflow/lite/python/lite.py", line 483, in convert
    _get_tensor_name(tensor), shape_list))
ValueError: None is only supported in the 1st dimension. Tensor 'input_tensor' has invalid shape '[1, None, None, 3]'.

created time in 3 months

started abdelrahman-gaber/tf2-object-detection-api-tutorial

started time in 3 months
