
batulrangwala/BPI-R64-bsp-4.19

Supports Banana Pi BPI-R64 (MT7622) (Kernel 4.19)

issue comment: tensorflow/tensorflow

Failed to load delegate from libedgetpu.so.1 on PCIe EdgeTPU [SOLVED]

Coral Tech Support indicated, based on the dmesg logs, that memory was not assigned for BAR 0:

[ 1.604440] pci 0000:01:00.0: [1ac1:089a] type 00 class 0x0000ff 
[ 1.610743] pci 0000:01:00.0: reg 0x10: [mem 0x00000000-0x00003fff 64bit pref] 
[ 1.618078] pci 0000:01:00.0: reg 0x18: [mem 0x00000000-0x000fffff 64bit pref] 
[ 1.626285] pci 0000:01:00.0: 2.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s x1 link at 0000:00:00.0 (capable of 4.000 Gb/s with 5 GT/s x1 link) 
[ 1.641429] pci_bus 0000:01: fixups for bus 
[ 1.645617] pci_bus 0000:01: bus scan returning with max=01 
[ 1.651195] pci_bus 0000:01: busn_res: [bus 01-ff] end is updated to 01 
[ 1.657816] pci_bus 0000:00: bus scan returning with max=01 
[ 1.663407] pci 0000:00:00.0: BAR 0: no space for [mem size 0x200000000 64bit pref] 
[ 1.671071] pci 0000:00:00.0: BAR 0: failed to assign [mem size 0x200000000 64bit pref] 
[ 1.679079] pci 0000:00:00.0: BAR 8: assigned [mem 0x20000000-0x201fffff] 
[ 1.685872] pci 0000:00:00.0: PCI bridge to [bus 01] 
[ 1.690846] pci 0000:00:00.0: bridge window [mem 0x20000000-0x201fffff] 

BAR 0 of the PCIe bridge did not get a memory range assigned; this is an issue with the SoC's PCIe driver. The "BAR 0: failed to assign" error prevents our device from accessing the host's memory, which is exactly why you hit this failure while loading the delegate. Unfortunately, this is somewhat outside of our hands. On another note, swiotlb=512 in your kernel command line is very low; I suggest increasing it to swiotlb=65536 to head off possible unrelated issues.
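The decisive lines can be picked out of a dmesg capture programmatically. A minimal sketch in Python (the pattern strings are assumptions based on the log format shown above, not an official dmesg grammar):

```python
import re

# Matches kernel messages such as:
#   pci 0000:00:00.0: BAR 0: failed to assign [mem size 0x200000000 64bit pref]
#   pci 0000:01:00.0: BAR 0: assigned [mem 0x20100000-0x20103fff 64bit pref]
BAR_RE = re.compile(
    r"pci (?P<dev>[0-9a-f:.]+): BAR (?P<bar>\d+): "
    r"(?P<status>assigned|failed to assign|no space for)"
)

def bar_failures(dmesg_lines):
    """Return (device, BAR) pairs whose assignment did not succeed."""
    failures = []
    for line in dmesg_lines:
        m = BAR_RE.search(line)
        if m and m.group("status") != "assigned":
            failures.append((m.group("dev"), int(m.group("bar"))))
    return failures

sample = [
    "[    1.688684] pci 0000:00:00.0: BAR 0: no space for [mem size 0x200000000 64bit pref]",
    "[    1.696347] pci 0000:00:00.0: BAR 0: failed to assign [mem size 0x200000000 64bit pref]",
    "[    1.718983] pci 0000:01:00.0: BAR 0: assigned [mem 0x20100000-0x20103fff 64bit pref]",
]
print(bar_failures(sample))  # [('0000:00:00.0', 0), ('0000:00:00.0', 0)]
```

In practice you would feed it `dmesg | grep pci` output; an empty result for the Edge TPU's bus means the BARs were claimed successfully.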

I have now set swiotlb=65536 and gasket.dma_bit_mask=32 in the kernel command line, and also managed to resolve the BAR 0 memory assignment issue. The logs follow.
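As a sizing note: the swiotlb= parameter counts bounce-buffer slabs of 2 KiB each (slab size is 1 << IO_TLB_SHIFT with IO_TLB_SHIFT = 11 in mainline kernels; treat that constant as an assumption for your kernel version), so the two values discussed here differ enormously:

```python
# swiotlb=N reserves N slabs of 2 KiB each for the DMA bounce buffer
# (slab size = 1 << IO_TLB_SHIFT, assumed IO_TLB_SHIFT = 11).
SLAB_BYTES = 1 << 11  # 2 KiB

def swiotlb_bytes(slabs):
    return slabs * SLAB_BYTES

print(swiotlb_bytes(512) // (1 << 20))    # 1   -> swiotlb=512 gives only 1 MiB
print(swiotlb_bytes(65536) // (1 << 20))  # 128 -> swiotlb=65536 gives 128 MiB
```

With gasket.dma_bit_mask=32 forcing 32-bit DMA addressing, a roomy bounce buffer matters, which is presumably why support suggested the larger value.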

pi@bpi-iot-ros-ai:~$ lsmod
Module                  Size  Used by
mt7622                 45056  -2
mt76                   45056  -2
mac80211              364544  -2
libarc4                16384  -2
apex                   24576  -2
gasket                 98304  -2
cfg80211              258048  -2
btmtkuart              24576  -2
pi@bpi-iot-ros-ai:~$ dmesg | grep apex
[    8.381572] apex 0000:01:00.0: assign IRQ: got 140
[    8.381703]  apex_pci_probe+0x38/0x468 [apex]
[    8.381762]  apex_init+0x44/0x1000 [apex]
[    8.381906]  apex_pci_probe+0x38/0x468 [apex]
[    8.381960]  apex_init+0x44/0x1000 [apex]
[    8.382002] apex 0000:01:00.0: Assigned....BAR 0 [mem 0x20100000-0x20103fff 64bit pref]........
[    8.382008] apex 0000:01:00.0: Assigned and claimed....BAR 0 [mem 0x20100000-0x20103fff 64bit pref]........
[    8.382014] apex 0000:01:00.0: Assigned....BAR 2 [mem 0x20000000-0x200fffff 64bit pref]........
[    8.382020] apex 0000:01:00.0: Assigned and claimed....BAR 2 [mem 0x20000000-0x200fffff 64bit pref]........
[    8.382025] apex 0000:01:00.0: enabling device (0000 -> 0002)
[    8.382078] apex 0000:01:00.0: enabling bus mastering
[   13.538841] apex 0000:01:00.0: Apex performance not throttled due to temperature
pi@bpi-iot-ros-ai:~$ dmesg | grep pci
[    1.494119] mtk-pcie 1a143000.pcie: host bridge /pcie@1a143000 ranges:
[    1.505642] mtk-pcie 1a143000.pcie: Parsing ranges property...
[    1.515992] mtk-pcie 1a143000.pcie:   MEM 0x20000000..0x2fffffff -> 0x20000000
[    1.556439] mtk-pcie 1a143000.pcie: PCI host bridge to bus 0000:00
[    1.568448] pci_bus 0000:00: root bus resource [bus 00-ff]
[    1.573946] pci_bus 0000:00: root bus resource [mem 0x20000000-0x2fffffff]
[    1.580841] pci_bus 0000:00: scanning bus
[    1.584897] pci 0000:00:00.0: [14c3:3258] type 01 class 0x060400
[    1.590957] pci 0000:00:00.0: reg 0x10: [mem 0x00000000-0x1ffffffff 64bit pref]
[    1.599829] pci_bus 0000:00: fixups for bus
[    1.604022] pci 0000:00:00.0: scanning [bus 00-00] behind bridge, pass 0
[    1.610729] pci 0000:00:00.0: bridge configuration invalid ([bus 00-00]), reconfiguring
[    1.618747] pci 0000:00:00.0: scanning [bus 00-00] behind bridge, pass 1
[    1.625563] pci_bus 0000:01: scanning bus
[    1.629683] pci 0000:01:00.0: [1ac1:089a] type 00 class 0x0000ff
[    1.635984] pci 0000:01:00.0: reg 0x10: [mem 0x00000000-0x00003fff 64bit pref]
[    1.643318] pci 0000:01:00.0: reg 0x18: [mem 0x00000000-0x000fffff 64bit pref]
[    1.651524] pci 0000:01:00.0: 2.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s x1 link at 0000:00:00.0 (capable of 4.000 Gb/s with 5 GT/s x1 link)
[    1.666709] pci_bus 0000:01: fixups for bus
[    1.670896] pci_bus 0000:01: bus scan returning with max=01
[    1.676472] pci_bus 0000:01: busn_res: [bus 01-ff] end is updated to 01
[    1.683093] pci_bus 0000:00: bus scan returning with max=01
[    1.688684] pci 0000:00:00.0: BAR 0: no space for [mem size 0x200000000 64bit pref]
[    1.696347] pci 0000:00:00.0: BAR 0: failed to assign [mem size 0x200000000 64bit pref]
[    1.704355] pci 0000:00:00.0: BAR 8: assigned [mem 0x20000000-0x201fffff]
[    1.711151] pci 0000:01:00.0: BAR 2: assigned [mem 0x20000000-0x200fffff 64bit pref]
[    1.718983] pci 0000:01:00.0: BAR 0: assigned [mem 0x20100000-0x20103fff 64bit pref]
[    1.726814] pci 0000:00:00.0: PCI bridge to [bus 01]
[    1.731787] pci 0000:00:00.0:   bridge window [mem 0x20000000-0x201fffff]
[    1.738888] mtk-pcie 1a145000.pcie: host bridge /pcie@1a145000 ranges:
[    1.745425] mtk-pcie 1a145000.pcie: Parsing ranges property...
[    1.751269] mtk-pcie 1a145000.pcie:   MEM 0x28000000..0x2fffffff -> 0x28000000
[    1.758508] mtk-pcie 1a145000.pcie: resource collision: [mem 0x28000000-0x2fffffff] conflicts with pcie@1a143000 [mem 0x20000000-0x2fffffff]
[    1.771141] mtk-pcie: probe of 1a145000.pcie failed with error -16
[    8.381659]  pci_enable_resources+0x68/0x18c
[    8.381665]  pcibios_enable_device+0xc/0x14
[    8.381670]  do_pci_enable_device+0x50/0xd8
[    8.381675]  pci_enable_device_flags+0x100/0x15c
[    8.381680]  pci_enable_device+0x10/0x18
[    8.381684]  pci_enable_bridge+0x50/0x78
[    8.381689]  pci_enable_device_flags+0x98/0x15c
[    8.381694]  pci_enable_device+0x10/0x18
[    8.381703]  apex_pci_probe+0x38/0x468 [apex]
[    8.381709]  pci_device_probe+0xa0/0x144
[    8.381755]  __pci_register_driver+0x40/0x48
[    8.381811] pci 0000:00:00.0: Assigned....BAR 8 [mem 0x20000000-0x201fffff]........
[    8.381817] pci 0000:00:00.0: Assigned and claimed....BAR 8 [mem 0x20000000-0x201fffff]........
[    8.381823] pci 0000:00:00.0: enabling device (0000 -> 0002)
[    8.381837] pci 0000:00:00.0: enabling bus mastering
[    8.381880]  pci_enable_resources+0x68/0x18c
[    8.381884]  pcibios_enable_device+0xc/0x14
[    8.381889]  do_pci_enable_device+0x50/0xd8
[    8.381893]  pci_enable_device_flags+0x100/0x15c
[    8.381898]  pci_enable_device+0x10/0x18
[    8.381906]  apex_pci_probe+0x38/0x468 [apex]
[    8.381911]  pci_device_probe+0xa0/0x144
[    8.381953]  __pci_register_driver+0x40/0x48
pi@bpi-iot-ros-ai:~$ 
pi@bpi-iot-ros-ai:~$ dmesg | grep gasket
[    0.000000] Kernel command line: board=bpi-r64 console=ttyS0,115200n1 earlyprintk root=/dev/mmcblk1p2 rootfstype=ext4 rootwait service=linux debug=7 initcall_debug=0 androidboot.hardware=mt7622 swiotlb=65536 gasket.dma_bit_mask=32
[    8.360957] gasket: loading out-of-tree module taints kernel.
pi@bpi-iot-ros-ai:~$ cd coral/tflite/python/examples/classification/
pi@bpi-iot-ros-ai:~/coral/tflite/python/examples/classification$ python3 classify_image.py --model models/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite --labels models/inat_bird_labels.txt --input images/parrot.jpg
E :237] HIB Error. hib_error_status = 0000000000000001, hib_first_error_status = 0000000000000001
E :237] HIB Error. hib_error_status = 0000000000000001, hib_first_error_status = 0000000000000001
----INFERENCE TIME----
Note: The first inference on Edge TPU is slow because it includes loading the model into Edge TPU memory.

In spite of this, I am still getting the HIB Error shown above.

Any advice on how to resolve this?


issue comment: tensorflow/tensorflow

2020-01-09 12:25:17.491189: F tensorflow/lite/toco/graph_transformations/quantize.cc:611] Check failed: is_rnn_state_array

Coral provides a Docker container with all the necessary environment already set up: https://coral.ai/docs/edgetpu/retrain-detection/#set-up-the-docker-container

However, the conversion succeeds only when using the standard parameters with tflite_convert.

When I try to convert my custom model by changing input_arrays and input_shapes, I get the same error:


root@1ae3dc55a8ca:~/base_model/exported_model_12k_quantized# tflite_convert   --output_file=tflite_graph_576_720.tflite   --graph_def_file=tflite_graph.pb   --inference_type=QUANTIZED_UINT8   --input_arrays=image_tensor   --output_arrays=TFLite_Detection_PostProcess,TFLite_Detection_PostProcess:1,TFLite_Detection_PostProcess:2,TFLite_Detection_PostProcess:3 --default_ranges_min=0 --default_ranges_max=6   --mean_values=128   --std_dev_values=128   --input_shapes=1,576,720,3   --change_concat_input_ranges=false   --allow_nudging_weights_to_use_fast_gemm_kernel=true   --allow_custom_ops
2020-01-22 05:12:46.931163: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Traceback (most recent call last):
  File "/usr/local/bin/tflite_convert", line 11, in <module>
    sys.exit(main())
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/lite/python/tflite_convert.py", line 412, in main
    app.run(main=run_main, argv=sys.argv[:1])
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 125, in run
    _sys.exit(main(argv))
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/lite/python/tflite_convert.py", line 408, in run_main
    _convert_model(tflite_flags)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/lite/python/tflite_convert.py", line 162, in _convert_model
    output_data = converter.convert()
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/lite/python/lite.py", line 464, in convert
    **converter_kwargs)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/lite/python/convert.py", line 311, in toco_convert_graph_def
    input_data.SerializeToString())
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/lite/python/convert.py", line 135, in toco_convert_protos
    (stdout, stderr))
RuntimeError: TOCO failed see console for info.
2020-01-22 05:12:48.742828: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1080] Converting unsupported operation: TFLite_Detection_PostProcess
2020-01-22 05:12:48.845288: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 1537 operators, 2267 arrays (0 quantized)
2020-01-22 05:12:48.915931: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 1537 operators, 2267 arrays (0 quantized)
2020-01-22 05:12:49.491249: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 1: 181 operators, 344 arrays (0 quantized)
2020-01-22 05:12:49.494982: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before pre-quantization graph transformations: 181 operators, 344 arrays (0 quantized)
2020-01-22 05:12:49.496398: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] After pre-quantization graph transformations pass 1: 99 operators, 262 arrays (0 quantized)
2020-01-22 05:12:49.497525: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before default min-max range propagation graph transformations: 99 operators, 262 arrays (0 quantized)
2020-01-22 05:12:49.498378: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] After default min-max range propagation graph transformations pass 1: 99 operators, 262 arrays (0 quantized)
2020-01-22 05:12:49.499539: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before quantization graph transformations: 99 operators, 262 arrays (0 quantized)
2020-01-22 05:12:49.499571: F tensorflow/contrib/lite/toco/graph_transformations/quantize.cc:589] Check failed: is_rnn_state_array
Aborted (core dumped)
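For what it's worth, the --mean_values/--std_dev_values pair in the command above defines how quantized uint8 input values map back to real values, conventionally real = (q - mean) / std; with 128/128 the uint8 range [0, 255] covers roughly [-1, 1], matching MobileNet-style input normalization. A quick check of that mapping (pure arithmetic, independent of TOCO):

```python
def dequantize(q, mean=128.0, std=128.0):
    """Real value represented by quantized uint8 value q,
    per the converter's --mean_values/--std_dev_values convention."""
    return (q - mean) / std

print(dequantize(0))    # -1.0
print(dequantize(128))  #  0.0
print(dequantize(255))  #  0.9921875
```

The is_rnn_state_array crash itself is unrelated to these values; it is TOCO's quantization pass rejecting the graph, so changing mean/std alone will not fix it.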


issue closed: tensorflow/tensorflow

tflite_convert failed

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04
  • TensorFlow installed from (source or binary): binary
  • TensorFlow version (or github SHA if from source): 1.14

Provide the text output from tflite_convert

(tf_gpu_clone) ridlr@ridlr107:~/TensorFlow/exported_model_12k_quantized$ tflite_convert --output_file tflite_graph.tflite --graph_def_file tflite_graph.pb --input_arrays image_tensor --output_arrays TFLite_Detection_PostProcess --input_shapes 1,576,720,3
2020-01-09 12:10:44.239300: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-01-09 12:10:44.262441: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2808000000 Hz
2020-01-09 12:10:44.262923: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x558c8fa667e0 executing computations on platform Host. Devices:
2020-01-09 12:10:44.262939: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): <undefined>, <undefined>
Traceback (most recent call last):
  File "/home/ridlr/anaconda3/bin/tflite_convert", line 10, in <module>
    sys.exit(main())
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/lite/python/tflite_convert.py", line 503, in main
    app.run(main=run_main, argv=sys.argv[:1])
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/absl/app.py", line 299, in run
    _run_main(main, args)
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/absl/app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/lite/python/tflite_convert.py", line 499, in run_main
    _convert_tf1_model(tflite_flags)
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/lite/python/tflite_convert.py", line 193, in _convert_tf1_model
    output_data = converter.convert()
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/lite/python/lite.py", line 904, in convert
    **converter_kwargs)
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/lite/python/convert.py", line 373, in toco_convert_graph_def
    input_data.SerializeToString())
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/lite/python/convert.py", line 172, in toco_convert_protos
    "TOCO failed. See console for info.\n%s\n%s\n" % (stdout, stderr))
tensorflow.lite.python.convert.ConverterError: TOCO failed. See console for info.
2020-01-09 12:10:45.667362: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: TFLite_Detection_PostProcess
2020-01-09 12:10:45.763812: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 1537 operators, 2264 arrays (0 quantized)
2020-01-09 12:10:45.824420: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 1537 operators, 2264 arrays (0 quantized)
2020-01-09 12:10:46.292215: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 1: 181 operators, 341 arrays (0 quantized)
2020-01-09 12:10:46.295908: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Group bidirectional sequence lstm/rnn: 181 operators, 341 arrays (0 quantized)
2020-01-09 12:10:46.298914: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before dequantization graph transformations: 181 operators, 341 arrays (0 quantized)
2020-01-09 12:10:46.304648: I tensorflow/lite/toco/allocate_transient_arrays.cc:345] Total transient array allocated size: 20160000 bytes, theoretical optimal value: 17280000 bytes.
2020-01-09 12:10:46.305189: I tensorflow/lite/toco/toco_tooling.cc:433] Estimated count of arithmetic ops: 1.29335 billion (note that a multiply-add is counted as 2 ops).
2020-01-09 12:10:46.305598: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/Conv/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305607: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305629: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305633: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_1/expand/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305636: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_1/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305641: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_1/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305645: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_2/expand/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305650: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_2/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305654: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_2/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305658: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_2/post_activation_bypass_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305662: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_3/expand/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305665: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_3/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305669: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_3/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305674: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_4/expand/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305678: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_4/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305681: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_4/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305684: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_4/post_activation_bypass_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305688: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_5/expand/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305692: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_5/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305696: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_5/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305700: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_5/post_activation_bypass_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305703: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_6/expand/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305706: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_6/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305709: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_6/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305713: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_7/expand/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305717: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_7/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305721: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_7/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305725: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_7/post_activation_bypass_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305729: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_8/expand/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305733: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_8/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305737: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_8/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305741: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_8/post_activation_bypass_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305745: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_9/expand/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305749: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_9/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305753: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_9/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305758: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_9/post_activation_bypass_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305762: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_10/expand/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305766: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_10/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305770: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_10/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305774: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_11/expand/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305778: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_11/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305782: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_11/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305786: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_11/post_activation_bypass_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305790: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_12/expand/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305793: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_12/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305796: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_12/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305800: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_12/post_activation_bypass_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305803: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_13/expand/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305807: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_13/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305811: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_13/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305815: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_14/expand/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305819: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_14/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305823: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_14/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305827: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_14/post_activation_bypass_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305831: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_15/expand/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305835: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_15/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305839: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_15/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305843: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_15/post_activation_bypass_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305847: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_16/expand/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305851: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_16/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305855: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_16/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305859: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/Conv_1/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305863: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/layer_19_1_Conv2d_2_1x1_256/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305867: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/layer_19_2_Conv2d_2_3x3_s2_512/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305871: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/layer_19_1_Conv2d_3_1x1_128/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305875: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/layer_19_2_Conv2d_3_3x3_s2_256/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305879: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/layer_19_1_Conv2d_4_1x1_128/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305883: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/layer_19_2_Conv2d_4_3x3_s2_256/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305887: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/layer_19_1_Conv2d_5_1x1_64/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305891: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/layer_19_2_Conv2d_5_3x3_s2_128/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305896: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output BoxPredictor_0/BoxEncodingPredictor/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305900: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output BoxPredictor_0/ClassPredictor/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305904: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output BoxPredictor_1/BoxEncodingPredictor/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305908: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output BoxPredictor_1/ClassPredictor/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305912: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output BoxPredictor_2/BoxEncodingPredictor/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305916: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output BoxPredictor_2/ClassPredictor/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305920: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output BoxPredictor_3/BoxEncodingPredictor/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305924: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output BoxPredictor_3/ClassPredictor/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305928: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output BoxPredictor_4/BoxEncodingPredictor/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305932: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output BoxPredictor_4/ClassPredictor/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305936: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output BoxPredictor_5/BoxEncodingPredictor/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305940: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output BoxPredictor_5/ClassPredictor/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305998: E tensorflow/lite/toco/toco_tooling.cc:456] We are continually in the process of adding support to TensorFlow Lite for more ops. It would be helpful if you could inform us of how this conversion went by opening a github issue at https://github.com/tensorflow/tensorflow/issues/new?template=40-tflite-op-request.md
 and pasting the following:

Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If those are native TensorFlow operators, you might be able to use the extended runtime by passing --enable_select_tf_ops, or by setting target_ops=TFLITE_BUILTINS,SELECT_TF_OPS when calling tf.lite.TFLiteConverter(). Otherwise, if you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: ADD, CONCATENATION, CONV_2D, DEPTHWISE_CONV_2D, FAKE_QUANT, LOGISTIC, RESHAPE. Here is a list of operators for which you will need custom implementations: TFLite_Detection_PostProcess.
Traceback (most recent call last):
  File "/home/ridlr/anaconda3/bin/toco_from_protos", line 10, in <module>
    sys.exit(main())
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/lite/toco/python/toco_from_protos.py", line 59, in main
    app.run(main=execute, argv=[sys.argv[0]] + unparsed)
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/absl/app.py", line 299, in run
    _run_main(main, args)
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/absl/app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/lite/toco/python/toco_from_protos.py", line 33, in execute
    output_str = tensorflow_wrap_toco.TocoConvert(model_str, toco_str, input_str)
Exception: We are continually in the process of adding support to TensorFlow Lite for more ops. It would be helpful if you could inform us of how this conversion went by opening a github issue at https://github.com/tensorflow/tensorflow/issues/new?template=40-tflite-op-request.md
 and pasting the following:

Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If those are native TensorFlow operators, you might be able to use the extended runtime by passing --enable_select_tf_ops, or by setting target_ops=TFLITE_BUILTINS,SELECT_TF_OPS when calling tf.lite.TFLiteConverter(). Otherwise, if you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: ADD, CONCATENATION, CONV_2D, DEPTHWISE_CONV_2D, FAKE_QUANT, LOGISTIC, RESHAPE. Here is a list of operators for which you will need custom implementations: TFLite_Detection_PostProcess.

Also, please include a link to a GraphDef or the model if possible.
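One way to act on the converter's suggestion is to re-run tflite_convert with custom ops allowed, so that TFLite_Detection_PostProcess is emitted as a custom op instead of failing the conversion. A sketch only; the paths and the input/output array names are assumptions based on the standard SSD export_tflite_ssd_graph flow:

```shell
tflite_convert \
  --output_file tflite_graph.tflite \
  --graph_def_file tflite_graph.pb \
  --input_arrays normalized_input_image_tensor \
  --output_arrays TFLite_Detection_PostProcess \
  --input_shapes 1,300,300,3 \
  --allow_custom_ops
```

The resulting model then needs a runtime that provides the custom op (the TFLite interpreter ships a kernel for TFLite_Detection_PostProcess when built with the standard op resolver).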

closed time in 2 months

batulrangwala

issue comment: tensorflow/tensorflow


tflite_convert failed

Using a Docker container resolved this issue. The container, provided by Coral, has all the necessary environment already set up. Docker link: https://coral.ai/docs/edgetpu/retrain-detection/#set-up-the-docker-container

batulrangwala

comment created time in 2 months

issue comment: tensorflow/tensorflow

Determine input_arrays and output_arrays values for tflite_convert

Using a Docker container resolved this issue. The container, provided by Coral, has all the necessary environment already set up. Docker link: https://coral.ai/docs/edgetpu/retrain-detection/#set-up-the-docker-container

batulrangwala

comment created time in 2 months

issue closed: tensorflow/tensorflow

Determine input_arrays and output_arrays values for tflite_convert

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04
  • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:
  • TensorFlow installed from (source or binary): binary
  • TensorFlow version: 1.14
  • Python version: 3.7.4
  • Installed using virtualenv? pip? conda?: conda
  • Bazel version (if compiling from source): NA
  • GCC/Compiler version (if compiling from source): 7.4
  • CUDA/cuDNN version: 10.2
  • GPU model and memory: GeForce GTX 960M/PCIe/SSE2, 16GB

Describe the problem
tflite_convert: I need to know the values for --input_arrays and --output_arrays.

Provide the exact sequence of commands / steps that you executed before running into the problem
I have successfully created a tflite_graph.pb from export_tflite_ssd_graph.py, the quantized checkpoint, and the config files.

My task is to generate a .tflite file from the generated graph_def_file using the tflite_convert command, and then use this to generate an edgetpu.tflite file to run on a Google Coral. Following is the log of the command:

(tf_gpu_clone) ridlr@ridlr107:~/TensorFlow/exported_model_12k_quantized$ tflite_convert --output_file tflite_graph.tflite --graph_def_file tflite_graph.pb --input_arrays image_tensor --output_arrays detection_boxes --input_shapes 1,576,720,3
2020-01-09 11:05:56.913487: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-01-09 11:05:56.934582: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2808000000 Hz
2020-01-09 11:05:56.935469: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5569f40357d0 executing computations on platform Host. Devices:
2020-01-09 11:05:56.935514: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): <undefined>, <undefined>
Traceback (most recent call last):
  File "/home/ridlr/anaconda3/bin/tflite_convert", line 10, in <module>
    sys.exit(main())
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/lite/python/tflite_convert.py", line 503, in main
    app.run(main=run_main, argv=sys.argv[:1])
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/absl/app.py", line 299, in run
    _run_main(main, args)
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/absl/app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/lite/python/tflite_convert.py", line 499, in run_main
    _convert_tf1_model(tflite_flags)
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/lite/python/tflite_convert.py", line 193, in _convert_tf1_model
    output_data = converter.convert()
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/lite/python/lite.py", line 904, in convert
    **converter_kwargs)
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/lite/python/convert.py", line 373, in toco_convert_graph_def
    input_data.SerializeToString())
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/lite/python/convert.py", line 172, in toco_convert_protos
    "TOCO failed. See console for info.\n%s\n%s\n" % (stdout, stderr))
tensorflow.lite.python.convert.ConverterError: TOCO failed. See console for info.
2020-01-09 11:05:58.375534: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: TFLite_Detection_PostProcess
2020-01-09 11:05:58.456167: F tensorflow/lite/toco/tooling_util.cc:918] Check failed: GetOpWithOutput(model, output_array) Specified output array "detection_boxes" is not produced by any op in this graph. Is it a typo? This should not happen. If you trigger this error please send a bug report (with code to reporduce this error), to the TensorFlow Lite team.
Fatal Python error: Aborted

Current thread 0x00007f347d2e6740 (most recent call first):
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/lite/toco/python/toco_from_protos.py", line 33 in execute
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/absl/app.py", line 250 in _run_main
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/absl/app.py", line 299 in run
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/python/platform/app.py", line 40 in run
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/lite/toco/python/toco_from_protos.py", line 59 in main
  File "/home/ridlr/anaconda3/bin/toco_from_protos", line 10 in <module>
Aborted (core dumped)

How do I determine the correct value of --output_arrays?

Any other info / logs
If I use the flag --inference_type=QUANTIZED_UINT8, how do I determine the values of the following flags: --std_dev_values, --mean_values, --default_ranges_min, and --default_ranges_max?
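For reference, the TOCO quantization flags follow a documented convention: the real input value is recovered from a quantized byte as (quantized_value - mean_value) / std_dev_value. A minimal sketch of that mapping; the flag values shown are the commonly documented ones for inputs normalized to [-1, 1) and [0, 1], not values taken from this model:

```python
def dequantize(q, mean_value, std_dev_value):
    """TOCO convention: real_value = (quantized_value - mean_value) / std_dev_value."""
    return (q - mean_value) / std_dev_value

# Inputs normalized to roughly [-1, 1) correspond to --mean_values=128 --std_dev_values=128.
print(dequantize(0, 128, 128))    # -1.0
print(dequantize(255, 128, 128))  # 0.9921875
# Inputs normalized to [0, 1] correspond to --mean_values=0 --std_dev_values=255.
print(dequantize(255, 0, 255))    # 1.0
```

The right choice therefore depends on how the training pipeline preprocessed images; --default_ranges_min/--default_ranges_max are only fallbacks for ops that lack recorded min/max and are best avoided for fully quantization-aware-trained models.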

closed time in 2 months

batulrangwala

issue closed: tensorflow/tensorflow

Frozen Graph generation warning leads to error in running the model

System information

  • OS Platform and Distribution:Linux Ubuntu 18.04
  • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:
  • TensorFlow installed from (source or binary): binary
  • TensorFlow version: tensorflow-gpu 1.15.0
  • Python version: 3.6.10
  • Installed using virtualenv? pip? conda?: pip
  • Bazel version (if compiling from source): 0.26.1
  • GCC/Compiler version (if compiling from source): 7.4
  • CUDA/cuDNN version: 10.2
  • GPU model and memory: GEFORCE GTX 960M - 16GB

Describe the problem
The aim is to convert .ckpt and .config files to .tflite for the ssd_mobilenet_v2_quantized_coco model. The following message is seen when converting from .ckpt and .config files to .pb:
INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/Conv/add_fold

Provide the exact sequence of commands / steps that you executed before running into the problem

Following is the sequence of commands

(py36_tf_gpu) ridlr@ridlr107:~/TensorFlow/models-master/research/object_detection$ python export_tflite_ssd_graph.py --pipeline_config_path /home/ridlr/TensorFlow/base_model/ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03/pipeline.config --trained_checkpoint_prefix /home/ridlr/TensorFlow/base_model/ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03/model.ckpt --output_directory /home/ridlr/TensorFlow/base_model/ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03/tflite_graph_export_tflite/

The above command generated a tflite_graph.pb file

tflite_convert --output_file tflite_graph.tflite --graph_def_file tflite_graph.pb --input_arrays normalized_input_image_tensor --output_arrays TFLite_Detection_PostProcess --input_shapes 1,300,300,3 --inference_type QUANTIZED_UINT8 --std_dev_values 0 --mean_values 1 --default_ranges_min 0 --default_ranges_max 6 --allow_custom_ops

The above command generated a tflite_graph.tflite file.

(py36_tf_gpu) ridlr@ridlr107:~/TensorFlow/base_model/ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03/tflite_graph_export_tflite$ edgetpu_compiler tflite_graph.tflite 
Edge TPU Compiler version 2.0.267685300
ERROR: :106 std::abs(input_product_scale - bias_scale) <= 1e-6 * std::min(input_product_scale, bias_scale) was not true.
ERROR: Node number 0 (CONV_2D) failed to prepare.


Model compiled successfully in 11 ms.

Input model: tflite_graph.tflite
Input size: 5.89MiB
Output model: tflite_graph_edgetpu.tflite
Output size: 5.88MiB
On-chip memory available for caching model parameters: 0.00B
On-chip memory used for caching model parameters: 0.00B
Off-chip memory used for streaming uncached model parameters: 0.00B
Number of Edge TPU subgraphs: 0
Total number of operations: 0
Operation log: tflite_graph_edgetpu.log
See the operation log file for individual operation details.

The above command generated a tflite_graph_edgetpu.tflite file in spite of the error. When I run this model on the Coral, I get the following error:

INFO: Initialized TensorFlow Lite runtime.
Traceback (most recent call last):
  File "detect_image.py", line 124, in <module>
    main()
  File "detect_image.py", line 91, in main
    interpreter.allocate_tensors()
  File "/home/ankit/anaconda3/envs/py35/lib/python3.5/site-packages/tflite_runtime/interpreter.py", line 244, in allocate_tensors
    return self._interpreter.AllocateTensors()
  File "/home/ankit/anaconda3/envs/py35/lib/python3.5/site-packages/tflite_runtime/interpreter_wrapper.py", line 114, in AllocateTensors
    return _interpreter_wrapper.InterpreterWrapper_AllocateTensors(self)
RuntimeError: tensorflow/lite/kernels/kernel_util.cc:119 std::abs(input_product_scale - bias_scale) <= 1e-6 * std::min(input_product_scale, bias_scale) was not true.Node number 0 (CONV_2D) failed to prepare.
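For what it's worth, the check that fails here is a TFLite kernel invariant: a quantized CONV_2D's bias scale must (nearly) equal input_scale × weight_scale, and a model whose scales violate it fails at AllocateTensors. A minimal pure-Python sketch of the same check; the scale values below are made up for illustration, not taken from this model:

```python
def bias_scale_ok(input_scale, weight_scale, bias_scale, tol=1e-6):
    # Mirrors kernel_util.cc: |input_scale * weight_scale - bias_scale|
    # must be <= tol * min(input_scale * weight_scale, bias_scale).
    input_product_scale = input_scale * weight_scale
    return abs(input_product_scale - bias_scale) <= tol * min(input_product_scale, bias_scale)

print(bias_scale_ok(0.5, 0.02, 0.01))  # True: 0.5 * 0.02 == 0.01
print(bias_scale_ok(0.5, 0.02, 0.02))  # False: inconsistent scales
```

Passing --std_dev_values 0 to tflite_convert (as in the command above) is a likely source of such inconsistent scales, since the input scale is derived from that value.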

I suspect that a warning/info message displayed when generating the tflite_graph.pb file is leading to the above error. The build log is below. The line from the log that concerns me is:
INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/Conv/add_fold
How can I resolve this warning?

(py36_tf_gpu) ridlr@ridlr107:~/TensorFlow/models-master/research/object_detection$ python export_tflite_ssd_graph.py --pipeline_config_path /home/ridlr/TensorFlow/base_model/ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03/pipeline.config --trained_checkpoint_prefix /home/ridlr/TensorFlow/base_model/ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03/model.ckpt --output_directory /home/ridlr/TensorFlow/base_model/ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03/tflite_graph_export_tflite/
WARNING:tensorflow:
The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
  * https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
  * https://github.com/tensorflow/addons
  * https://github.com/tensorflow/io (for I/O related ops)
If you depend on functionality not listed there, please file an issue.

WARNING:tensorflow:From /home/ridlr/TensorFlow/models-master/research/slim/nets/inception_resnet_v2.py:374: The name tf.GraphKeys is deprecated. Please use tf.compat.v1.GraphKeys instead.

WARNING:tensorflow:From /home/ridlr/TensorFlow/models-master/research/slim/nets/mobilenet/mobilenet.py:397: The name tf.nn.avg_pool is deprecated. Please use tf.nn.avg_pool2d instead.

WARNING:tensorflow:From export_tflite_ssd_graph.py:143: The name tf.app.run is deprecated. Please use tf.compat.v1.app.run instead.

WARNING:tensorflow:From export_tflite_ssd_graph.py:133: The name tf.gfile.GFile is deprecated. Please use tf.io.gfile.GFile instead.

W0121 12:09:58.011585 140284674922304 module_wrapper.py:139] From export_tflite_ssd_graph.py:133: The name tf.gfile.GFile is deprecated. Please use tf.io.gfile.GFile instead.

WARNING:tensorflow:From /home/ridlr/TensorFlow/models-master/research/object_detection/export_tflite_ssd_graph_lib.py:193: The name tf.gfile.MakeDirs is deprecated. Please use tf.io.gfile.makedirs instead.

W0121 12:09:58.016252 140284674922304 module_wrapper.py:139] From /home/ridlr/TensorFlow/models-master/research/object_detection/export_tflite_ssd_graph_lib.py:193: The name tf.gfile.MakeDirs is deprecated. Please use tf.io.gfile.makedirs instead.

WARNING:tensorflow:From /home/ridlr/TensorFlow/models-master/research/object_detection/export_tflite_ssd_graph_lib.py:237: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.

W0121 12:09:58.016631 140284674922304 module_wrapper.py:139] From /home/ridlr/TensorFlow/models-master/research/object_detection/export_tflite_ssd_graph_lib.py:237: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.

batulrangwala

issue comment tensorflow/tensorflow

Frozen Graph generation warning lead to error in running the model

Using a Docker container resolved this issue. The container is provided by Coral and has all the necessary environment set up. Docker link: https://coral.ai/docs/edgetpu/retrain-detection/#set-up-the-docker-container

batulrangwala

comment created time in 2 months

issue opened tensorflow/tensorflow

Frozen Graph generation warning lead to error in running the model

System information

  • OS Platform and Distribution: Linux Ubuntu 18.04
  • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:
  • TensorFlow installed from (source or binary): binary
  • TensorFlow version: tensorflow-gpu 1.15.0
  • Python version: 3.6.10
  • Installed using virtualenv? pip? conda?: pip
  • Bazel version (if compiling from source): 0.26.1
  • GCC/Compiler version (if compiling from source): 7.4
  • CUDA/cuDNN version: 10.2
  • GPU model and memory: GeForce GTX 960M - 16GB

Describe the problem

The aim is to convert .ckpt and .config files to .tflite for the ssd_mobilenet_v2_quantized_coco model. The following message is seen when converting the .ckpt and .config files to .pb:

INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/Conv/add_fold

Provide the exact sequence of commands / steps that you executed before running into the problem

The following is the sequence of commands:

(py36_tf_gpu) ridlr@ridlr107:~/TensorFlow/models-master/research/object_detection$ python export_tflite_ssd_graph.py --pipeline_config_path /home/ridlr/TensorFlow/base_model/ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03/pipeline.config --trained_checkpoint_prefix /home/ridlr/TensorFlow/base_model/ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03/model.ckpt --output_directory /home/ridlr/TensorFlow/base_model/ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03/tflite_graph_export_tflite/

The above command generated a tflite_graph.pb file.

tflite_convert --output_file tflite_graph.tflite --graph_def_file tflite_graph.pb --input_arrays normalized_input_image_tensor --output_arrays TFLite_Detection_PostProcess --input_shapes 1,300,300,3 --inference_type QUANTIZED_UINT8 --std_dev_values 0 --mean_values 1 --default_ranges_min 0 --default_ranges_max 6 --allow_custom_ops

The above command generated a tflite_graph.tflite file.
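As a side note (hedged, since the root cause was not confirmed here): tflite_convert interprets `--mean_values` and `--std_dev_values` as the uint8 input mapping `real_value = (quantized_value - mean_value) / std_dev_value`. The values `--std_dev_values 0 --mean_values 1` in the command above do not correspond to any usual normalization (a std_dev of 0 would even make the mapping degenerate); tutorials for these SSD models typically pass 128/128 so that uint8 inputs map to roughly [-1, 1]. A minimal sketch of the mapping, with illustrative values only:

```python
# Sketch of the uint8 input mapping tflite_convert assumes:
#   real_value = (quantized_value - mean_value) / std_dev_value
# mean_value/std_dev_value correspond to the --mean_values/--std_dev_values flags.

def dequantize(q, mean_value, std_dev_value):
    """Map a uint8 value back to the real-valued input the model expects."""
    return (q - mean_value) / std_dev_value

# With the commonly used --mean_values 128 --std_dev_values 128,
# the uint8 range [0, 255] maps to approximately [-1, 1]:
lo = dequantize(0, 128, 128)    # -1.0
hi = dequantize(255, 128, 128)  # 0.9921875
mid = dequantize(128, 128, 128) # 0.0
```

If the flags do not describe the normalization the model was actually trained with, the converter can emit tensor scales that are inconsistent with the recorded bias scales.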

(py36_tf_gpu) ridlr@ridlr107:~/TensorFlow/base_model/ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03/tflite_graph_export_tflite$ edgetpu_compiler tflite_graph.tflite 
Edge TPU Compiler version 2.0.267685300
ERROR: :106 std::abs(input_product_scale - bias_scale) <= 1e-6 * std::min(input_product_scale, bias_scale) was not true.
ERROR: Node number 0 (CONV_2D) failed to prepare.


Model compiled successfully in 11 ms.

Input model: tflite_graph.tflite
Input size: 5.89MiB
Output model: tflite_graph_edgetpu.tflite
Output size: 5.88MiB
On-chip memory available for caching model parameters: 0.00B
On-chip memory used for caching model parameters: 0.00B
Off-chip memory used for streaming uncached model parameters: 0.00B
Number of Edge TPU subgraphs: 0
Total number of operations: 0
Operation log: tflite_graph_edgetpu.log
See the operation log file for individual operation details.

The above command generated a tflite_graph_edgetpu.tflite file in spite of the error. When I run this model on the Coral, I get the following error:

INFO: Initialized TensorFlow Lite runtime.
Traceback (most recent call last):
  File "detect_image.py", line 124, in <module>
    main()
  File "detect_image.py", line 91, in main
    interpreter.allocate_tensors()
  File "/home/ankit/anaconda3/envs/py35/lib/python3.5/site-packages/tflite_runtime/interpreter.py", line 244, in allocate_tensors
    return self._interpreter.AllocateTensors()
  File "/home/ankit/anaconda3/envs/py35/lib/python3.5/site-packages/tflite_runtime/interpreter_wrapper.py", line 114, in AllocateTensors
    return _interpreter_wrapper.InterpreterWrapper_AllocateTensors(self)
RuntimeError: tensorflow/lite/kernels/kernel_util.cc:119 std::abs(input_product_scale - bias_scale) <= 1e-6 * std::min(input_product_scale, bias_scale) was not true.Node number 0 (CONV_2D) failed to prepare.

I suspect that a warning/info message displayed while generating the tflite_graph.pb file is leading to the above error. The build log is below. The line from the log that concerns me is:

INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/Conv/add_fold

How can I resolve this warning?
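For context on the error itself: the failing check quoted in the traceback (kernel_util.cc) enforces that a quantized conv layer's bias scale equals input_scale × weight_scale, within a small relative tolerance. A minimal Python sketch of that invariant (the scale values below are illustrative, not taken from the actual model):

```python
def bias_scale_consistent(input_scale, weight_scale, bias_scale, rel_tol=1e-6):
    """Mirror of the TFLite kernel check:
    |input_scale * weight_scale - bias_scale|
        <= rel_tol * min(input_scale * weight_scale, bias_scale)
    """
    input_product_scale = input_scale * weight_scale
    return abs(input_product_scale - bias_scale) <= rel_tol * min(
        input_product_scale, bias_scale
    )

# A properly quantized conv layer satisfies the invariant exactly:
ok = bias_scale_consistent(0.0078125, 0.02, 0.0078125 * 0.02)   # True
# A bias scale that was not derived from the input and weight scales
# (e.g. produced by default-range "dummy" quantization) fails it:
bad = bias_scale_consistent(0.0078125, 0.02, 0.001)             # False
```

This is why a model exported with inconsistent quantization parameters can pass conversion yet fail at `allocate_tensors()` time.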

(py36_tf_gpu) ridlr@ridlr107:~/TensorFlow/models-master/research/object_detection$ python export_tflite_ssd_graph.py --pipeline_config_path /home/ridlr/TensorFlow/base_model/ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03/pipeline.config --trained_checkpoint_prefix /home/ridlr/TensorFlow/base_model/ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03/model.ckpt --output_directory /home/ridlr/TensorFlow/base_model/ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03/tflite_graph_export_tflite/
WARNING:tensorflow:
The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
  * https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
  * https://github.com/tensorflow/addons
  * https://github.com/tensorflow/io (for I/O related ops)
If you depend on functionality not listed there, please file an issue.

WARNING:tensorflow:From /home/ridlr/TensorFlow/models-master/research/slim/nets/inception_resnet_v2.py:374: The name tf.GraphKeys is deprecated. Please use tf.compat.v1.GraphKeys instead.

WARNING:tensorflow:From /home/ridlr/TensorFlow/models-master/research/slim/nets/mobilenet/mobilenet.py:397: The name tf.nn.avg_pool is deprecated. Please use tf.nn.avg_pool2d instead.

WARNING:tensorflow:From export_tflite_ssd_graph.py:143: The name tf.app.run is deprecated. Please use tf.compat.v1.app.run instead.

WARNING:tensorflow:From export_tflite_ssd_graph.py:133: The name tf.gfile.GFile is deprecated. Please use tf.io.gfile.GFile instead.

W0121 12:09:58.011585 140284674922304 module_wrapper.py:139] From export_tflite_ssd_graph.py:133: The name tf.gfile.GFile is deprecated. Please use tf.io.gfile.GFile instead.

WARNING:tensorflow:From /home/ridlr/TensorFlow/models-master/research/object_detection/export_tflite_ssd_graph_lib.py:193: The name tf.gfile.MakeDirs is deprecated. Please use tf.io.gfile.makedirs instead.

W0121 12:09:58.016252 140284674922304 module_wrapper.py:139] From /home/ridlr/TensorFlow/models-master/research/object_detection/export_tflite_ssd_graph_lib.py:193: The name tf.gfile.MakeDirs is deprecated. Please use tf.io.gfile.makedirs instead.

WARNING:tensorflow:From /home/ridlr/TensorFlow/models-master/research/object_detection/export_tflite_ssd_graph_lib.py:237: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.

W0121 12:09:58.016631 140284674922304 module_wrapper.py:139] From /home/ridlr/TensorFlow/models-master/research/object_detection/export_tflite_ssd_graph_lib.py:237: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.

WARNING:tensorflow:From /home/ridlr/TensorFlow/models-master/research/object_detection/meta_architectures/ssd_meta_arch.py:597: The name tf.variable_scope is deprecated. Please use tf.compat.v1.variable_scope instead.

W0121 12:09:58.020018 140284674922304 module_wrapper.py:139] From /home/ridlr/TensorFlow/models-master/research/object_detection/meta_architectures/ssd_meta_arch.py:597: The name tf.variable_scope is deprecated. Please use tf.compat.v1.variable_scope instead.

WARNING:tensorflow:From /home/ridlr/anaconda3/envs/py36_tf_gpu/lib/python3.6/site-packages/tensorflow_core/contrib/layers/python/layers/layers.py:1057: Layer.apply (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.
Instructions for updating:
Please use `layer.__call__` method instead.
W0121 12:09:58.022555 140284674922304 deprecation.py:323] From /home/ridlr/anaconda3/envs/py36_tf_gpu/lib/python3.6/site-packages/tensorflow_core/contrib/layers/python/layers/layers.py:1057: Layer.apply (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.
Instructions for updating:
Please use `layer.__call__` method instead.
WARNING:tensorflow:From /home/ridlr/TensorFlow/models-master/research/object_detection/core/anchor_generator.py:171: The name tf.assert_equal is deprecated. Please use tf.compat.v1.assert_equal instead.

W0121 12:09:59.660506 140284674922304 module_wrapper.py:139] From /home/ridlr/TensorFlow/models-master/research/object_detection/core/anchor_generator.py:171: The name tf.assert_equal is deprecated. Please use tf.compat.v1.assert_equal instead.

WARNING:tensorflow:From /home/ridlr/TensorFlow/models-master/research/object_detection/predictors/convolutional_box_predictor.py:150: The name tf.logging.info is deprecated. Please use tf.compat.v1.logging.info instead.

W0121 12:09:59.668454 140284674922304 module_wrapper.py:139] From /home/ridlr/TensorFlow/models-master/research/object_detection/predictors/convolutional_box_predictor.py:150: The name tf.logging.info is deprecated. Please use tf.compat.v1.logging.info instead.

INFO:tensorflow:depth of additional conv before box predictor: 0
I0121 12:09:59.668638 140284674922304 convolutional_box_predictor.py:151] depth of additional conv before box predictor: 0
INFO:tensorflow:depth of additional conv before box predictor: 0
I0121 12:09:59.696758 140284674922304 convolutional_box_predictor.py:151] depth of additional conv before box predictor: 0
INFO:tensorflow:depth of additional conv before box predictor: 0
I0121 12:09:59.722609 140284674922304 convolutional_box_predictor.py:151] depth of additional conv before box predictor: 0
INFO:tensorflow:depth of additional conv before box predictor: 0
I0121 12:09:59.747994 140284674922304 convolutional_box_predictor.py:151] depth of additional conv before box predictor: 0
INFO:tensorflow:depth of additional conv before box predictor: 0
I0121 12:09:59.774057 140284674922304 convolutional_box_predictor.py:151] depth of additional conv before box predictor: 0
INFO:tensorflow:depth of additional conv before box predictor: 0
I0121 12:09:59.801787 140284674922304 convolutional_box_predictor.py:151] depth of additional conv before box predictor: 0
WARNING:tensorflow:From /home/ridlr/TensorFlow/models-master/research/object_detection/export_tflite_ssd_graph_lib.py:52: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.

W0121 12:09:59.837386 140284674922304 module_wrapper.py:139] From /home/ridlr/TensorFlow/models-master/research/object_detection/export_tflite_ssd_graph_lib.py:52: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.

2020-01-21 12:09:59.838240: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2020-01-21 12:09:59.847343: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-01-21 12:09:59.847611: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: 
name: GeForce GTX 960M major: 5 minor: 0 memoryClockRate(GHz): 1.176
pciBusID: 0000:01:00.0
2020-01-21 12:09:59.847750: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcudart.so.10.0'; dlerror: libcudart.so.10.0: cannot open shared object file: No such file or directory
2020-01-21 12:09:59.847852: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcublas.so.10.0'; dlerror: libcublas.so.10.0: cannot open shared object file: No such file or directory
2020-01-21 12:09:59.847962: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcufft.so.10.0'; dlerror: libcufft.so.10.0: cannot open shared object file: No such file or directory
2020-01-21 12:09:59.848044: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcurand.so.10.0'; dlerror: libcurand.so.10.0: cannot open shared object file: No such file or directory
2020-01-21 12:09:59.848109: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcusolver.so.10.0'; dlerror: libcusolver.so.10.0: cannot open shared object file: No such file or directory
2020-01-21 12:09:59.848188: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcusparse.so.10.0'; dlerror: libcusparse.so.10.0: cannot open shared object file: No such file or directory
2020-01-21 12:09:59.851251: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-01-21 12:09:59.851275: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1641] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
2020-01-21 12:09:59.851543: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-01-21 12:09:59.875285: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2808000000 Hz
2020-01-21 12:09:59.876004: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x559d0e122aa0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-01-21 12:09:59.876041: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2020-01-21 12:09:59.903778: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-01-21 12:09:59.904104: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x559d0e124900 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-01-21 12:09:59.904122: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): GeForce GTX 960M, Compute Capability 5.0
2020-01-21 12:09:59.904227: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-01-21 12:09:59.904235: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165]      
WARNING:tensorflow:From /home/ridlr/TensorFlow/models-master/research/object_detection/export_tflite_ssd_graph_lib.py:267: The name tf.train.get_or_create_global_step is deprecated. Please use tf.compat.v1.train.get_or_create_global_step instead.

W0121 12:10:00.032385 140284674922304 module_wrapper.py:139] From /home/ridlr/TensorFlow/models-master/research/object_detection/export_tflite_ssd_graph_lib.py:267: The name tf.train.get_or_create_global_step is deprecated. Please use tf.compat.v1.train.get_or_create_global_step instead.

WARNING:tensorflow:From /home/ridlr/TensorFlow/models-master/research/object_detection/builders/graph_rewriter_builder.py:41: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.

W0121 12:10:00.034862 140284674922304 module_wrapper.py:139] From /home/ridlr/TensorFlow/models-master/research/object_detection/builders/graph_rewriter_builder.py:41: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.

INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/Conv/add_fold
I0121 12:10:01.108431 140284674922304 quantize.py:299] Skipping quant after FeatureExtractor/MobilenetV2/Conv/add_fold
INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv/depthwise/add_fold
I0121 12:10:01.108743 140284674922304 quantize.py:299] Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv/depthwise/add_fold
INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_1/expand/add_fold
I0121 12:10:01.108995 140284674922304 quantize.py:299] Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_1/expand/add_fold
INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_1/depthwise/add_fold
I0121 12:10:01.109181 140284674922304 quantize.py:299] Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_1/depthwise/add_fold
INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_2/expand/add_fold
I0121 12:10:01.109415 140284674922304 quantize.py:299] Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_2/expand/add_fold
INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_2/depthwise/add_fold
I0121 12:10:01.109586 140284674922304 quantize.py:299] Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_2/depthwise/add_fold
INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_3/expand/add_fold
I0121 12:10:01.109818 140284674922304 quantize.py:299] Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_3/expand/add_fold
INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_3/depthwise/add_fold
I0121 12:10:01.109977 140284674922304 quantize.py:299] Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_3/depthwise/add_fold
INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_4/expand/add_fold
I0121 12:10:01.110195 140284674922304 quantize.py:299] Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_4/expand/add_fold
INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_4/depthwise/add_fold
I0121 12:10:01.110363 140284674922304 quantize.py:299] Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_4/depthwise/add_fold
INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_5/expand/add_fold
I0121 12:10:01.110591 140284674922304 quantize.py:299] Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_5/expand/add_fold
INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_5/depthwise/add_fold
I0121 12:10:01.110761 140284674922304 quantize.py:299] Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_5/depthwise/add_fold
INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_6/expand/add_fold
I0121 12:10:01.111005 140284674922304 quantize.py:299] Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_6/expand/add_fold
INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_6/depthwise/add_fold
I0121 12:10:01.111187 140284674922304 quantize.py:299] Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_6/depthwise/add_fold
INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_7/expand/add_fold
I0121 12:10:01.111420 140284674922304 quantize.py:299] Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_7/expand/add_fold
INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_7/depthwise/add_fold
I0121 12:10:01.111594 140284674922304 quantize.py:299] Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_7/depthwise/add_fold
INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_8/expand/add_fold
I0121 12:10:01.111815 140284674922304 quantize.py:299] Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_8/expand/add_fold
INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_8/depthwise/add_fold
I0121 12:10:01.112004 140284674922304 quantize.py:299] Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_8/depthwise/add_fold
INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_9/expand/add_fold
I0121 12:10:01.112233 140284674922304 quantize.py:299] Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_9/expand/add_fold
INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_9/depthwise/add_fold
I0121 12:10:01.112395 140284674922304 quantize.py:299] Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_9/depthwise/add_fold
INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_10/expand/add_fold
I0121 12:10:01.112610 140284674922304 quantize.py:299] Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_10/expand/add_fold
INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_10/depthwise/add_fold
I0121 12:10:01.112753 140284674922304 quantize.py:299] Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_10/depthwise/add_fold
INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_11/expand/add_fold
I0121 12:10:01.112978 140284674922304 quantize.py:299] Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_11/expand/add_fold
INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_11/depthwise/add_fold
I0121 12:10:01.113132 140284674922304 quantize.py:299] Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_11/depthwise/add_fold
INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_12/expand/add_fold
I0121 12:10:01.113349 140284674922304 quantize.py:299] Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_12/expand/add_fold
INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_12/depthwise/add_fold
I0121 12:10:01.113510 140284674922304 quantize.py:299] Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_12/depthwise/add_fold
INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_13/expand/add_fold
I0121 12:10:01.113732 140284674922304 quantize.py:299] Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_13/expand/add_fold
INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_13/depthwise/add_fold
I0121 12:10:01.113906 140284674922304 quantize.py:299] Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_13/depthwise/add_fold
INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_14/expand/add_fold
I0121 12:10:01.114142 140284674922304 quantize.py:299] Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_14/expand/add_fold
INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_14/depthwise/add_fold
I0121 12:10:01.114297 140284674922304 quantize.py:299] Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_14/depthwise/add_fold
INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_15/expand/add_fold
I0121 12:10:01.114525 140284674922304 quantize.py:299] Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_15/expand/add_fold
INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_15/depthwise/add_fold
I0121 12:10:01.114687 140284674922304 quantize.py:299] Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_15/depthwise/add_fold
INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_16/expand/add_fold
I0121 12:10:01.114912 140284674922304 quantize.py:299] Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_16/expand/add_fold
INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_16/depthwise/add_fold
I0121 12:10:01.115073 140284674922304 quantize.py:299] Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv_16/depthwise/add_fold
INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/Conv_1/add_fold
I0121 12:10:01.115322 140284674922304 quantize.py:299] Skipping quant after FeatureExtractor/MobilenetV2/Conv_1/add_fold
INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/layer_19_1_Conv2d_2_1x1_256/add_fold
I0121 12:10:01.115458 140284674922304 quantize.py:299] Skipping quant after FeatureExtractor/MobilenetV2/layer_19_1_Conv2d_2_1x1_256/add_fold
INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/layer_19_2_Conv2d_2_3x3_s2_512/add_fold
I0121 12:10:01.115600 140284674922304 quantize.py:299] Skipping quant after FeatureExtractor/MobilenetV2/layer_19_2_Conv2d_2_3x3_s2_512/add_fold
INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/layer_19_1_Conv2d_3_1x1_128/add_fold
I0121 12:10:01.115738 140284674922304 quantize.py:299] Skipping quant after FeatureExtractor/MobilenetV2/layer_19_1_Conv2d_3_1x1_128/add_fold
INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/layer_19_2_Conv2d_3_3x3_s2_256/add_fold
I0121 12:10:01.115870 140284674922304 quantize.py:299] Skipping quant after FeatureExtractor/MobilenetV2/layer_19_2_Conv2d_3_3x3_s2_256/add_fold
INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/layer_19_1_Conv2d_4_1x1_128/add_fold
I0121 12:10:01.116004 140284674922304 quantize.py:299] Skipping quant after FeatureExtractor/MobilenetV2/layer_19_1_Conv2d_4_1x1_128/add_fold
INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/layer_19_2_Conv2d_4_3x3_s2_256/add_fold
I0121 12:10:01.116144 140284674922304 quantize.py:299] Skipping quant after FeatureExtractor/MobilenetV2/layer_19_2_Conv2d_4_3x3_s2_256/add_fold
INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/layer_19_1_Conv2d_5_1x1_64/add_fold
I0121 12:10:01.116278 140284674922304 quantize.py:299] Skipping quant after FeatureExtractor/MobilenetV2/layer_19_1_Conv2d_5_1x1_64/add_fold
INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/layer_19_2_Conv2d_5_3x3_s2_128/add_fold
I0121 12:10:01.116429 140284674922304 quantize.py:299] Skipping quant after FeatureExtractor/MobilenetV2/layer_19_2_Conv2d_5_3x3_s2_128/add_fold
2020-01-21 12:10:01.121862: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-01-21 12:10:01.121904: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165]      
WARNING:tensorflow:From /home/ridlr/TensorFlow/models-master/research/object_detection/exporter.py:111: The name tf.train.Saver is deprecated. Please use tf.compat.v1.train.Saver instead.

W0121 12:10:01.122092 140284674922304 module_wrapper.py:139] From /home/ridlr/TensorFlow/models-master/research/object_detection/exporter.py:111: The name tf.train.Saver is deprecated. Please use tf.compat.v1.train.Saver instead.

INFO:tensorflow:Restoring parameters from /home/ridlr/TensorFlow/base_model/ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03/model.ckpt
I0121 12:10:01.465342 140284674922304 saver.py:1284] Restoring parameters from /home/ridlr/TensorFlow/base_model/ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03/model.ckpt
WARNING:tensorflow:From /home/ridlr/anaconda3/envs/py36_tf_gpu/lib/python3.6/site-packages/tensorflow_core/python/tools/freeze_graph.py:127: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
W0121 12:10:03.793860 140284674922304 deprecation.py:323] From /home/ridlr/anaconda3/envs/py36_tf_gpu/lib/python3.6/site-packages/tensorflow_core/python/tools/freeze_graph.py:127: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
2020-01-21 12:10:04.520189: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-01-21 12:10:04.520234: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165]      
INFO:tensorflow:Restoring parameters from /tmp/tmpkmw_fvex
I0121 12:10:04.521256 140284674922304 saver.py:1284] Restoring parameters from /tmp/tmpkmw_fvex
WARNING:tensorflow:From /home/ridlr/anaconda3/envs/py36_tf_gpu/lib/python3.6/site-packages/tensorflow_core/python/tools/freeze_graph.py:233: convert_variables_to_constants (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.compat.v1.graph_util.convert_variables_to_constants`
W0121 12:10:05.583929 140284674922304 deprecation.py:323] From /home/ridlr/anaconda3/envs/py36_tf_gpu/lib/python3.6/site-packages/tensorflow_core/python/tools/freeze_graph.py:233: convert_variables_to_constants (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.compat.v1.graph_util.convert_variables_to_constants`
WARNING:tensorflow:From /home/ridlr/anaconda3/envs/py36_tf_gpu/lib/python3.6/site-packages/tensorflow_core/python/framework/graph_util_impl.py:277: extract_sub_graph (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.compat.v1.graph_util.extract_sub_graph`
W0121 12:10:05.584106 140284674922304 deprecation.py:323] From /home/ridlr/anaconda3/envs/py36_tf_gpu/lib/python3.6/site-packages/tensorflow_core/python/framework/graph_util_impl.py:277: extract_sub_graph (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.compat.v1.graph_util.extract_sub_graph`
INFO:tensorflow:Froze 632 variables.
I0121 12:10:06.121339 140284674922304 graph_util_impl.py:334] Froze 632 variables.
INFO:tensorflow:Converted 632 variables to const ops.
I0121 12:10:06.188232 140284674922304 graph_util_impl.py:394] Converted 632 variables to const ops.
2020-01-21 12:10:06.305956: I tensorflow/tools/graph_transforms/transform_graph.cc:317] Applying strip_unused_nodes


issue commenttensorflow/tensorflow

2020-01-09 12:25:17.491189: F tensorflow/lite/toco/graph_transformations/quantize.cc:611] Check failed: is_rnn_state_array

The input and output arrays are given by

(tf_gpu_clone) ridlr@ridlr107:~/TensorFlow/base_model/ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03/tflite_graph_export_tflite$ /home/ridlr/tensorflow/bazel-bin/tensorflow/tools/graph_transforms/summarize_graph --in_graph=/home/ridlr/TensorFlow/base_model/ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03/tflite_graph_export_tflite/tflite_graph.pb --print_structure=true
Found 1 possible inputs: (name=normalized_input_image_tensor, type=float(1), shape=[1,300,300,3]) 
No variables spotted.
Found 1 possible outputs: (name=TFLite_Detection_PostProcess, op=TFLite_Detection_PostProcess) 
Found 6112114 (6.11M) const parameters, 0 (0) variable parameters, and 0 control_edges
Op types used: 725 Const, 719 Identity, 180 Mul, 154 FakeQuantWithMinMaxVars, 130 Add, 60 Sub, 60 Rsqrt, 55 Conv2D, 43 Relu6, 29 Reshape, 17 DepthwiseConv2dNative, 12 BiasAdd, 2 ConcatV2, 1 TFLite_Detection_PostProcess, 1 Squeeze, 1 Sigmoid, 1 RealDiv, 1 Placeholder
To use with tensorflow/tools/benchmark:benchmark_model try these arguments:
bazel run tensorflow/tools/benchmark:benchmark_model -- --graph=/home/ridlr/TensorFlow/base_model/ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03/tflite_graph_export_tflite/tflite_graph.pb --show_flops --input_layer=normalized_input_image_tensor --input_layer_type=float --input_layer_shape=1,300,300,3 --output_layer=TFLite_Detection_PostProcess
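As an aside (not part of the original logs): the quantization flags passed to `tflite_convert` in this thread (`--mean_values 128 --std_dev_values 127 --default_ranges_min 0 --default_ranges_max 6`) encode a fixed real-to-uint8 mapping. A minimal pure-Python sketch of that mapping, assuming TOCO's documented convention `real = (quantized - mean_value) / std_dev_value`:

```python
# Hedged sketch, not from the original logs: illustrates what the
# QUANTIZED_UINT8 flags mean. real = (quantized - mean_value) / std_dev_value,
# and --default_ranges_min/max supply a fallback range for ops that have no
# FakeQuant statistics. Flag values mirror the tflite_convert command in
# this thread (mean=128, std_dev=127, default range 0..6).

def dequantize(q, mean_value=128.0, std_dev_value=127.0):
    """Recover the real value represented by a uint8 quantized value q."""
    return (q - mean_value) / std_dev_value

def default_range_params(range_min=0.0, range_max=6.0, num_levels=255):
    """Scale/zero-point pair implied by --default_ranges_min/max for uint8."""
    scale = (range_max - range_min) / num_levels
    zero_point = round(-range_min / scale)
    return scale, zero_point

if __name__ == "__main__":
    print(dequantize(128))         # mid-scale uint8 maps back to 0.0
    print(default_range_params())  # scale of 6/255, zero point of 0
```

With these flags every op lacking recorded min/max gets the 0..6 range (the Relu6 output range), which is why a model trained with quantization-aware FakeQuant nodes is normally expected before this conversion step.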

issue commenttensorflow/tensorflow

2020-01-09 12:25:17.491189: F tensorflow/lite/toco/graph_transformations/quantize.cc:611] Check failed: is_rnn_state_array

I have tried to convert the base model ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03. I successfully converted this model to a frozen graph as follows:

python object_detection/export_tflite_ssd_graph.py --pipeline_config_path /home/ridlr/TensorFlow/base_model/ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03/pipeline.config --trained_checkpoint_prefix /home/ridlr/TensorFlow/base_model/ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03/model.ckpt --output_directory /home/ridlr/TensorFlow/base_model/ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03/tflite_graph_export_tflite/ --add_postprocessing_op=true

Now I am converting this frozen graph to tflite with the following command:

(tf_gpu_clone) ridlr@ridlr107:~/TensorFlow/base_model/ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03/tflite_graph_export_tflite$ tflite_convert --output_file tflite_graph.tflite --graph_def_file tflite_graph.pb --input_arrays normalized_input_image_tensor --output_arrays TFLite_Detection_PostProcess --input_shapes 1,300,300,3 --inference_type QUANTIZED_UINT8 --std_dev_values 127 --mean_values 128 --default_ranges_min 0 --default_ranges_max 6
/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
2020-01-16 16:24:24.250854: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcuda.so.1
2020-01-16 16:24:24.259891: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-01-16 16:24:24.260244: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties: 
name: GeForce GTX 960M major: 5 minor: 0 memoryClockRate(GHz): 1.176
pciBusID: 0000:01:00.0
2020-01-16 16:24:24.260418: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.1
2020-01-16 16:24:24.261524: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10
2020-01-16 16:24:24.262887: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcufft.so.10
2020-01-16 16:24:24.263114: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcurand.so.10
2020-01-16 16:24:24.264526: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusolver.so.10
2020-01-16 16:24:24.265295: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusparse.so.10
2020-01-16 16:24:24.268320: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.7
2020-01-16 16:24:24.268471: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-01-16 16:24:24.268943: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-01-16 16:24:24.269179: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0
2020-01-16 16:24:24.269493: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2020-01-16 16:24:24.294484: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2808000000 Hz
2020-01-16 16:24:24.295098: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55e96396fac0 executing computations on platform Host. Devices:
2020-01-16 16:24:24.295120: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): <undefined>, <undefined>
2020-01-16 16:24:24.295298: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-01-16 16:24:24.295575: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties: 
name: GeForce GTX 960M major: 5 minor: 0 memoryClockRate(GHz): 1.176
pciBusID: 0000:01:00.0
2020-01-16 16:24:24.295631: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.1
2020-01-16 16:24:24.295681: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10
2020-01-16 16:24:24.295700: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcufft.so.10
2020-01-16 16:24:24.295730: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcurand.so.10
2020-01-16 16:24:24.295760: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusolver.so.10
2020-01-16 16:24:24.295777: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusparse.so.10
2020-01-16 16:24:24.295796: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.7
2020-01-16 16:24:24.295853: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-01-16 16:24:24.296084: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-01-16 16:24:24.296279: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0
2020-01-16 16:24:24.296313: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.1
2020-01-16 16:24:24.327472: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-01-16 16:24:24.327529: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187]      0 
2020-01-16 16:24:24.327538: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 0:   N 
2020-01-16 16:24:24.327722: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-01-16 16:24:24.328031: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-01-16 16:24:24.328235: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-01-16 16:24:24.328418: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 764 MB memory) -> physical GPU (device: 0, name: GeForce GTX 960M, pci bus id: 0000:01:00.0, compute capability: 5.0)
2020-01-16 16:24:24.329712: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55e966f209c0 executing computations on platform CUDA. Devices:
2020-01-16 16:24:24.329726: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): GeForce GTX 960M, Compute Capability 5.0
Traceback (most recent call last):
  File "/home/ridlr/anaconda3/envs/tf_gpu_clone/bin/tflite_convert", line 11, in <module>
    sys.exit(main())
  File "/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorflow/lite/python/tflite_convert.py", line 503, in main
    app.run(main=run_main, argv=sys.argv[:1])
  File "/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/home/ridlr/.local/lib/python3.6/site-packages/absl/app.py", line 299, in run
    _run_main(main, args)
  File "/home/ridlr/.local/lib/python3.6/site-packages/absl/app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorflow/lite/python/tflite_convert.py", line 499, in run_main
    _convert_tf1_model(tflite_flags)
  File "/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorflow/lite/python/tflite_convert.py", line 193, in _convert_tf1_model
    output_data = converter.convert()
  File "/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorflow/lite/python/lite.py", line 904, in convert
    **converter_kwargs)
  File "/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorflow/lite/python/convert.py", line 373, in toco_convert_graph_def
    input_data.SerializeToString())
  File "/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorflow/lite/python/convert.py", line 172, in toco_convert_protos
    "TOCO failed. See console for info.\n%s\n%s\n" % (stdout, stderr))
tensorflow.lite.python.convert.ConverterError: TOCO failed. See console for info.
/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
2020-01-16 16:24:25.878202: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: TFLite_Detection_PostProcess
2020-01-16 16:24:25.980693: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 1537 operators, 2263 arrays (0 quantized)
2020-01-16 16:24:26.040496: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 1537 operators, 2263 arrays (0 quantized)
2020-01-16 16:24:26.511304: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 1: 182 operators, 341 arrays (1 quantized)
2020-01-16 16:24:26.514996: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before pre-quantization graph transformations: 182 operators, 341 arrays (1 quantized)
2020-01-16 16:24:26.516329: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After pre-quantization graph transformations pass 1: 100 operators, 259 arrays (1 quantized)
2020-01-16 16:24:26.517532: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Group bidirectional sequence lstm/rnn: 100 operators, 259 arrays (1 quantized)
2020-01-16 16:24:26.518674: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before default min-max range propagation graph transformations: 100 operators, 259 arrays (1 quantized)
2020-01-16 16:24:26.519589: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After default min-max range propagation graph transformations pass 1: 100 operators, 259 arrays (1 quantized)
2020-01-16 16:24:26.520845: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before quantization graph transformations: 100 operators, 259 arrays (1 quantized)
2020-01-16 16:24:26.586776: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After quantization graph transformations pass 1: 106 operators, 265 arrays (236 quantized)
2020-01-16 16:24:26.592195: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After quantization graph transformations pass 2: 106 operators, 265 arrays (240 quantized)
2020-01-16 16:24:26.596841: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After quantization graph transformations pass 3: 101 operators, 260 arrays (242 quantized)
2020-01-16 16:24:26.601577: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After quantization graph transformations pass 4: 101 operators, 260 arrays (243 quantized)
2020-01-16 16:24:26.605346: W tensorflow/lite/toco/graph_transformations/quantize.cc:132] Constant array anchors lacks MinMax information. To make up for that, we will now compute the MinMax from actual array elements. That will result in quantization parameters that probably do not match whichever arithmetic was used during training, and thus will probably be a cause of poor inference accuracy.
2020-01-16 16:24:26.605481: W tensorflow/lite/toco/graph_transformations/quantize.cc:622] (Unsupported TensorFlow op: TFLite_Detection_PostProcess) is a quantized op but it has a model flag that sets the output arrays to float.
2020-01-16 16:24:26.605488: W tensorflow/lite/toco/graph_transformations/quantize.cc:622] (Unsupported TensorFlow op: TFLite_Detection_PostProcess) is a quantized op but it has a model flag that sets the output arrays to float.
2020-01-16 16:24:26.606381: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After quantization graph transformations pass 5: 99 operators, 258 arrays (244 quantized)
2020-01-16 16:24:26.606729: W tensorflow/lite/toco/graph_transformations/quantize.cc:622] (Unsupported TensorFlow op: TFLite_Detection_PostProcess) is a quantized op but it has a model flag that sets the output arrays to float.
2020-01-16 16:24:26.610982: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before shuffling of FC weights: 99 operators, 258 arrays (244 quantized)
2020-01-16 16:24:26.614144: I tensorflow/lite/toco/allocate_transient_arrays.cc:345] Total transient array allocated size: 3060032 bytes, theoretical optimal value: 2700032 bytes.
2020-01-16 16:24:26.614533: I tensorflow/lite/toco/toco_tooling.cc:433] Estimated count of arithmetic ops: 1.56954 billion (note that a multiply-add is counted as 2 ops).
2020-01-16 16:24:26.614956: E tensorflow/lite/toco/toco_tooling.cc:456] We are continually in the process of adding support to TensorFlow Lite for more ops. It would be helpful if you could inform us of how this conversion went by opening a github issue at https://github.com/tensorflow/tensorflow/issues/new?template=40-tflite-op-request.md
 and pasting the following:

Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If those are native TensorFlow operators, you might be able to use the extended runtime by passing --enable_select_tf_ops, or by setting target_ops=TFLITE_BUILTINS,SELECT_TF_OPS when calling tf.lite.TFLiteConverter(). Otherwise, if you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: ADD, CONCATENATION, CONV_2D, DEPTHWISE_CONV_2D, LOGISTIC, RESHAPE. Here is a list of operators for which you will need custom implementations: TFLite_Detection_PostProcess.
Traceback (most recent call last):
  File "/home/ridlr/anaconda3/envs/tf_gpu_clone/bin/toco_from_protos", line 11, in <module>
    sys.exit(main())
  File "/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorflow/lite/toco/python/toco_from_protos.py", line 59, in main
    app.run(main=execute, argv=[sys.argv[0]] + unparsed)
  File "/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/home/ridlr/.local/lib/python3.6/site-packages/absl/app.py", line 299, in run
    _run_main(main, args)
  File "/home/ridlr/.local/lib/python3.6/site-packages/absl/app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorflow/lite/toco/python/toco_from_protos.py", line 33, in execute
    output_str = tensorflow_wrap_toco.TocoConvert(model_str, toco_str, input_str)
Exception: We are continually in the process of adding support to TensorFlow Lite for more ops. It would be helpful if you could inform us of how this conversion went by opening a github issue at https://github.com/tensorflow/tensorflow/issues/new?template=40-tflite-op-request.md
 and pasting the following:

Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If those are native TensorFlow operators, you might be able to use the extended runtime by passing --enable_select_tf_ops, or by setting target_ops=TFLITE_BUILTINS,SELECT_TF_OPS when calling tf.lite.TFLiteConverter(). Otherwise, if you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: ADD, CONCATENATION, CONV_2D, DEPTHWISE_CONV_2D, LOGISTIC, RESHAPE. Here is a list of operators for which you will need custom implementations: TFLite_Detection_PostProcess.
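The failure above is the converter refusing `TFLite_Detection_PostProcess`, which is a custom op supplied by the TFLite runtime rather than a builtin. A minimal sketch of the remedy the error text itself suggests — appending `--allow_custom_ops` — with the other flags kept as in the original invocation (the input array name and 300x300 shape shown here are assumptions based on what `export_tflite_ssd_graph.py` typically produces, not taken from this log):

```shell
# Sketch: same conversion, but let the converter pass TFLite_Detection_PostProcess
# through as a custom op (the TFLite interpreter registers its kernel at runtime).
tflite_convert \
  --output_file tflite_graph.tflite \
  --graph_def_file tflite_graph.pb \
  --input_arrays normalized_input_image_tensor \
  --input_shapes 1,300,300,3 \
  --output_arrays TFLite_Detection_PostProcess \
  --inference_type QUANTIZED_UINT8 \
  --mean_values 128 --std_dev_values 127 \
  --allow_custom_ops
```

The alternative named in the same message, `--enable_select_tf_ops`, embeds full TensorFlow kernels and is usually not what you want for an Edge TPU target.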

When I change `--input_arrays` and `--input_shapes`, the command produces the following output and fails with the error `Check failed: is_rnn_state_array`.

(tf_gpu_clone) ridlr@ridlr107:~/TensorFlow/base_model/ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03/tflite_graph_export_tflite$ tflite_convert --output_file tflite_graph.tflite --graph_def_file tflite_graph.pb --input_arrays image_tensor --output_arrays TFLite_Detection_PostProcess --input_shapes 1,576,720,3 --inference_type QUANTIZED_UINT8 --std_dev_values 127 --mean_values 128 --default_ranges_min 0 --default_ranges_max 6
/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
2020-01-16 16:29:49.244407: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcuda.so.1
2020-01-16 16:29:49.256539: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-01-16 16:29:49.256795: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties: 
name: GeForce GTX 960M major: 5 minor: 0 memoryClockRate(GHz): 1.176
pciBusID: 0000:01:00.0
2020-01-16 16:29:49.256980: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.1
2020-01-16 16:29:49.258169: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10
2020-01-16 16:29:49.259466: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcufft.so.10
2020-01-16 16:29:49.259715: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcurand.so.10
2020-01-16 16:29:49.261109: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusolver.so.10
2020-01-16 16:29:49.261899: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusparse.so.10
2020-01-16 16:29:49.265052: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.7
2020-01-16 16:29:49.265205: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-01-16 16:29:49.265495: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-01-16 16:29:49.265709: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0
2020-01-16 16:29:49.266019: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2020-01-16 16:29:49.290476: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2808000000 Hz
2020-01-16 16:29:49.291137: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55766d722a80 executing computations on platform Host. Devices:
2020-01-16 16:29:49.291160: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): <undefined>, <undefined>
2020-01-16 16:29:49.291357: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-01-16 16:29:49.291576: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties: 
name: GeForce GTX 960M major: 5 minor: 0 memoryClockRate(GHz): 1.176
pciBusID: 0000:01:00.0
2020-01-16 16:29:49.291608: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.1
2020-01-16 16:29:49.291619: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10
2020-01-16 16:29:49.291660: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcufft.so.10
2020-01-16 16:29:49.291669: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcurand.so.10
2020-01-16 16:29:49.291680: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusolver.so.10
2020-01-16 16:29:49.291690: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusparse.so.10
2020-01-16 16:29:49.291700: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.7
2020-01-16 16:29:49.291745: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-01-16 16:29:49.291956: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-01-16 16:29:49.292135: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0
2020-01-16 16:29:49.292158: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.1
2020-01-16 16:29:49.327820: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-01-16 16:29:49.327863: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187]      0 
2020-01-16 16:29:49.327870: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 0:   N 
2020-01-16 16:29:49.328012: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-01-16 16:29:49.328261: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-01-16 16:29:49.328479: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-01-16 16:29:49.328679: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 758 MB memory) -> physical GPU (device: 0, name: GeForce GTX 960M, pci bus id: 0000:01:00.0, compute capability: 5.0)
2020-01-16 16:29:49.329922: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x557670cd3560 executing computations on platform CUDA. Devices:
2020-01-16 16:29:49.329941: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): GeForce GTX 960M, Compute Capability 5.0
Traceback (most recent call last):
  File "/home/ridlr/anaconda3/envs/tf_gpu_clone/bin/tflite_convert", line 11, in <module>
    sys.exit(main())
  File "/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorflow/lite/python/tflite_convert.py", line 503, in main
    app.run(main=run_main, argv=sys.argv[:1])
  File "/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/home/ridlr/.local/lib/python3.6/site-packages/absl/app.py", line 299, in run
    _run_main(main, args)
  File "/home/ridlr/.local/lib/python3.6/site-packages/absl/app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorflow/lite/python/tflite_convert.py", line 499, in run_main
    _convert_tf1_model(tflite_flags)
  File "/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorflow/lite/python/tflite_convert.py", line 193, in _convert_tf1_model
    output_data = converter.convert()
  File "/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorflow/lite/python/lite.py", line 904, in convert
    **converter_kwargs)
  File "/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorflow/lite/python/convert.py", line 373, in toco_convert_graph_def
    input_data.SerializeToString())
  File "/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorflow/lite/python/convert.py", line 172, in toco_convert_protos
    "TOCO failed. See console for info.\n%s\n%s\n" % (stdout, stderr))
tensorflow.lite.python.convert.ConverterError: TOCO failed. See console for info.
/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
2020-01-16 16:29:50.875328: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: TFLite_Detection_PostProcess
2020-01-16 16:29:50.977286: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 1537 operators, 2264 arrays (0 quantized)
2020-01-16 16:29:51.036947: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 1537 operators, 2264 arrays (0 quantized)
2020-01-16 16:29:51.503252: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 1: 181 operators, 341 arrays (0 quantized)
2020-01-16 16:29:51.506779: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before pre-quantization graph transformations: 181 operators, 341 arrays (0 quantized)
2020-01-16 16:29:51.508019: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After pre-quantization graph transformations pass 1: 99 operators, 259 arrays (0 quantized)
2020-01-16 16:29:51.509210: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Group bidirectional sequence lstm/rnn: 99 operators, 259 arrays (0 quantized)
2020-01-16 16:29:51.510286: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before default min-max range propagation graph transformations: 99 operators, 259 arrays (0 quantized)
2020-01-16 16:29:51.511177: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After default min-max range propagation graph transformations pass 1: 99 operators, 259 arrays (0 quantized)
2020-01-16 16:29:51.512363: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before quantization graph transformations: 99 operators, 259 arrays (0 quantized)
2020-01-16 16:29:51.512393: F tensorflow/lite/toco/graph_transformations/quantize.cc:611] Check failed: is_rnn_state_array 
Fatal Python error: Aborted

Current thread 0x00007f44ec596740 (most recent call first):
  File "/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorflow/lite/toco/python/toco_from_protos.py", line 33 in execute
  File "/home/ridlr/.local/lib/python3.6/site-packages/absl/app.py", line 250 in _run_main
  File "/home/ridlr/.local/lib/python3.6/site-packages/absl/app.py", line 299 in run
  File "/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 40 in run
  File "/home/ridlr/anaconda3/envs/tf_gpu_clone/lib/python3.6/site-packages/tensorflow/lite/toco/python/toco_from_protos.py", line 59 in main
  File "/home/ridlr/anaconda3/envs/tf_gpu_clone/bin/toco_from_protos", line 11 in <module>
Aborted (core dumped)


batulrangwala

comment created time in 3 months

issue comment tensorflow/tensorflow

ModuleNotFoundError: No module named 'tflite_runtime'

I was working in the environment only.

However, if I do not use sudo it gives me a Permission denied error. Hence I had to use sudo with the pip install command.

batulrangwala

comment created time in 3 months

issue closed tensorflow/tensorflow

ModuleNotFoundError: No module named 'tflite_runtime'

Hi,

I am working on an x86 laptop and have installed TensorFlow using https://www.tensorflow.org/lite/guide/python. Following is the list of tflite packages installed:


ankit@HP:~$ pip3 list | grep tflite
tflite                        1.15.0                       
tflite-runtime                1.14.0 

My aim is to get a Google Coral example running from this link: https://coral.ai/docs/accelerator/get-started/#3-run-a-model-using-the-tensorflow-lite-api

When I execute the command for inference I get the following error:

Traceback (most recent call last):
  File "classify_image.py", line 36, in <module>
    import tflite_runtime.interpreter as tflite
ModuleNotFoundError: No module named 'tflite_runtime'

Is there anything else that I need to install? I have already installed libedgetpu1-std.

closed time in 3 months

batulrangwala

issue comment tensorflow/tensorflow

ModuleNotFoundError: No module named 'tflite_runtime'

Hi, I have created a new conda environment with python=3.5 and have installed the tflite_runtime wheel as follows:

(py35) ankit@HP:~/Desktop/coral/tflite/python/examples/classification$ sudo pip3 install /home/ankit/tflite_runtime-1.14.0-cp35-cp35m-linux_x86_64.whl 
WARNING: The directory '/home/ankit/.cache/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
WARNING: The directory '/home/ankit/.cache/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Processing /home/ankit/tflite_runtime-1.14.0-cp35-cp35m-linux_x86_64.whl
Installing collected packages: tflite-runtime
Successfully installed tflite-runtime-1.14.0

However, I am still getting the same error:

(py35) ankit@HP:~/Desktop/coral/tflite/python/examples/classification$ python3 classify_image.py --model models/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite --labels models/inat_bird_labels.txt --input images/parrot.jpg
Traceback (most recent call last):
  File "classify_image.py", line 36, in <module>
    import tflite_runtime.interpreter as tflite
ImportError: No module named 'tflite_runtime'

After careful investigation I observed that the tflite_runtime module was not being installed into the environment's lib path but into the base /usr/local/lib.

I resolved it by copying the tflite_runtime folders into my conda env.

I now have a query: why does the pip3 install command not install the module into the conda environment that I am working in, instead installing it into the base environment?
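The usual cause of this mismatch is that `sudo pip3` resolves to the system interpreter's pip rather than the conda environment's one. A minimal sketch of a safer pattern, assuming you want the install to land in whichever environment is currently active (the helper names here are illustrative, not from any library):

```python
import subprocess
import sys

def active_env_prefix():
    """Return the install prefix of the Python interpreter running this
    script; inside an activated conda env this points at the env, not
    at the base installation."""
    return sys.prefix

def pip_install(package_or_wheel):
    """Install via the interpreter that runs this script, so the package
    goes into the active environment rather than wherever the first
    `pip3` on PATH (possibly the system one, under sudo) would put it."""
    subprocess.check_call(
        [sys.executable, "-m", "pip", "install", package_or_wheel]
    )

prefix = active_env_prefix()
```

Equivalently, from the shell, `python3 -m pip install <wheel>` (without sudo, inside the activated env) avoids the PATH ambiguity that `sudo pip3 install` introduces.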

batulrangwala

comment created time in 3 months

issue comment tensorflow/tensorflow

Determine input_arrays and output_arrays values for tflite_convert

I have used tensorflow/tools/graph_transforms/summarize_graph_main.cc from TensorFlow (GitHub: https://github.com/tensorflow/tensorflow) to determine the inputs/outputs of the model (.pb) file.

https://colab.sandbox.google.com/drive/1A9X34s4LkhtkcoFqB6PYCCaGCLAcJRbp#scrollTo=5d7KEDIr6a_t I have requested access for this now; I have to wait!

https://medium.com/@daj/how-to-inspect-a-pre-trained-tensorflow-model-5fd2ee79ced0 I am unable to view this link currently.

How can I use Netron to determine the input/output of the model (.pb) file? Currently I am using the following values (determined from the TensorFlow summarize_graph tool): --input_arrays image_tensor --output_arrays TFLite_Detection_PostProcess --input_shapes 1,576,720,3

But to convert to a quantized model (.tflite) I have to add the following parameters: --inference_type=QUANTIZED_UINT8 --std_dev_values --mean_values --default_ranges_min --default_ranges_max

How do I determine the values of the following specifiers? Will these be available from Netron? --std_dev_values --mean_values --default_ranges_min --default_ranges_max
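For what it's worth, --mean_values and --std_dev_values come from the model's input preprocessing, not from Netron: for a uint8 input, the TF1 converter treats a quantized byte q as real_value = (q - mean) / std. A small sketch of that arithmetic (`toco_input_params` is a hypothetical helper; verify the float range against your model's actual preprocessing):

```python
def toco_input_params(float_min, float_max):
    """Derive mean/std for a uint8 input from the float range the model
    expects, using the convention real_value = (q - mean) / std with
    q ranging over [0, 255]."""
    std = 255.0 / (float_max - float_min)   # q span 255 maps onto the float span
    mean = -float_min * std                 # q = mean must map to float_min... i.e. real 0 shift
    return mean, std

# A model whose preprocessing maps [0, 255] pixels to [-1, 1]:
mean, std = toco_input_params(-1.0, 1.0)
# mean == 127.5, std == 127.5 — commonly rounded to --mean_values 128 --std_dev_values 127
```

The --default_ranges_min/--default_ranges_max pair is different: it is a fallback float range for tensors with no recorded min/max, so picking it is guesswork unless the graph was trained with fake-quantization nodes.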

When I use dummy values it leads me to 2020-01-09 12:25:17.491189: F tensorflow/lite/toco/graph_transformations/quantize.cc:611] Check failed: is_rnn_state_array #35695

batulrangwala

comment created time in 3 months

issue comment tensorflow/tensorflow

tflite_convert failed

Kindly explain what you expect when you ask for standalone code.

tflite_graph.pb is generated using the following command: python object_detection/export_tflite_ssd_graph.py --pipeline_config_path /home/ridlr/TensorFlow/exported_model_12k_quantized/pipeline.config --trained_checkpoint_prefix /home/ridlr/TensorFlow/exported_model_12k_quantized/model.ckpt --output_directory /home/ridlr/TensorFlow/exported_model_12k_quantized/

batulrangwala

comment created time in 3 months

issue comment tensorflow/tensorflow

tflite_convert failed

Adding --allow_custom_ops to the command resolved the issue and I was able to generate a .tflite file.

However, this file is not a quantized tflite model and is not relevant for me, even though it is converted from a quantized graph def file. This issue is reported in https://github.com/tensorflow/tensorflow/issues/35690

batulrangwala

comment created time in 3 months

issue comment tensorflow/tensorflow

2020-01-09 12:25:17.491189: F tensorflow/lite/toco/graph_transformations/quantize.cc:611] Check failed: is_rnn_state_array

Additional Information

I have created tflite_graph.pb from export_tflite_ssd_graph.py, the quantized checkpoint, and the config files, using the following command:

python object_detection/export_tflite_ssd_graph.py --pipeline_config_path /home/ridlr/TensorFlow/exported_model_12k_quantized/pipeline.config --trained_checkpoint_prefix /home/ridlr/TensorFlow/exported_model_12k_quantized/model.ckpt --output_directory /home/ridlr/TensorFlow/exported_model_12k_quantized/

batulrangwala

comment created time in 3 months

issue opened tensorflow/tensorflow

2020-01-09 12:25:17.491189: F tensorflow/lite/toco/graph_transformations/quantize.cc:611] Check failed: is_rnn_state_array

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04
  • TensorFlow installed from (source or binary):binary
  • TensorFlow version:1.14
  • Python version:3.7.4
  • Installed using virtualenv? pip? conda?:conda
  • Bazel version (if compiling from source):
  • GCC/Compiler version (if compiling from source):7.4
  • CUDA/cuDNN version:10.2
  • GPU model and memory:GeForce GTX 960M/PCIe/SSE2, 16GB

Describe the problem 2020-01-09 12:25:17.491189: F tensorflow/lite/toco/graph_transformations/quantize.cc:611] Check failed: is_rnn_state_array

Provide the exact sequence of commands / steps that you executed before running into the problem: I am converting a quantized graph def (.pb) to a quantized tflite (.tflite) using dummy quantization and encounter the error as follows:

(tf_gpu_clone) ridlr@ridlr107:~/TensorFlow/exported_model_12k_quantized$ tflite_convert --output_file tflite_graph.tflite --graph_def_file tflite_graph.pb --input_arrays image_tensor --output_arrays TFLite_Detection_PostProcess --input_shapes 1,576,720,3 --allow_custom_ops --inference_type QUANTIZED_UINT8 --std_dev_values 127 --mean_values 128 --default_ranges_min 0 --default_ranges_max 6
2020-01-09 12:25:15.452049: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-01-09 12:25:15.474575: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2808000000 Hz
2020-01-09 12:25:15.475004: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x561bb6736540 executing computations on platform Host. Devices:
2020-01-09 12:25:15.475031: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): <undefined>, <undefined>
Traceback (most recent call last):
  File "/home/ridlr/anaconda3/bin/tflite_convert", line 10, in <module>
    sys.exit(main())
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/lite/python/tflite_convert.py", line 503, in main
    app.run(main=run_main, argv=sys.argv[:1])
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/absl/app.py", line 299, in run
    _run_main(main, args)
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/absl/app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/lite/python/tflite_convert.py", line 499, in run_main
    _convert_tf1_model(tflite_flags)
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/lite/python/tflite_convert.py", line 193, in _convert_tf1_model
    output_data = converter.convert()
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/lite/python/lite.py", line 904, in convert
    **converter_kwargs)
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/lite/python/convert.py", line 373, in toco_convert_graph_def
    input_data.SerializeToString())
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/lite/python/convert.py", line 172, in toco_convert_protos
    "TOCO failed. See console for info.\n%s\n%s\n" % (stdout, stderr))
tensorflow.lite.python.convert.ConverterError: TOCO failed. See console for info.
2020-01-09 12:25:16.861669: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: TFLite_Detection_PostProcess
2020-01-09 12:25:16.957738: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 1537 operators, 2264 arrays (0 quantized)
2020-01-09 12:25:17.017901: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 1537 operators, 2264 arrays (0 quantized)
2020-01-09 12:25:17.482076: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 1: 181 operators, 341 arrays (0 quantized)
2020-01-09 12:25:17.485583: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before pre-quantization graph transformations: 181 operators, 341 arrays (0 quantized)
2020-01-09 12:25:17.486877: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After pre-quantization graph transformations pass 1: 99 operators, 259 arrays (0 quantized)
2020-01-09 12:25:17.488034: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Group bidirectional sequence lstm/rnn: 99 operators, 259 arrays (0 quantized)
2020-01-09 12:25:17.489088: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before default min-max range propagation graph transformations: 99 operators, 259 arrays (0 quantized)
2020-01-09 12:25:17.489972: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After default min-max range propagation graph transformations pass 1: 99 operators, 259 arrays (0 quantized)
2020-01-09 12:25:17.491160: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before quantization graph transformations: 99 operators, 259 arrays (0 quantized)
2020-01-09 12:25:17.491189: F tensorflow/lite/toco/graph_transformations/quantize.cc:611] Check failed: is_rnn_state_array 
Fatal Python error: Aborted

Current thread 0x00007fb839eed740 (most recent call first):
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/lite/toco/python/toco_from_protos.py", line 33 in execute
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/absl/app.py", line 250 in _run_main
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/absl/app.py", line 299 in run
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/python/platform/app.py", line 40 in run
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/lite/toco/python/toco_from_protos.py", line 59 in main
  File "/home/ridlr/anaconda3/bin/toco_from_protos", line 10 in <module>
Aborted (core dumped)

However, if I do not include the following specifiers, a *.tflite is created: --inference_type QUANTIZED_UINT8 --std_dev_values 127 --mean_values 128 --default_ranges_min 0 --default_ranges_max 6

This *.tflite file, when converted to *_edgetpu.tflite (the model used to run inference on Google Coral), gives the following error:

(tf_gpu_clone) ridlr@ridlr107:~/TensorFlow/exported_model_12k_quantized$ edgetpu_compiler tflite_graph.tflite 
Edge TPU Compiler version 2.0.267685300
Invalid model: tflite_graph.tflite
Model not quantized

Hence it is necessary to include the specifiers for quantization.
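As background on the dummy-quantization flags used here: --default_ranges_min/--default_ranges_max supply a fallback float range for tensors that have no recorded min/max, and (0, 6) matches a ReLU6 activation. A minimal sketch of how such a range maps to asymmetric uint8 quantization parameters (hypothetical helper names, not the actual TOCO implementation):

```python
def quantization_params(range_min, range_max, num_bits=8):
    """Scale and zero point for an asymmetric uint8 tensor covering
    [range_min, range_max] — the fallback range that
    --default_ranges_min/--default_ranges_max would supply."""
    qmax = 2 ** num_bits - 1                 # 255 for uint8
    scale = (range_max - range_min) / qmax   # float units per quantized step
    zero_point = round(-range_min / scale)   # q value that represents real 0.0
    return scale, zero_point

def dequantize(q, scale, zero_point):
    """Map a quantized uint8 value back to its real-valued approximation."""
    return scale * (q - zero_point)

# ReLU6-style fallback range (0.0, 6.0): zero_point lands at 0,
# and q = 255 dequantizes back to 6.0.
scale, zp = quantization_params(0.0, 6.0)
```

This is why the dummy range is only a stopgap: any tensor whose true activation range differs from (0, 6) is dequantized incorrectly, so quantization-aware training (fake-quant nodes recording real ranges) is the reliable path to an Edge TPU model.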

created time in 3 months

issue opened tensorflow/tensorflow

tflite_convert failed

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04
  • TensorFlow installed from (source or binary): source
  • TensorFlow version (or github SHA if from source): 1.14

Provide the text output from tflite_convert

(tf_gpu_clone) ridlr@ridlr107:~/TensorFlow/exported_model_12k_quantized$ tflite_convert --output_file tflite_graph.tflite --graph_def_file tflite_graph.pb --input_arrays image_tensor --output_arrays TFLite_Detection_PostProcess --input_shapes 1,576,720,3
2020-01-09 12:10:44.239300: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-01-09 12:10:44.262441: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2808000000 Hz
2020-01-09 12:10:44.262923: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x558c8fa667e0 executing computations on platform Host. Devices:
2020-01-09 12:10:44.262939: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): <undefined>, <undefined>
Traceback (most recent call last):
  File "/home/ridlr/anaconda3/bin/tflite_convert", line 10, in <module>
    sys.exit(main())
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/lite/python/tflite_convert.py", line 503, in main
    app.run(main=run_main, argv=sys.argv[:1])
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/absl/app.py", line 299, in run
    _run_main(main, args)
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/absl/app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/lite/python/tflite_convert.py", line 499, in run_main
    _convert_tf1_model(tflite_flags)
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/lite/python/tflite_convert.py", line 193, in _convert_tf1_model
    output_data = converter.convert()
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/lite/python/lite.py", line 904, in convert
    **converter_kwargs)
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/lite/python/convert.py", line 373, in toco_convert_graph_def
    input_data.SerializeToString())
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/lite/python/convert.py", line 172, in toco_convert_protos
    "TOCO failed. See console for info.\n%s\n%s\n" % (stdout, stderr))
tensorflow.lite.python.convert.ConverterError: TOCO failed. See console for info.
2020-01-09 12:10:45.667362: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: TFLite_Detection_PostProcess
2020-01-09 12:10:45.763812: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 1537 operators, 2264 arrays (0 quantized)
2020-01-09 12:10:45.824420: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 1537 operators, 2264 arrays (0 quantized)
2020-01-09 12:10:46.292215: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 1: 181 operators, 341 arrays (0 quantized)
2020-01-09 12:10:46.295908: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Group bidirectional sequence lstm/rnn: 181 operators, 341 arrays (0 quantized)
2020-01-09 12:10:46.298914: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before dequantization graph transformations: 181 operators, 341 arrays (0 quantized)
2020-01-09 12:10:46.304648: I tensorflow/lite/toco/allocate_transient_arrays.cc:345] Total transient array allocated size: 20160000 bytes, theoretical optimal value: 17280000 bytes.
2020-01-09 12:10:46.305189: I tensorflow/lite/toco/toco_tooling.cc:433] Estimated count of arithmetic ops: 1.29335 billion (note that a multiply-add is counted as 2 ops).
2020-01-09 12:10:46.305598: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/Conv/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305607: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305629: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305633: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_1/expand/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305636: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_1/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305641: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_1/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305645: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_2/expand/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305650: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_2/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305654: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_2/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305658: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_2/post_activation_bypass_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305662: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_3/expand/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305665: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_3/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305669: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_3/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305674: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_4/expand/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305678: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_4/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305681: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_4/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305684: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_4/post_activation_bypass_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305688: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_5/expand/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305692: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_5/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305696: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_5/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305700: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_5/post_activation_bypass_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305703: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_6/expand/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305706: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_6/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305709: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_6/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305713: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_7/expand/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305717: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_7/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305721: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_7/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305725: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_7/post_activation_bypass_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305729: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_8/expand/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305733: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_8/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305737: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_8/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305741: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_8/post_activation_bypass_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305745: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_9/expand/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305749: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_9/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305753: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_9/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305758: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_9/post_activation_bypass_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305762: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_10/expand/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305766: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_10/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305770: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_10/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305774: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_11/expand/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305778: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_11/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305782: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_11/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305786: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_11/post_activation_bypass_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305790: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_12/expand/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305793: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_12/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305796: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_12/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305800: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_12/post_activation_bypass_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305803: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_13/expand/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305807: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_13/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305811: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_13/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305815: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_14/expand/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305819: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_14/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305823: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_14/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305827: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_14/post_activation_bypass_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305831: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_15/expand/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305835: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_15/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305839: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_15/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305843: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_15/post_activation_bypass_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305847: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_16/expand/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305851: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_16/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305855: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_16/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305859: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/Conv_1/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305863: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/layer_19_1_Conv2d_2_1x1_256/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305867: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/layer_19_2_Conv2d_2_3x3_s2_512/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305871: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/layer_19_1_Conv2d_3_1x1_128/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305875: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/layer_19_2_Conv2d_3_3x3_s2_256/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305879: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/layer_19_1_Conv2d_4_1x1_128/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305883: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/layer_19_2_Conv2d_4_3x3_s2_256/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305887: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/layer_19_1_Conv2d_5_1x1_64/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305891: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/layer_19_2_Conv2d_5_3x3_s2_128/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305896: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output BoxPredictor_0/BoxEncodingPredictor/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305900: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output BoxPredictor_0/ClassPredictor/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305904: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output BoxPredictor_1/BoxEncodingPredictor/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305908: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output BoxPredictor_1/ClassPredictor/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305912: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output BoxPredictor_2/BoxEncodingPredictor/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305916: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output BoxPredictor_2/ClassPredictor/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305920: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output BoxPredictor_3/BoxEncodingPredictor/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305924: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output BoxPredictor_3/ClassPredictor/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305928: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output BoxPredictor_4/BoxEncodingPredictor/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305932: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output BoxPredictor_4/ClassPredictor/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305936: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output BoxPredictor_5/BoxEncodingPredictor/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305940: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output BoxPredictor_5/ClassPredictor/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305998: E tensorflow/lite/toco/toco_tooling.cc:456] We are continually in the process of adding support to TensorFlow Lite for more ops. It would be helpful if you could inform us of how this conversion went by opening a github issue at https://github.com/tensorflow/tensorflow/issues/new?template=40-tflite-op-request.md
 and pasting the following:

Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If those are native TensorFlow operators, you might be able to use the extended runtime by passing --enable_select_tf_ops, or by setting target_ops=TFLITE_BUILTINS,SELECT_TF_OPS when calling tf.lite.TFLiteConverter(). Otherwise, if you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: ADD, CONCATENATION, CONV_2D, DEPTHWISE_CONV_2D, FAKE_QUANT, LOGISTIC, RESHAPE. Here is a list of operators for which you will need custom implementations: TFLite_Detection_PostProcess.
Traceback (most recent call last):
  File "/home/ridlr/anaconda3/bin/toco_from_protos", line 10, in <module>
    sys.exit(main())
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/lite/toco/python/toco_from_protos.py", line 59, in main
    app.run(main=execute, argv=[sys.argv[0]] + unparsed)
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/absl/app.py", line 299, in run
    _run_main(main, args)
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/absl/app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/lite/toco/python/toco_from_protos.py", line 33, in execute
    output_str = tensorflow_wrap_toco.TocoConvert(model_str, toco_str, input_str)
Exception: We are continually in the process of adding support to TensorFlow Lite for more ops. It would be helpful if you could inform us of how this conversion went by opening a github issue at https://github.com/tensorflow/tensorflow/issues/new?template=40-tflite-op-request.md
 and pasting the following:

Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If those are native TensorFlow operators, you might be able to use the extended runtime by passing --enable_select_tf_ops, or by setting target_ops=TFLITE_BUILTINS,SELECT_TF_OPS when calling tf.lite.TFLiteConverter(). Otherwise, if you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: ADD, CONCATENATION, CONV_2D, DEPTHWISE_CONV_2D, FAKE_QUANT, LOGISTIC, RESHAPE. Here is a list of operators for which you will need custom implementations: TFLite_Detection_PostProcess.

Also, please include a link to a GraphDef or the model if possible.

created time in 3 months

issue opened: tensorflow/tensorflow

Determine input_arrays and output_arrays values for tflite_convert

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04
  • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:
  • TensorFlow installed from (source or binary): binary
  • TensorFlow version: 1.14
  • Python version: 3.7.4
  • Installed using virtualenv? pip? conda?: conda
  • Bazel version (if compiling from source): N/A
  • GCC/Compiler version (if compiling from source): 7.4
  • CUDA/cuDNN version: 10.2
  • GPU model and memory: GeForce GTX 960M/PCIe/SSE2, 16GB

Describe the problem
tflite_convert: I need to know the values for --input_arrays and --output_arrays.

Provide the exact sequence of commands / steps that you executed before running into the problem
I have created a tflite_graph.pb from export_tflite_ssd_graph.py, a quantized checkpoint, and config files successfully.

My task is to generate a .tflite file from the generated graph_def_file using the tflite_convert command, and then use this to generate an edgetpu.tflite file to run on the Google Coral. Following is the log of the command:

(tf_gpu_clone) ridlr@ridlr107:~/TensorFlow/exported_model_12k_quantized$ tflite_convert --output_file tflite_graph.tflite --graph_def_file tflite_graph.pb --input_arrays image_tensor --output_arrays detection_boxes --input_shapes 1,576,720,3
2020-01-09 11:05:56.913487: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-01-09 11:05:56.934582: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2808000000 Hz
2020-01-09 11:05:56.935469: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5569f40357d0 executing computations on platform Host. Devices:
2020-01-09 11:05:56.935514: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): <undefined>, <undefined>
Traceback (most recent call last):
  File "/home/ridlr/anaconda3/bin/tflite_convert", line 10, in <module>
    sys.exit(main())
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/lite/python/tflite_convert.py", line 503, in main
    app.run(main=run_main, argv=sys.argv[:1])
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/absl/app.py", line 299, in run
    _run_main(main, args)
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/absl/app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/lite/python/tflite_convert.py", line 499, in run_main
    _convert_tf1_model(tflite_flags)
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/lite/python/tflite_convert.py", line 193, in _convert_tf1_model
    output_data = converter.convert()
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/lite/python/lite.py", line 904, in convert
    **converter_kwargs)
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/lite/python/convert.py", line 373, in toco_convert_graph_def
    input_data.SerializeToString())
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/lite/python/convert.py", line 172, in toco_convert_protos
    "TOCO failed. See console for info.\n%s\n%s\n" % (stdout, stderr))
tensorflow.lite.python.convert.ConverterError: TOCO failed. See console for info.
2020-01-09 11:05:58.375534: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: TFLite_Detection_PostProcess
2020-01-09 11:05:58.456167: F tensorflow/lite/toco/tooling_util.cc:918] Check failed: GetOpWithOutput(model, output_array) Specified output array "detection_boxes" is not produced by any op in this graph. Is it a typo? This should not happen. If you trigger this error please send a bug report (with code to reporduce this error), to the TensorFlow Lite team.
Fatal Python error: Aborted

Current thread 0x00007f347d2e6740 (most recent call first):
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/lite/toco/python/toco_from_protos.py", line 33 in execute
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/absl/app.py", line 250 in _run_main
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/absl/app.py", line 299 in run
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/python/platform/app.py", line 40 in run
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/lite/toco/python/toco_from_protos.py", line 59 in main
  File "/home/ridlr/anaconda3/bin/toco_from_protos", line 10 in <module>
Aborted (core dumped)

How do I determine the correct value of --output_arrays?
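One common way to find candidates for --output_arrays is to list the nodes of the frozen graph whose outputs are never consumed by any other node. The helper below is a sketch of that idea, not part of TensorFlow's API; it works on plain (name, inputs) pairs so it can be checked without loading a model.

```python
# Hypothetical helper (not part of TensorFlow): list nodes of a frozen
# TF1 GraphDef whose outputs are never consumed by another node; these
# are the usual candidates for --output_arrays.
def candidate_output_arrays(nodes):
    """nodes: iterable of (node_name, [input_names]) pairs from graph_def.node."""
    # Inputs may look like "conv:1" (tensor index) or "^ctrl" (control edge);
    # strip both to recover the producing node's name.
    consumed = {inp.split(":")[0].lstrip("^")
                for _, inputs in nodes
                for inp in inputs}
    return [name for name, _ in nodes if name not in consumed]

# With TensorFlow 1.x installed you would feed it like this (untested sketch):
#   graph_def = tf.compat.v1.GraphDef()
#   with tf.io.gfile.GFile("tflite_graph.pb", "rb") as f:
#       graph_def.ParseFromString(f.read())
#   print(candidate_output_arrays([(n.name, list(n.input)) for n in graph_def.node]))

print(candidate_output_arrays([
    ("image_tensor", []),
    ("conv", ["image_tensor"]),
    ("TFLite_Detection_PostProcess", ["conv:0", "^image_tensor"]),
]))  # -> ['TFLite_Detection_PostProcess']
```

For graphs produced by export_tflite_ssd_graph.py the unconsumed node is typically the custom TFLite_Detection_PostProcess op, which matches the converter log above.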

Any other info / logs
If I use the specifier --inference_type=QUANTIZED_UINT8, how do I determine the values of the following specifiers? --std_dev_values, --mean_values, --default_ranges_min, --default_ranges_max
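A sketch of how --mean_values and --std_dev_values can be derived, under the assumption (my reading of the TOCO convention, not something stated in the logs above) that a quantized uint8 input is dequantized as real_value = (quantized_value - mean_value) / std_dev_value. Given the real-valued range your preprocessing produces, both follow directly:

```python
# Assumption: TOCO maps a uint8 input back to reals as
#   real = (quant - mean_value) / std_dev_value
# so mean/std can be solved from the real range the model expects.
def mean_std_for_range(real_min, real_max, qmin=0, qmax=255):
    std_dev = (qmax - qmin) / (real_max - real_min)  # quant steps per real unit
    mean = qmin - real_min * std_dev                 # quant value mapping to real 0.0
    return mean, std_dev

print(mean_std_for_range(-1.0, 1.0))  # [-1, 1] preprocessing -> (127.5, 127.5)
print(mean_std_for_range(0.0, 1.0))   # [0, 1] preprocessing  -> (0.0, 255.0)
```

--default_ranges_min/--default_ranges_max serve a different purpose: they supply fallback (min, max) activation ranges for tensors that lack recorded quantization statistics.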

created time in 3 months

issue comment: tensorflow/tensorflow

ModuleNotFoundError: No module named 'tflite_runtime'

What exactly are you using to run the inference command? Are you sure it is the same version of Python? I.e., if you install with pip3 but then run python, you are probably using Python 2, which doesn't have it installed.

I am using the python3 command as follows (the same as in the example at the website link given before): python3 classify_image.py --model models/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite --labels models/inat_bird_labels.txt --input images/parrot.jpg

I am not working with Python 2; I am sure of that. Actually, the example was working initially (I had never installed anything related to Python or TensorFlow myself; the laptop was handed over with all of this done). However, while performing a task to convert a frozen model (.pb) to a TFLite model (.tflite), I ended up uninstalling and reinstalling different versions of TensorFlow and messed up the system. As I am a newbie to Python and TensorFlow, I was not aware of conda environments and did everything on the base environment. I have now set up a conda environment with Python 3.6 and TensorFlow as mentioned, but the example fails at importing tflite_runtime.

batulrangwala

comment created time in 3 months

issue comment: tensorflow/tensorflow

ModuleNotFoundError: No module named 'tflite_runtime'

Maybe you can use import tflite-runtime.interpreter as tflite instead of import tflite_runtime.interpreter as tflite .

This results in a syntax error; a hyphen is not valid in a Python module name.

batulrangwala

comment created time in 3 months

issue opened: tensorflow/tensorflow

ModuleNotFoundError: No module named 'tflite_runtime'

Hi,

I am working on an x86 laptop and have installed TensorFlow using https://www.tensorflow.org/lite/guide/python. Following is the list of tflite packages installed:


ankit@HP:~$ pip3 list | grep tflite
tflite                        1.15.0                       
tflite-runtime                1.14.0 

My aim is to get a Google Coral example running from this link: https://coral.ai/docs/accelerator/get-started/#3-run-a-model-using-the-tensorflow-lite-api

When I execute the command for inference, I get the following error:

Traceback (most recent call last):
  File "classify_image.py", line 36, in <module>
    import tflite_runtime.interpreter as tflite
ModuleNotFoundError: No module named 'tflite_runtime'

Is there anything else that I need to install? I have already installed libedgetpu1-std.
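One thing worth checking here: the pip package is named tflite-runtime (hyphen) while the import name is tflite_runtime (underscore), and a ModuleNotFoundError often just means the package landed in a different interpreter's site-packages. A small diagnostic sketch (my own, not from the Coral docs) that confirms which interpreter is running and where a module would load from:

```python
# Diagnostic sketch: report the running interpreter and where a top-level
# module would be imported from, without running the full example script.
import importlib.util
import sys

def module_location(name):
    """Return the file a top-level module would be loaded from, or None if absent."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None

print("interpreter:", sys.executable)
print("tflite_runtime:", module_location("tflite_runtime"))
```

If this prints None for tflite_runtime under the same python3 you use for classify_image.py, the wheel was most likely installed into a different Python than the one executing the script.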

created time in 3 months

issue comment: tensorflow/tensorflow

tflite_runtime supports for arm (pi3 model b+)

Hi, I am using a Google Coral Mini PCIe on a Banana Pi 64 and have a similar issue.
My Linux distribution: Linux bpi-iot-ros-ai 5.4.0-bpi-r64 #1 SMP PREEMPT Mon Dec 16 16:00:08 IST 2019 aarch64 aarch64 aarch64 GNU/Linux
TensorFlow installed: tflite_runtime-1.14.0-cp35-cp35m-linux_aarch64.whl
Python3 version: 3.5.2

I have loaded libedgetpu as per the instructions here: https://coral.ai/news/updates-04-2019/. I am running the demo model from https://coral.ai/docs/m2/get-started/#4-run-a-model-using-the-tensorflow-lite-api, which gives me the following error:

Traceback (most recent call last):
  File "classify_image.py", line 122, in <module>
    main()
  File "classify_image.py", line 98, in main
    interpreter = make_interpreter(args.model)
  File "classify_image.py", line 71, in make_interpreter
    {'device': device[0]} if device else {})
  File "/usr/local/lib/python3.5/dist-packages/tflite_runtime/interpreter.py", line 165, in load_delegate
    delegate = Delegate(library, options)
  File "/usr/local/lib/python3.5/dist-packages/tflite_runtime/interpreter.py", line 89, in __init__
    self._library = ctypes.pydll.LoadLibrary(library)
  File "/usr/lib/python3.5/ctypes/__init__.py", line 425, in LoadLibrary
    return self._dlltype(name)
  File "/usr/lib/python3.5/ctypes/__init__.py", line 347, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: /usr/lib/aarch64-linux-gnu/libc++abi.so.1: undefined symbol: _Unwind_GetRegionStart
Exception ignored in: <bound method Delegate.__del__ of <tflite_runtime.interpreter.Delegate object at 0x7f99af6550>>
Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/tflite_runtime/interpreter.py", line 124, in __del__
    if self._library is not None:
AttributeError: 'Delegate' object has no attribute '_library'

I have updated g++ and gcc to version 6 and installed libunwind8, but the same error is still present.
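Since the failure happens inside ctypes.pydll.LoadLibrary, the library load can be reproduced in isolation, without classify_image.py, so the undefined-symbol message (_Unwind_GetRegionStart from libc++abi) surfaces on its own. A minimal sketch, assuming the library name from the traceback above:

```python
# Minimal reproduction sketch: dlopen the delegate library directly with
# ctypes; any missing-symbol error is raised as OSError at load time.
import ctypes

def can_load(library_name):
    """Return True if the shared library loads, False on any OSError."""
    try:
        ctypes.CDLL(library_name)
        return True
    except OSError as e:
        print("load failed:", e)
        return False

print("libedgetpu.so.1 loads:", can_load("libedgetpu.so.1"))
```

Running this under the same interpreter makes it easy to test toolchain changes (e.g. a different libc++abi or libunwind) without the rest of the example.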

Namburger

comment created time in 3 months

issue comment: tensorflow/tensorflow

tflite.load_delegate() failed when running Demo API on Google Coral mini PCIe

I have now loaded libedgetpu as per the instructions here: https://coral.ai/news/updates-04-2019/. Now I do not get the HIB error, but the following error:


Traceback (most recent call last):
  File "classify_image.py", line 122, in <module>
    main()
  File "classify_image.py", line 98, in main
    interpreter = make_interpreter(args.model)
  File "classify_image.py", line 71, in make_interpreter
    {'device': device[0]} if device else {})
  File "/usr/local/lib/python3.5/dist-packages/tflite_runtime/interpreter.py", line 165, in load_delegate
    delegate = Delegate(library, options)
  File "/usr/local/lib/python3.5/dist-packages/tflite_runtime/interpreter.py", line 89, in __init__
    self._library = ctypes.pydll.LoadLibrary(library)
  File "/usr/lib/python3.5/ctypes/__init__.py", line 425, in LoadLibrary
    return self._dlltype(name)
  File "/usr/lib/python3.5/ctypes/__init__.py", line 347, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: /usr/lib/aarch64-linux-gnu/libc++abi.so.1: undefined symbol: _Unwind_GetRegionStart
Exception ignored in: <bound method Delegate.__del__ of <tflite_runtime.interpreter.Delegate object at 0x7f99af6550>>
Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/tflite_runtime/interpreter.py", line 124, in __del__
    if self._library is not None:
AttributeError: 'Delegate' object has no attribute '_library'

I have updated g++ and gcc to version 6 and installed libunwind8, but the same error is still present.

batulrangwala

comment created time in 3 months

issue comment: tensorflow/tensorflow

tflite.load_delegate() failed when running Demo API on Google Coral mini PCIe

Even after following the suggestions from Coral support and the troubleshooting guide (https://coral.ai/docs/m2/get-started/#troubleshooting), the error still persists.

batulrangwala

comment created time in 3 months

issue opened: tensorflow/tensorflow

tflite.load_delegate() failed when running Demo API on Google Coral mini PCIe

I am also facing a similar issue. The demo API gives an error at tflite.load_delegate.

pi@bpi-iot-ros-ai:~/coral/tflite/python/examples/classification$ python3 classify_image.py --model models/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite --labels models/inat_bird_labels.txt --input images/parrot.jpg
E :248] HIB Error. hib_error_status = 0000000000000001, hib_first_error_status = 0000000000000001
E :248] HIB Error. hib_error_status = 0000000000000001, hib_first_error_status = 0000000000000001
INFO: Initialized TensorFlow Lite runtime.
----INFERENCE TIME----
Note: The first inference on Edge TPU is slow because it includes loading the model into Edge TPU memory.



pi@bpi-iot-ros-ai:~$ uname -a
Linux bpi-iot-ros-ai 5.4.0-bpi-r64 #1 SMP PREEMPT Mon Dec 16 16:00:08 IST 2019 aarch64 aarch64 aarch64 GNU/Linux
pi@bpi-iot-ros-ai:~$ lscpu
Architecture:          aarch64
Byte Order:            Little Endian
CPU(s):                2
On-line CPU(s) list:   0,1
Thread(s) per core:    1
Core(s) per socket:    2
Socket(s):             1
CPU max MHz:           1350.0000
CPU min MHz:           30.0000
pi@bpi-iot-ros-ai:~$ ls -l /usr/lib/aarch64-linux-gnu/libedge*
lrwxrwxrwx 1 root root     17 Sep 17 04:27 /usr/lib/aarch64-linux-gnu/libedgetpu.so.1 -> libedgetpu.so.1.0
-rwxrwxrwx 1 root root 792376 Sep 17 04:27 /usr/lib/aarch64-linux-gnu/libedgetpu.so.1.0
pi@bpi-iot-ros-ai:~$ lspci
00:00.0 PCI bridge: MEDIATEK Corp. Device 3258
01:00.0 System peripheral: Device 1ac1:089a
pi@bpi-iot-ros-ai:~$ ls /dev/apex_0 
/dev/apex_0
pi@bpi-iot-ros-ai:~$ sudo sh -c "echo 'SUBSYSTEM==\"apex\", MODE=\"0660\", GROUP=\"apex\"' >> /etc/udev/rules.d/65-apex.rules"
pi@bpi-iot-ros-ai:~$ sudo groupadd apex
groupadd: group 'apex' already exists
pi@bpi-iot-ros-ai:~$ sudo adduser $USER apex
The user `pi' is already a member of `apex'.
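To narrow down whether a failure like this matches the BAR assignment problem described earlier in the thread, one can scan dmesg and runtime output for the telltale lines. A minimal sketch (the helper name is illustrative; the sample lines are taken from logs like those posted here):

```python
import re

# Sample dmesg/runtime lines like the ones posted in this thread.
SAMPLE_LOG = """\
pci 0000:00:00.0: BAR 0: no space for [mem size 0x200000000 64bit pref]
pci 0000:00:00.0: BAR 0: failed to assign [mem size 0x200000000 64bit pref]
pci 0000:00:00.0: BAR 8: assigned [mem 0x20000000-0x201fffff]
E :248] HIB Error. hib_error_status = 0000000000000001
"""

def pcie_edgetpu_problems(log_text):
    """Collect lines that usually accompany a failing PCIe Edge TPU:
    unassigned BARs and gasket HIB errors."""
    patterns = (r"BAR \d+: failed to assign", r"HIB Error")
    return [line for line in log_text.splitlines()
            if any(re.search(p, line) for p in patterns)]

for line in pcie_edgetpu_problems(SAMPLE_LOG):
    print(line)
```

If the "BAR 0: failed to assign" line shows up, the host bridge never mapped the device's memory window, and the HIB errors at load_delegate time are a downstream symptom rather than a runtime bug.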

created time in 3 months

issue comment on tensorflow/tensorflow

Failed to load delegate from libedgetpu.so.1 on PCIe EdgeTPU [SOLVED]

I am facing the same issue. The demo API fails at tflite.load_delegate().


comment created time in 3 months
