tf-coreml/tf-coreml 1181

TensorFlow to CoreML Converter

Epiphane/CPE430_Asg5 0

Making GUCI3 in Clojure!

gargn/csc-tutoring-website 0

Website for Cal Poly Computer Science Tutoring Center.

gargn/donkey 0

self driving car

gargn/tensorflow 0

Computation using data flow graphs for scalable machine learning

gargn/thesis 0

Cal Poly CSC Masters thesis 2016-2017.

issue comment tensorflow/tensorflow

TocoConvertor: converting keras models to tflite doesn't support custom objects

In 2.X you need to load your Keras model yourself with custom_objects passed in, and then call from_keras_model. Please reference this example for how to convert a model using from_keras_model, and this document for details on compatibility between 1.X and 2.X.
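
A minimal sketch of that flow (model.h5 and MyCustomLayer are placeholders for your own file and custom class):

import tensorflow as tf

class MyCustomLayer(tf.keras.layers.Layer):
    # Stand-in for your own custom object; replace with your real class.
    def call(self, inputs):
        return inputs

# Load the Keras model yourself with custom_objects passed in...
model = tf.keras.models.load_model(
    'model.h5', custom_objects={'MyCustomLayer': MyCustomLayer})

# ...and then convert it with from_keras_model.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()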

ophiry

comment created time in 10 days

issue comment tensorflow/tensorflow

TFLITE MODEL INPUT DYNAMIC INPUT SIZE

Support for unknown dimensions in TensorFlow Lite was added last week (5591208).

Can you try converting your model again with tf-nightly (pip install tf-nightly)? Convert the model with experimental_new_converter = True.

When you load the model, it should have an additional field shape_signature that contains the shape with any unknown dimensions marked with -1. shape will have those dimensions marked with 1.

You can then call ResizeInputTensor with the desired shape when running the interpreter. The generated model will only work on the latest TensorFlow version (i.e. the interpreter on the tf-nightly version you are running).

Currently, models using quantization are not supported.
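
Here is a rough sketch of the whole flow, assuming a SavedModel at a placeholder path and a placeholder input shape of [1, 64, 64, 3]:

import tensorflow as tf

saved_model_dir = '/tmp/my_saved_model'  # placeholder path

# Convert with the new converter (requires tf-nightly).
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.experimental_new_converter = True
tflite_model = converter.convert()

# Unknown dimensions show up as -1 in shape_signature.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
print(interpreter.get_input_details()[0]['shape_signature'])

# Resize the input to a concrete shape before allocating tensors.
interpreter.resize_tensor_input(0, [1, 64, 64, 3])
interpreter.allocate_tensors()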

tulasiram58827

comment created time in 18 days

issue comment tensorflow/tensorflow

TFLite not support Dynamic input size

Unfortunately, dynamic shapes are currently not supported with quantization.

We are hoping to enable dynamic shapes with weight-only quantization support in the near future (which seems to be sufficient for your use case). However, full integer post-training quantization support won't be added in the immediate future.
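
For reference, weight-only quantization today is a one-flag change on the converter; a minimal sketch, assuming a SavedModel at a placeholder path (this is the part that does not yet combine with dynamic shapes):

import tensorflow as tf

saved_model_dir = '/tmp/my_saved_model'  # placeholder path
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # weight-only quantization
tflite_model = converter.convert()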

WenguoLi

comment created time in 18 days

issue comment tensorflow/tensorflow

TFLite not support Dynamic input size

@harsh306 Yes. As long as the 1.X TensorFlow SavedModel takes in dynamic input shapes, then it will work with TFLite after reconverting the model. I recommend using TFLiteConverter.from_saved_model in the tf-nightly (which supports 1.X SavedModels) to try converting the model and testing this functionality.

WenguoLi

comment created time in 19 days

issue closed tensorflow/tensorflow

ValueError: This converter can only convert a single ConcreteFunction. Converting multiple functions is under development.

When I convert a pre-trained MobileNet model to tflite, there is an error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/project/pyenv/lib/python3.5/site-packages/tensorflow_core/lite/python/lite.py", line 400, in convert
    raise ValueError("This converter can only convert a single "
ValueError: This converter can only convert a single ConcreteFunction. Converting multiple functions is under development.

My code is:

import tensorflow as tf
import time
from tensorflow.keras.applications.mobilenet import MobileNet

model = MobileNet(weights='imagenet')
saved_model_dir = "./saved_models/{}".format(int(time.time()))
tf.keras.experimental.export_saved_model(model, saved_model_dir)
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
tflite_model = converter.convert()

My OS is Ubuntu 16.04.6 LTS.

closed time in 19 days

GodsDusk

issue comment tensorflow/tensorflow

TFLite not support Dynamic input size

We added support for unknown dimensions in TensorFlow Lite today (55912083e2f16087c2f29394acf8a6a4811a2ce0).

Can you try converting your model again with tonight's (1/31) tf-nightly once it's released (pip install tf-nightly)? Convert the model with experimental_new_converter = True.

When you load the model, it should have an additional field shape_signature that contains the shape with any unknown dimensions marked with -1. shape will have those dimensions marked with 1.

You can then call ResizeInputTensor with the desired shape when running the interpreter. The generated model will only work on the latest TensorFlow version (i.e. the interpreter on the tf-nightly version you are running).

If it does not work, can you provide a detailed error and repro instructions?

WenguoLi

comment created time in 23 days

issue comment tensorflow/tensorflow

Does tFlite support input shape=[1,32,None,3]

We added support for unknown dimensions in TensorFlow Lite today (55912083e2f16087c2f29394acf8a6a4811a2ce0).

Can you try converting your model again with tonight's (1/31) tf-nightly once it's released (pip install tf-nightly)? Convert the model with experimental_new_converter = True.

When you load the model, it should have an additional field shape_signature that contains the shape with any unknown dimensions marked with -1. shape will have those dimensions marked with 1.

You can then call ResizeInputTensor with the desired shape when running the interpreter. The generated model will only work on the latest TensorFlow version (i.e. the interpreter on the tf-nightly version you are running).

If it does not work, can you provide a detailed error and repro instructions?

gds101054108

comment created time in 23 days

issue comment tensorflow/tensorflow

ValueError: This converter can only convert a single ConcreteFunction. Converting multiple functions is under development.

First off, I recommend using experimental_new_converter, given that the older converter seems to be producing additional errors. With the new converter, it becomes clear that your example uses an unsupported operation: tf.ParseExampleV2.

Given that tf.ParseExampleV2 is not supported as a built-in op in TFLite, your options are to:

  1. Remove tf.ParseExampleV2 from your graph.
  2. Implement tf.ParseExampleV2 as a custom operation.
  3. [Recommended] Use TF Select ops. If you download tomorrow's tf-nightly (released on 01-28 PST), I have added ParseExampleV2 to the TF Select ops.

Assuming you choose option #3 (which will be the easiest but will lead to a binary size increase), you should use the following flags during conversion:

converter.experimental_new_converter = True
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                       tf.lite.OpsSet.SELECT_TF_OPS]

If the suggestion does not fix your issue, can you provide a fully reproducible example with the imports and the conversion code? The example you provided had some errors, so I was not able to run it in order to debug the code.

GodsDusk

comment created time in a month

pull request comment tensorflow/tensorflow

Repair broken links

I don't think we should point to the old docs. Perhaps just point to the website: https://www.tensorflow.org/lite/convert

Or these pages in GitHub: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/g3doc/convert

I think this is the README for the old converter. The new converter is based on this: https://github.com/tensorflow/tensorflow/blob/3cfbf14e80a4b5feb9e1a786e02ff705b42f83ef/tensorflow/lite/g3doc/convert/index.md

z3dm4n

comment created time in a month

Pull request review comment tensorflow/tensorflow

Repair broken links

 the usage documentation.

 Usage information is given in these documents:

-*   [Command-line glossary](../g3doc/convert/cmdline_reference.md)
-*   [Command-line examples](../g3doc/convert/cmdline_examples.md)
+*   [Command-line glossary](../g3doc/r1/convert/cmdline_reference.md)
+*   [Command-line examples](../g3doc/r1/convert/cmdline_examples.md)
 *   [Python API examples](../g3doc/convert/python_api.md)

 ## Where the converter fits in the TensorFlow landscape

The line including the drawing (line 29) needs to link here:

https://github.com/tensorflow/tensorflow/tree/3cfbf14e80a4b5feb9e1a786e02ff705b42f83ef/tensorflow/lite/g3doc/r1/images/convert (basically within the r1 directory instead of the main directory)

z3dm4n

comment created time in a month

issue comment tensorflow/tensorflow

ValueError: This converter can only convert a single ConcreteFunction. Converting multiple functions is under development.

For the first code snippet, try removing the concrete_func.inputs[0].set_shape([]) line. That was only necessary because the user in the previous example wanted to set the shape of their input; for your conversion, let's assume the existing input shape is correct. If that still leads to an error, then can you add converter.experimental_new_converter = True, since that converter supports more use cases?

For the last code snippet, try printing out saved_model_obj.signatures.keys() (the saved_model_obj is from the first snippet). Each of the keys is a signature that maps to a concrete function. Usually you want to use the one with serve in it.
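
Putting both suggestions together, a rough sketch (saved_model_dir is a placeholder for your own path):

import tensorflow as tf

saved_model_dir = '/tmp/my_saved_model'  # placeholder path
saved_model_obj = tf.saved_model.load(export_dir=saved_model_dir)

# Inspect the available signatures; usually you want the one with 'serve' in it.
print(saved_model_obj.signatures.keys())

concrete_func = saved_model_obj.signatures['serving_default']

# No set_shape call here; we assume the existing input shape is correct.
converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
converter.experimental_new_converter = True
tflite_model = converter.convert()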

GodsDusk

comment created time in a month

issue closed tensorflow/tensorflow

tflite_convert failed

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04
  • TensorFlow installed from (source or binary): binary
  • TensorFlow version (or github SHA if from source): 1.14

Provide the text output from tflite_convert

(tf_gpu_clone) ridlr@ridlr107:~/TensorFlow/exported_model_12k_quantized$ tflite_convert --output_file tflite_graph.tflite --graph_def_file tflite_graph.pb --input_arrays image_tensor --output_arrays TFLite_Detection_PostProcess --input_shapes 1,576,720,3
2020-01-09 12:10:44.239300: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-01-09 12:10:44.262441: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2808000000 Hz
2020-01-09 12:10:44.262923: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x558c8fa667e0 executing computations on platform Host. Devices:
2020-01-09 12:10:44.262939: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): <undefined>, <undefined>
Traceback (most recent call last):
  File "/home/ridlr/anaconda3/bin/tflite_convert", line 10, in <module>
    sys.exit(main())
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/lite/python/tflite_convert.py", line 503, in main
    app.run(main=run_main, argv=sys.argv[:1])
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/absl/app.py", line 299, in run
    _run_main(main, args)
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/absl/app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/lite/python/tflite_convert.py", line 499, in run_main
    _convert_tf1_model(tflite_flags)
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/lite/python/tflite_convert.py", line 193, in _convert_tf1_model
    output_data = converter.convert()
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/lite/python/lite.py", line 904, in convert
    **converter_kwargs)
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/lite/python/convert.py", line 373, in toco_convert_graph_def
    input_data.SerializeToString())
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/lite/python/convert.py", line 172, in toco_convert_protos
    "TOCO failed. See console for info.\n%s\n%s\n" % (stdout, stderr))
tensorflow.lite.python.convert.ConverterError: TOCO failed. See console for info.
2020-01-09 12:10:45.667362: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: TFLite_Detection_PostProcess
2020-01-09 12:10:45.763812: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 1537 operators, 2264 arrays (0 quantized)
2020-01-09 12:10:45.824420: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 1537 operators, 2264 arrays (0 quantized)
2020-01-09 12:10:46.292215: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 1: 181 operators, 341 arrays (0 quantized)
2020-01-09 12:10:46.295908: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Group bidirectional sequence lstm/rnn: 181 operators, 341 arrays (0 quantized)
2020-01-09 12:10:46.298914: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before dequantization graph transformations: 181 operators, 341 arrays (0 quantized)
2020-01-09 12:10:46.304648: I tensorflow/lite/toco/allocate_transient_arrays.cc:345] Total transient array allocated size: 20160000 bytes, theoretical optimal value: 17280000 bytes.
2020-01-09 12:10:46.305189: I tensorflow/lite/toco/toco_tooling.cc:433] Estimated count of arithmetic ops: 1.29335 billion (note that a multiply-add is counted as 2 ops).
2020-01-09 12:10:46.305598: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/Conv/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305607: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305629: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305633: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_1/expand/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305636: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_1/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305641: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_1/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305645: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_2/expand/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305650: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_2/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305654: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_2/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305658: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_2/post_activation_bypass_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305662: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_3/expand/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305665: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_3/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305669: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_3/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305674: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_4/expand/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305678: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_4/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305681: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_4/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305684: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_4/post_activation_bypass_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305688: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_5/expand/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305692: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_5/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305696: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_5/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305700: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_5/post_activation_bypass_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305703: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_6/expand/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305706: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_6/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305709: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_6/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305713: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_7/expand/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305717: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_7/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305721: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_7/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305725: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_7/post_activation_bypass_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305729: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_8/expand/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305733: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_8/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305737: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_8/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305741: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_8/post_activation_bypass_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305745: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_9/expand/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305749: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_9/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305753: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_9/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305758: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_9/post_activation_bypass_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305762: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_10/expand/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305766: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_10/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305770: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_10/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305774: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_11/expand/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305778: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_11/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305782: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_11/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305786: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_11/post_activation_bypass_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305790: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_12/expand/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305793: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_12/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305796: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_12/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305800: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_12/post_activation_bypass_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305803: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_13/expand/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305807: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_13/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305811: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_13/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305815: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_14/expand/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305819: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_14/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305823: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_14/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305827: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_14/post_activation_bypass_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305831: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_15/expand/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305835: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_15/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305839: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_15/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305843: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_15/post_activation_bypass_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305847: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_16/expand/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305851: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_16/depthwise/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305855: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/expanded_conv_16/project/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305859: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/Conv_1/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305863: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/layer_19_1_Conv2d_2_1x1_256/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305867: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/layer_19_2_Conv2d_2_3x3_s2_512/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305871: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/layer_19_1_Conv2d_3_1x1_128/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305875: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/layer_19_2_Conv2d_3_3x3_s2_256/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305879: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/layer_19_1_Conv2d_4_1x1_128/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305883: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/layer_19_2_Conv2d_4_3x3_s2_256/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305887: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/layer_19_1_Conv2d_5_1x1_64/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305891: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV2/layer_19_2_Conv2d_5_3x3_s2_128/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305896: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output BoxPredictor_0/BoxEncodingPredictor/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305900: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output BoxPredictor_0/ClassPredictor/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305904: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output BoxPredictor_1/BoxEncodingPredictor/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305908: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output BoxPredictor_1/ClassPredictor/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305912: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output BoxPredictor_2/BoxEncodingPredictor/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305916: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output BoxPredictor_2/ClassPredictor/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305920: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output BoxPredictor_3/BoxEncodingPredictor/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305924: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output BoxPredictor_3/ClassPredictor/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305928: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output BoxPredictor_4/BoxEncodingPredictor/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305932: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output BoxPredictor_4/ClassPredictor/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305936: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output BoxPredictor_5/BoxEncodingPredictor/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305940: W tensorflow/lite/toco/tflite/export.cc:456] FAKE_QUANT operation {FakeQuant operator with output BoxPredictor_5/ClassPredictor/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
2020-01-09 12:10:46.305998: E tensorflow/lite/toco/toco_tooling.cc:456] We are continually in the process of adding support to TensorFlow Lite for more ops. It would be helpful if you could inform us of how this conversion went by opening a github issue at https://github.com/tensorflow/tensorflow/issues/new?template=40-tflite-op-request.md
 and pasting the following:

Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If those are native TensorFlow operators, you might be able to use the extended runtime by passing --enable_select_tf_ops, or by setting target_ops=TFLITE_BUILTINS,SELECT_TF_OPS when calling tf.lite.TFLiteConverter(). Otherwise, if you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: ADD, CONCATENATION, CONV_2D, DEPTHWISE_CONV_2D, FAKE_QUANT, LOGISTIC, RESHAPE. Here is a list of operators for which you will need custom implementations: TFLite_Detection_PostProcess.
Traceback (most recent call last):
  File "/home/ridlr/anaconda3/bin/toco_from_protos", line 10, in <module>
    sys.exit(main())
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/lite/toco/python/toco_from_protos.py", line 59, in main
    app.run(main=execute, argv=[sys.argv[0]] + unparsed)
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/absl/app.py", line 299, in run
    _run_main(main, args)
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/absl/app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "/home/ridlr/anaconda3/lib/python3.7/site-packages/tensorflow/lite/toco/python/toco_from_protos.py", line 33, in execute
    output_str = tensorflow_wrap_toco.TocoConvert(model_str, toco_str, input_str)
Exception: We are continually in the process of adding support to TensorFlow Lite for more ops. It would be helpful if you could inform us of how this conversion went by opening a github issue at https://github.com/tensorflow/tensorflow/issues/new?template=40-tflite-op-request.md
 and pasting the following:

Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If those are native TensorFlow operators, you might be able to use the extended runtime by passing --enable_select_tf_ops, or by setting target_ops=TFLITE_BUILTINS,SELECT_TF_OPS when calling tf.lite.TFLiteConverter(). Otherwise, if you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: ADD, CONCATENATION, CONV_2D, DEPTHWISE_CONV_2D, FAKE_QUANT, LOGISTIC, RESHAPE. Here is a list of operators for which you will need custom implementations: TFLite_Detection_PostProcess.

Also, please include a link to a GraphDef or the model if possible.

closed time in a month

batulrangwala

issue closed tensorflow/tensorflow

FusedBatchNormV3 and AddV2 need custom implementations.

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10 Pro, Version 1903
  • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: N/A
  • TensorFlow installed from (source or binary): binary
  • TensorFlow version (use command below): v2.1.0-rc2-17-ge5bf8de410 2.1.0
  • Python version: 3.7
  • Bazel version (if compiling from source): N/A
  • GCC/Compiler version (if compiling from source): N/A
  • CUDA/cuDNN version: 10.2.89
  • GPU model and memory: GeForce GTX 1050 2.00GiB

Describe the current behavior

I have created the simplest possible sequential neural network, which contains only one Batch Normalization layer. When I tried to convert this model to TensorFlow Lite, I got the following message:

Here is a list of operators for which you will need custom implementations: FusedBatchNormV3.

When I tried to set the parameter fused=False, I got the following message:

Here is a list of operators for which you will need custom implementations: AddV2.

Describe the expected behavior

I expect a converted TensorFlow Lite model with optimized Batch Normalization weights.

Code to reproduce the issue

import tensorflow as tf

tflite_path = 'test.tflite'
keras_path = 'test.h5'

bn = tf.keras.layers.BatchNormalization(
    # fused=False,
    input_shape = (128,128,3)
)

# Create model
k_model = tf.keras.models.Sequential()
k_model.add(bn)
k_model.save(keras_path)

# Convert keras model to tflite
converter = tf.lite.TFLiteConverter.from_keras_model(k_model)
tflite_model = converter.convert()
open(tflite_path, 'wb').write(tflite_model)

closed time in a month

Jukyy

issue comment tensorflow/tensorflow

FusedBatchNormV3 and AddV2 need custom implementations.

Closing because it is resolved as an installation issue.

Jukyy

comment created time in a month

issue comment tensorflow/tensorflow

ValueError: This converter can only convert a single ConcreteFunction. Converting multiple functions is under development.

There is documentation on concrete functions here. You will need to use TFLiteConverter.from_concrete_functions directly. There is an example of how to do that here.

GodsDusk

comment created time in a month

issue comment tensorflow/tensorflow

tf-lite 2.0 python

@dmitriykovalev might be able to provide additional help

peter197321

comment created time in a month

issue closed tensorflow/tensorflow

Error description is not clear with new experimental TF_lite_converter

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux, Colab
  • TensorFlow installed from (source or binary): binary
  • TensorFlow version (or github SHA if from source): tf-nightly

Command used to run the converter or code if you’re using the Python API

import tensorflow as tf
mnist = tf.keras.datasets.mnist

(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
# without the following two lines, it will throw
# ValueError: Cannot set tensor: Got value of type NOTYPE but expected type FLOAT32 for input 0, name: flatten_input 
#x_train = tf.dtypes.cast(x_train,tf.float32)
#x_test = tf.dtypes.cast(x_test,tf.float32)

model = tf.keras.models.Sequential([
  tf.keras.layers.Flatten(input_shape=(28, 28)),
  tf.keras.layers.Dense(128, activation='relu'),
  tf.keras.layers.Dropout(0.2),
  tf.keras.layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=1)
model.evaluate(x_test, y_test)

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.experimental_new_converter = True
#converter.experimental_enable_mlir_converter = True
tflite_model = converter.convert()

import numpy as np
expected = model.predict(x_test[0:1])

# Run the model with TensorFlow Lite
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
interpreter.set_tensor(input_details[0]["index"], x_test[0:1, :, :])
interpreter.invoke()
result = interpreter.get_tensor(output_details[0]["index"])

# Assert if the result of TFLite model is consistent with the TF model.
np.testing.assert_almost_equal(expected, result)
print("Done. The result of TensorFlow matches the result of TensorFLow Lite.")

The output from the converter invocation

ValueError: Cannot set tensor: Got value of type NOTYPE but expected type FLOAT32 for input 0, name: flatten_input

Failure details

Conversion is successful if the data type is float32. If the data type of the input data is float64, then it will throw ValueError: Cannot set tensor: Got value of type NOTYPE but expected type FLOAT32 for input 0, name: flatten_input, which is not clear. Most of the Keras models in the tutorials on the TensorFlow website use the float64 datatype, so if users try to convert them into a TFLite model, they will end up with this ValueError. I think we need to update the error description. Instead of showing NOTYPE, it may be better to report float64 or whichever incompatible data type was passed.

Here is the link to the Colab gist.

closed time in a month

jvishnuvardhan

issue comment tensorflow/tensorflow

Error description is not clear with new experimental TF_lite_converter

float64 isn't supported in the TFLite interpreter. The supported types are available here.

Because it's not trivial to change the error message that is shown here without adding support for float64, I am closing this issue.
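
As a workaround until float64 is supported, you can cast the input to float32 before handing it to the interpreter. A minimal sketch, reusing the variable names from the snippet above:

import numpy as np

# x_test is float64 after the division by 255.0; the interpreter wants float32.
input_data = x_test[0:1, :, :].astype(np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
result = interpreter.get_tensor(output_details[0]['index'])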

jvishnuvardhan

comment created time in a month

issue comment tensorflow/models

The error about convert ssd mobilenet v2 coco models into tflite model.

Regarding the command-line approach: TFLite doesn't currently support dynamic shapes. In order to fix this error, you need to use the --input_shapes command-line argument (examples) (reference). However, once you do that, you will get the same error as the Python API.

Regarding the Python API approach: TFLite doesn't support the control flow ops Switch and Merge. We have some tooling to rewrite the operations using control flow in some object detection models. A Medium post detailing how to do that is available here. However, this approach does not work with all object detection models.

If you can generate the model in 2.0, then you can try using the new converter, which is enabled with converter.experimental_new_converter = True. The model needs to be generated in 2.0 in order for the control flow conversion to work. This approach works consistently across models with control flow.
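
For the command-line route, a sketch of passing --input_shapes (the file and array names are placeholders for your own):

tflite_convert \
  --graph_def_file=tflite_graph.pb \
  --output_file=detect.tflite \
  --input_arrays=image_tensor \
  --output_arrays=detections \
  --input_shapes=1,300,300,3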

yangchaoFree

comment created time in a month

issue comment tensorflow/tensorflow

FusedBatchNormV3 and AddV2 need custom implementations.

Taking a closer look at the op, FusedBatchNormV3 has been supported since June 2019. Given that this issue is only occurring in one installation of TensorFlow, it appears to be an installation-related issue. Can you try reinstalling TensorFlow on the machine giving the error, or provide additional information about your installation that might help with debugging?

Jukyy

comment created time in a month

issue comment tensorflow/tensorflow

Here is a list of operators for which you will need custom implementations: BatchMatMul, Erf

You will need to implement Erf as a custom operator because it is not supported in TensorFlow Lite. There are some instructions here.
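
On the converter side, a minimal sketch of letting Erf through as a custom op (saved_model_dir is a placeholder; you still have to register a matching kernel with the interpreter at runtime, per the instructions linked above):

import tensorflow as tf

saved_model_dir = '/tmp/my_saved_model'  # placeholder path
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.allow_custom_ops = True  # emit Erf as a custom op instead of failing
tflite_model = converter.convert()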

ChiuHsin

comment created time in a month

issue comment tensorflow/tensorflow

error when set converter.experimental_new_converter = True

Your model contains control flow and therefore is not supported in our old converter.

Regarding the new converter (i.e. when experimental_new_converter = True), it is not obvious where your error is coming from. Can you provide a reproducible example with functions like resnet_graph defined? Alternatively, can you provide the model file so we can reproduce your conversion on our end?

Reassigning to @karimnosseir who has more familiarity with the new converter errors.

xuchengggg

comment created time in a month

issue comment tensorflow/tensorflow

FusedBatchNormV3 and AddV2 need custom implementations.

This model appears to convert with tf-nightly (pip install tf-nightly) version 2.1.0.dev20200102. We also recommend using converter.experimental_new_converter = True. However, that didn't seem necessary for me to get the model converting.

Jukyy

comment created time in a month

issue comment tensorflow/tensorflow

Problem converting model to tensorflow lite (LSTM model)

Can you try using the following flag converter.experimental_new_converter = True when converting the model?

suha-glal

comment created time in a month

issue comment tensorflow/tensorflow

tflite_convert failed

@jianlijianli Do you happen to have any insight on this quantization related issue?

batulrangwala

comment created time in a month

issue comment tensorflow/tensorflow

[TF2.0]tf.lite.converter.convert() error:Cannot find the Placeholder op that is an input to the ReadVariableOp. watch my second problem

@gunjanddave You should be able to resolve your conversion issues by setting the flag experimental_new_converter to True. If that does not resolve your issues, please file a new bug with your error and details on how to reproduce your error.

tms2003

comment created time in a month

issue comment tensorflow/tensorflow

tflite_convert failed

There is documentation available here which shows an example of converting an SSD model. An example command might look like the following if you are using the Python installation:

tflite_convert \
  --input_file=/tmp/some_quantized_graph.pb \
  --output_file=/tmp/foo.tflite \
  --inference_type=QUANTIZED_UINT8 \
  --input_shape=1,128,128,3 \
  --input_array=input \
  --output_array=MobilenetV1/Predictions/Reshape_1 \
  --mean_value=128 \
  --std_value=127

The details of each command are available here.

batulrangwala

comment created time in a month

issue comment tensorflow/tensorflow

Bug when convert to tflite models.

@renjie-liu I saw you implemented support for Range. Any idea where this error might be coming from?

dathudeptrai

comment created time in a month

issue comment tensorflow/tensorflow

Creating outputs from the final_state returned by dynamic_rnn causes conversion to fail

@renjie-liu Any insight into this error?

xkr1

comment created time in a month

issue comment tensorflow/tensorflow

Converting saved_model to TFLite model using TF 2.0

Reading your original post, one option is to use tf.compat.v1.lite.TFLiteConverter.from_frozen_graph.
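
For example (a minimal sketch; the file path and array names are placeholders for your own):

import tensorflow as tf

# Hypothetical frozen graph path and tensor names; substitute your own.
converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    'frozen_graph.pb',
    input_arrays=['input'],
    output_arrays=['output'])
tflite_model = converter.convert()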

However, regarding the type error: @alanchiao, any insight on it?

chauhansaurabhb

comment created time in a month

issue commenttensorflow/tensorflow

Bug when convert to tflite models.

Can you try setting converter.experimental_new_converter = True when converting the model and see if that fixes your error?

dathudeptrai

comment created time in a month

issue commenttensorflow/tensorflow

Converting saved_model to TFLite model using TF 2.0

Here is a slightly more comprehensive example that shows how to use the SavedModel structure. It is based on the first example in the concrete functions documentation:

# Load the SavedModel.
saved_model_obj = tf.saved_model.load(export_dir=saved_model_dir)

# Load the specific concrete function from the SavedModel.
concrete_func = saved_model_obj.signatures['serving_default']

# Set the shape of the input in the concrete function.
concrete_func.inputs[0].set_shape([])

# Convert the model to a TFLite model.
converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

serving_default is the default signature key in a SavedModel. However, your signature key might be different.

Just for complete context, this is the model generation code for the model above:

import tensorflow as tf

class Pow(tf.Module):
  def __init__(self, exponent):
    self.exponent = tf.Variable(exponent, dtype=tf.float32, name='Pow/exponent')

  @tf.function
  def __call__(self, x):
    return x ** self.exponent

# Generate concrete function.
root = Pow(3)
concrete_func = root.__call__.get_concrete_function(tf.constant(2.))

# Save the generated concrete function as a SavedModel.
saved_model_dir = '/tmp/pow'
tf.saved_model.save(root, saved_model_dir, signatures=concrete_func)
chauhansaurabhb

comment created time in a month

issue commenttensorflow/models

Unable to create model file 'detect.tflite' to use with TensorFlow Lite!

Try using the full path to the model instead of the relative path from the current directory.

thisishasan

comment created time in a month

issue commenttensorflow/tensorflow

Creating outputs from the final_state returned by dynamic_rnn causes conversion to fail

Can you try using the new experimental converter on tf-nightly or on the 2.1 branch, which has more comprehensive support for dynamic_rnn? You can enable the experimental converter by setting converter.experimental_new_converter = True when converting the model. The code will look something like the following:

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.experimental_new_converter = True
tflite_model = converter.convert()
xkr1

comment created time in a month

issue commenttensorflow/tensorflow

Some of the operators in the model are not supported by the standard TensorFlow Lite runtime

If your batch size is always intended to be the same non-1 value, we recommend setting the batch size of the input tensor in the TensorFlow model (e.g. instead of (n, 20, 8) have the model shape be (10, 20, 8)).

However, if that's not the case, then we currently don't support dynamic shapes. There are a few other open issues about adding dynamic shape support: https://github.com/tensorflow/tensorflow/issues/24607, https://github.com/tensorflow/tensorflow/issues/33711. It's an issue we are aware of and intend to resolve.

One solution that works for some users is to call resize_tensor_input on the Interpreter with a new input shape that has the desired batch size. That allows you to change the batch size during inference. However, this is just a workaround and doesn't work for most models (because of the way the model is constructed).
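
For example, resizing to a batch of 10 for the (n, 20, 8) shape above might look like this (a rough sketch; the model path is a placeholder):

import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='model.tflite')  # hypothetical path
input_index = interpreter.get_input_details()[0]['index']
# Resize the input from (1, 20, 8) to (10, 20, 8) before allocating tensors.
interpreter.resize_tensor_input(input_index, [10, 20, 8])
interpreter.allocate_tensors()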

YAIsaienkov

comment created time in 2 months

issue commenttensorflow/tensorflow

gru convert tflite err(KeyError: 'kernel')

@mkz0930 The flag experimental_new_converter is not available in 2.0. Please use the tf-nightly (pip install tf-nightly).

mkz0930

comment created time in 3 months

issue commenttensorflow/tensorflow

toco_from_protos: not found - breaking

If you installed using --user instead of sudo pip install, then you do not have any console commands in your path. You probably need to run export PATH=$PATH:~/.local/bin before running the converter. This is a pip oddity, not really a bug in our converter.

Reassigning to @aselle who has more knowledge about this.

igorhoogerwoord

comment created time in 3 months

issue commenttensorflow/tensorflow

TensorFlow Lite schema updater loses floating-point precision

Reassigning to @aselle who has more context.

vmarkovtsev

comment created time in 3 months

issue commenttensorflow/tensorflow

Freezing a graph using the C API

The primary supported file format in 2.0 is SavedModel. While GraphDef is a part of the SavedModel, it is no longer an independently supported format.
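
If you still need a frozen GraphDef in 2.0, one approach is to freeze a concrete function yourself (a rough sketch using an internal utility, so it may change between versions; the path and signature key are placeholders):

import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

loaded = tf.saved_model.load('/tmp/saved_model')  # hypothetical path
concrete_func = loaded.signatures['serving_default']
# Inline the variables as constants, then extract the plain GraphDef.
frozen_func = convert_variables_to_constants_v2(concrete_func)
graph_def = frozen_func.graph.as_graph_def()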

adaber

comment created time in 3 months

issue commenttensorflow/tensorflow

ValueError: This converter can only convert a single ConcreteFunction. Converting multiple functions is under development.

Can you use tf.keras.models.save_model instead? tf.keras.experimental.export_saved_model has been deprecated. I was able to get the following code working on tf-nightly (version: 2.1.0.dev20191118).

import tensorflow as tf
from tensorflow.keras.applications.mobilenet import MobileNet
model = MobileNet(weights='imagenet')
saved_model_path = "/tmp/saved_models/"
tf.keras.models.save_model(model, saved_model_path)
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_path)
tflite_model = converter.convert()
GodsDusk

comment created time in 3 months

issue commenttensorflow/tensorflow

ValueError: This converter can only convert a single ConcreteFunction. Converting multiple functions is under development.

Please provide details about the platform you are using (operating system and architecture) and your TensorFlow version. Also, did you compile from source or install a binary?

If possible, include the exact command needed to reproduce the output in your test case. If you are unclear about what to include, see the GitHub new issue template.

We ask for this in the issue submission template because it is really difficult to help without that information. Thanks!

GodsDusk

comment created time in 3 months

issue commenttensorflow/tensorflow

how to convert tflite model to float 16?

The document describing the official changes in TFLiteConverter between 1.X and 2.0 is available here. It includes a section on the mapping between the 1.X and 2.0 types.

allenling

comment created time in 3 months

issue commenttensorflow/tensorflow

Tensorflow 2.0 tf.lite.TFLiteConverter.from_keras_model giving 'str' object has no attribute 'call'

To try the new converter, use the tf-nightly pip package and then run:

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.experimental_new_converter = True
tflite_model = converter.convert()
alokmalik

comment created time in 4 months

issue commenttensorflow/tensorflow

Writing tflite file from Keras model throws "Cycle found! We already encountered that input array"

It seems like your model has control flow in it. There are two ways you can try converting your model:

  1. OpHint-based conversion. This is recommended for TensorFlow 1.X.
  2. MLIR-based converter (which only supports TensorFlow 2.0 control flow). To try it out, use the tf-nightly pip package and run:
lstm_model = tf.keras.Sequential(...)
converter = tf.lite.TFLiteConverter.from_keras_model(lstm_model)
converter.experimental_new_converter = True
tflite_model = converter.convert()
ChaiKnight

comment created time in 4 months

issue commenttensorflow/tensorflow

Mixnet graph freezing issue

Can you validate that the feed_dict is in the graph being passed to freeze_graph and that the input arrays didn't get renamed? If that doesn't work, can you provide the checkpoint and pbtxt files, as well as visibility into the code in build_model (imported by mixnet_builder), so that we can reproduce this issue?

dhananjaisharma10

comment created time in 4 months

issue closedtensorflow/tensorflow

TFlite conversion of tf.keras model fails

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04
  • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: No
  • TensorFlow installed from (source or binary): pip install
  • TensorFlow version (use command below):v2.0.0-beta1-0-g8e423e3d56 2.0.0-beta1
  • Python version: 3.6.8
  • Bazel version (if compiling from source):
  • GCC/Compiler version (if compiling from source): 7.4.0
  • CUDA/cuDNN version: No
  • GPU model and memory: Nvidia titan-Xp

Describe the current behavior

import tensorflow as tf
model = tf.keras.models.load_model('keras_model.h5')
model.summary()
Model: "model"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
input_1 (InputLayer)            [(None, 1048576)]    0
__________________________________________________________________________________________________
embedding (Embedding)           (None, 1048576, 8)   2056        input_1[0][0]
__________________________________________________________________________________________________
conv1d (Conv1D)                 (None, 2097, 128)    512128      embedding[0][0]
__________________________________________________________________________________________________
conv1d_1 (Conv1D)               (None, 2097, 128)    512128      embedding[0][0]
__________________________________________________________________________________________________
multiply (Multiply)             (None, 2097, 128)    0           conv1d[0][0]
                                                                 conv1d_1[0][0]
__________________________________________________________________________________________________
global_max_pooling1d (GlobalMax (None, 128)          0           multiply[0][0]
__________________________________________________________________________________________________
dense (Dense)                   (None, 128)          16512       global_max_pooling1d[0][0]
__________________________________________________________________________________________________
dense_1 (Dense)                 (None, 1)            129         dense[0][0]
==================================================================================================
Total params: 1,042,953
Trainable params: 1,042,953
Non-trainable params: 0
__________________________________________________________________________________________________
converter = tf.lite.TFLiteConverter.from_keras_model(model)

The converter fails to convert the model

>>> converter.convert()
2019-09-26 14:39:27.048354: I tensorflow/core/grappler/devices.cc:60] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0 (Note: TensorFlow was not compiled with CUDA support)
2019-09-26 14:39:27.048553: I tensorflow/core/grappler/clusters/single_machine.cc:359] Starting new session
2019-09-26 14:39:27.065544: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:716] Optimization results for grappler item: graph_to_optimize
2019-09-26 14:39:27.066324: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:718]   function_optimizer: function_optimizer did nothing. time = 0.002ms.
2019-09-26 14:39:27.066655: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:718]   function_optimizer: function_optimizer did nothing. time = 0ms.
Traceback (most recent call last):
  File "/home/sridhar/PE_CSV/malenv3/lib/python3.6/site-packages/tensorflow/python/framework/importer.py", line 427, in import_graph_def
    graph._c_graph, serialized, options)  # pylint: disable=protected-access
tensorflow.python.framework.errors_impl.InvalidArgumentError: Input 0 of node model/embedding/embedding_lookup was passed float from model/embedding/embedding_lookup/Read/ReadVariableOp/resource:0 incompatible with expected resource.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/sridhar/PE_CSV/malenv3/lib/python3.6/site-packages/tensorflow/lite/python/lite.py", line 348, in convert
    self._funcs[0])
  File "/home/sridhar/PE_CSV/malenv3/lib/python3.6/site-packages/tensorflow/python/framework/convert_to_constants.py", line 252, in convert_variables_to_constants_v2
    new_output_names)
  File "/home/sridhar/PE_CSV/malenv3/lib/python3.6/site-packages/tensorflow/python/eager/wrap_function.py", line 607, in function_from_graph_def
    wrapped_import = wrap_function(_imports_graph_def, [])
  File "/home/sridhar/PE_CSV/malenv3/lib/python3.6/site-packages/tensorflow/python/eager/wrap_function.py", line 585, in wrap_function
    collections={}),
  File "/home/sridhar/PE_CSV/malenv3/lib/python3.6/site-packages/tensorflow/python/framework/func_graph.py", line 716, in func_graph_from_py_func
    func_outputs = python_func(*func_args, **func_kwargs)
  File "/home/sridhar/PE_CSV/malenv3/lib/python3.6/site-packages/tensorflow/python/eager/wrap_function.py", line 80, in __call__
    return self.call_with_variable_creator_scope(self._fn)(*args, **kwargs)
  File "/home/sridhar/PE_CSV/malenv3/lib/python3.6/site-packages/tensorflow/python/eager/wrap_function.py", line 86, in wrapped
    return fn(*args, **kwargs)
  File "/home/sridhar/PE_CSV/malenv3/lib/python3.6/site-packages/tensorflow/python/eager/wrap_function.py", line 605, in _imports_graph_def
    importer.import_graph_def(graph_def, name="")
  File "/home/sridhar/PE_CSV/malenv3/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "/home/sridhar/PE_CSV/malenv3/lib/python3.6/site-packages/tensorflow/python/framework/importer.py", line 431, in import_graph_def
    raise ValueError(str(e))
ValueError: Input 0 of node model/embedding/embedding_lookup was passed float from model/embedding/embedding_lookup/Read/ReadVariableOp/resource:0 incompatible with expected resource.

I'm able to use the model in a Python-based inference engine. I'm just trying to compress the model to deploy it on a smaller setup and consume it via a C/C++ wrapper.

closed time in 4 months

codeyman

issue commenttensorflow/tensorflow

TFlite conversion of tf.keras model fails

Closing this issue because the code snippet in https://github.com/tensorflow/tensorflow/issues/32849#issuecomment-539646714 works on the nightly (pip install tf-nightly).

@codeyman Please request to reopen if you experience additional problems.

codeyman

comment created time in 4 months

issue commenttensorflow/tensorflow

tf-lite 2.0 python

This link has information on how to build the TensorFlow pip from source.

peter197321

comment created time in 4 months

issue commenttensorflow/tensorflow

Tensorflow Lite conversion misshapes bias vector of FullyConnected

@ANSHUMAN87 Do you have an update on your PR?

dukecyto

comment created time in 4 months

issue commenttensorflow/tensorflow

Tensorflow Lite conversion misshapes bias vector of FullyConnected

Automatically closing due to lack of recent activity. Please update the issue when new information becomes available, and we will reopen the issue. Thanks!

dukecyto

comment created time in 4 months

issue closedtensorflow/tensorflow

Tensorflow Lite conversion misshapes bias vector of FullyConnected

System information

  • Have I written custom code: tflite converter code is straight from an example script
  • OS Platform and Distribution: Mac OS 10.14.5
  • TensorFlow installed from: pip install tf-nightly and pip install tensorflow
  • TensorFlow version: tested on v1.12.1-5178-gbafa0371c8 1.15.0-dev20190628 and v1.13.0-rc2-5-g6612da8951 1.13.1
  • Python version: 3.6.5

Describe the current behavior The TFLite converter incorrectly shapes the bias for FullyConnected operators. Specifically, in my test case (see attached model below), in the original frozen-graph model, MatMul_6 takes the product of a 32x12 matrix and a 12x1 vector, then add_7 adds a 32x1 vector to it as a bias. The converted TFLite model fuses these two operations into a FullyConnected op, and somehow its bias MatMul_6_bias is incorrectly shaped as a single-element vector. Consequently, the inference result of this TFLite model is incorrect.

Describe the expected behavior The bias vectors should be shaped as they were in the original frozen-graph model.

Code to reproduce the issue tflite_bias_shape_issue.zip This zip file contains debug.pb (TF freeze graph model) and debug.tflite (TFLite converted model from frozen model). The conversion code is taken straight from the document:

import tensorflow as tf
graph_def_file = "debug.pb"
input_arrays = ["input"]
output_arrays = ["Reshape_1"]
converter = tf.lite.TFLiteConverter.from_frozen_graph(graph_def_file, input_arrays, output_arrays)
tflite_model = converter.convert()
open("debug.tflite", "wb").write(tflite_model)

The model has extra operators in the beginning (Sub, Div, Gather) just because I did not have time to rebuild a bare minimal test case, but I think it is already simple enough.

closed time in 4 months

dukecyto

issue closedtensorflow/tensorflow

Bug in embedding_ops.py Leads to Crash when importing Frozen Wide and Deep model/graph

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): CentOS
  • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: N/A
  • TensorFlow installed from (source or binary): Source
  • TensorFlow version (use command below): 1.12, 1.13 and master
  • Python version: 2.7
  • Bazel version (if compiling from source): 0.19.2
  • GCC/Compiler version (if compiling from source): GCC 6.3
  • CUDA/cuDNN version: N/A
  • GPU model and memory: N/A

Describe the current behavior After successfully training and exporting the trained Wide and Deep model from here: https://github.com/tensorflow/models/tree/master/official/wide_deep I tried to freeze the exported model using freeze_graph.py. The frozen graph was generated without errors. However, when I tried to load the frozen graph using tf.import_graph_def(graph_def), I got the following error: File "../python2.7/site-packages/tensorflow/python/framework/importer.py", line 430, in import_graph_def raise ValueError(str(e)) ValueError: Input 0 of node import/linear/linear_model/linear_model/linear_model/age_bucketized/weighted_sum/embedding_lookup_sparse/embedding_lookup was passed float from import/linear/linear_model/age_bucketized/weights/part_0:0 incompatible with expected resource.

After inspecting the frozen graph, we found that the ResourceGather op receives a float from a Const node (which was a VarHandleOp before freezing), but ResourceGather expects the 'resource' data type.

The issue was resolved after changing this line https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/embedding_ops.py#L693 by removing the if statement and calling convert_to_tensor() unconditionally.

Describe the expected behavior The graph is expected to load and run successfully. Code to reproduce the issue https://github.com/tensorflow/models/tree/master/official/wide_deep

closed time in 4 months

mahmoud-abuzaina

issue commenttensorflow/tensorflow

Bug in embedding_ops.py Leads to Crash when importing Frozen Wide and Deep model/graph

Automatically closing due to lack of recent activity. Please update the issue when new information becomes available, and we will reopen the issue. Thanks!

mahmoud-abuzaina

comment created time in 4 months

issue commenttensorflow/tensorflow

Tflite coverter error. tensorflow/lite/toco/tooling_util.cc:935

An Exit op means that this graph contains control flow. TensorFlow Lite doesn't support control flow generated by TensorFlow 1.X, so this functionality is not expected to work. We are working on supporting 2.0 control flow, so please regenerate your graph in 2.0 if you want to run it using TensorFlow Lite.
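
As an illustration, 2.0-style control flow written inside a tf.function looks like the following (a minimal sketch; whether a given loop converts still depends on op support in the new converter):

import tensorflow as tf

@tf.function(input_signature=[tf.TensorSpec(shape=[], dtype=tf.float32)])
def countdown(x):
  while x > 0:  # AutoGraph lowers this to 2.0 (functional) control flow ops.
    x = x - 1.0
  return x

converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [countdown.get_concrete_function()])
converter.experimental_new_converter = True
tflite_model = converter.convert()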

tdas714

comment created time in 4 months

Pull request review commenttensorflow/tensorflow

Update documentation for TFLiteConverterV2

 def from_concrete_functions(cls, funcs):
     Args:
       funcs: List of TensorFlow ConcreteFunctions. The list should not contain
-        duplicate elements.
+        duplicate elements. Currently converter can only convert a single
+        ConcreteFunction. Converting multiple functions is under development."

nit: Remove the quotation mark at the end.

lc0

comment created time in 5 months

issue closedtensorflow/tensorflow

Test //tensorflow/lite/python:tflite_convert_test is broken

I run the Python tests for the TFLite converter using this target: //tensorflow/lite/python:tflite_convert_test

and I get the error: ERROR: missing input file '//tensorflow/lite/python:tflite_convert.par'

Please take a look and include this test data, or maybe I am running it the wrong way? Thanks

closed time in 5 months

wwwind

issue commenttensorflow/tensorflow

Test //tensorflow/lite/python:tflite_convert_test is broken

The test is not expected to work in open source currently.

wwwind

comment created time in 5 months

issue commenttensorflow/tensorflow

Tensorflow 2.0 tf.lite.TFLiteConverter.from_keras_model giving 'str' object has no attribute 'call'

LSTMs are currently a work in progress.

As we work on adding support, can you provide a minimal repro of your issue? Either provide the model, or a modified version of the model or model code, to help investigate the issue.

alokmalik

comment created time in 5 months

Pull request review commenttensorflow/tensorflow

Release Notes for 2.0.0-rc0

+# Release 2.0.0
+...
+* `tf.keras`:
+  * Deprecated `tf.keras.experimental.export_saved_model` and `tf.keras.experimental.function`. Please use `tf.keras.models.save_model(..., save_format='tf')` and `tf.keras.models.load_model` instead.
+...

@martinwicke Yup I mean removed. Should it be "Removed" instead of "Deprecated"?

goldiegadde

comment created time in 5 months

Pull request review commenttensorflow/tensorflow

Release Notes for 2.0.0-rc0

+# Release 2.0.0
+...
+## Breaking Changes
+...
+* `tf.lite`:
+  * Removed `lite.OpHint`, `lite.experimental`, and `lite.constant` from 2.0 API.
+...

Can you add "Deprecated freeze_graph command line tool."

Thanks!

goldiegadde

comment created time in 5 months

issue commenttensorflow/tensorflow

Unsupported ops

We have some general guidance on unsupported operations in our FAQ here. However, for the specific ops you mentioned, we are currently working on supporting them through this path, as indicated on our roadmap. Control flow is still a work in progress.
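
In the meantime, one option that covers some unsupported TensorFlow ops is to fall back to the TensorFlow kernels via select TF ops (a sketch; this requires a runtime built with select-ops/Flex support, and saved_model_dir is a placeholder):

import tensorflow as tf

saved_model_dir = '/tmp/my_model'  # hypothetical path
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
# Use builtin TFLite kernels where available, TensorFlow kernels otherwise.
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]
tflite_model = converter.convert()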

leeor-langer

comment created time in 6 months

Pull request review commenttensorflow/tensorflow

Check types of name parameters

 def get_tensors_from_tensor_names(graph, tensor_names):
   tensors = []
   invalid_tensors = []
   for name in tensor_names:
+    if not isinstance(name,str):

nit: space before str

grwlf

comment created time in 6 months

issue commenttensorflow/tensorflow

Bug in embedding_ops.py Leads to Crash when importing Frozen Wide and Deep model/graph

@CharlesKung A few questions/comments:

  1. Can you attach the input files used in your code snippet so that I can run it and reproduce the issue locally? Additionally, can you update your code snippet to work standalone? I noticed output_nodes is not defined.
  2. Why are you changing the logic in the embedding op? In general, freeze graph works by pattern matching. If you change the patterns, it will no longer work.
  3. Iterator ops should not be removed unless you specify the output nodes as nodes before the Iterator ops. However, by default nothing happens to them in freeze graph.
  4. The inputs of your frozen graph should be the same as those of the original graph you created. The only thing that changes in freeze graph is that Variable ops are converted to Const ops (and other similar transformations relating to resources). The structure of the graph and the op names stay the same. If you are uncertain what the inputs should be, I would suggest loading the frozen graph in TensorBoard, or inspecting the GraphDef directly, as in the sketch below.
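
Something like the following can list the ops in a frozen graph (a rough sketch; the file name is a placeholder):

import tensorflow as tf

# Load the frozen GraphDef from disk.
graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile('frozen.pb', 'rb') as f:
  graph_def.ParseFromString(f.read())

# Print the first few node names and op types to find the inputs.
for node in graph_def.node[:10]:
  print(node.name, node.op)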
mahmoud-abuzaina

comment created time in 6 months

more