
We cannot duplicate the value since it's not constant. Failed to duplicate values for the stateful op.

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04
  • TensorFlow installed from (source or binary): binary
  • TensorFlow version (or github SHA if from source): 2.3.0-dev20200609

Command used to run the converter, or code if you're using the Python API (if possible, please share a link to a Colab/Jupyter/any notebook):

import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Bidirectional, LSTM, Dense

X = Input(shape=(None, 150), name='input')
Xc = Bidirectional(LSTM(20, return_sequences=True))(X)
Y = Dense(10, activation=tf.nn.softmax, name='output')(Xc)

model = Model(inputs=X, outputs=Y)
loss = tf.keras.losses.CategoricalCrossentropy()
model.compile(optimizer='adam',
              loss=loss,
              metrics=['accuracy'])
model.summary()

inputData = np.ones([100, 200, 150])
outputData = np.ones([100, 200, 10])

model.fit(x=inputData, y=outputData, epochs=2)

run_model = tf.function(lambda x: model(x))
BATCH_SIZE = 1
STEPS = None  # conversion succeeds with a fixed value, e.g. 100
INPUT_SIZE = 150
concrete_func = run_model.get_concrete_function(
    tf.TensorSpec([BATCH_SIZE, STEPS, INPUT_SIZE], model.inputs[0].dtype))
MODEL_DIR = "./saved_model"
model.save(MODEL_DIR, save_format="tf", signatures=concrete_func)

converter = tf.lite.TFLiteConverter.from_saved_model(MODEL_DIR)
tflite_model = converter.convert() # Error!

The output from the converter invocation

---------------------------------------------------------------------------
Exception                                 Traceback (most recent call last)
~/VirtualEnv/ENV37-TF23NT/lib/python3.7/site-packages/tensorflow/lite/python/convert.py in toco_convert_protos(model_flags_str, toco_flags_str, input_data_str, debug_info_str, enable_mlir_converter)
    184                                                  debug_info_str,
--> 185                                                  enable_mlir_converter)
    186       return model_str

~/VirtualEnv/ENV37-TF23NT/lib/python3.7/site-packages/tensorflow/lite/python/wrap_toco.py in wrapped_toco_convert(model_flags_str, toco_flags_str, input_data_str, debug_info_str, enable_mlir_converter)
     37       debug_info_str,
---> 38       enable_mlir_converter)
     39 

Exception: <unknown>:0: error: loc(callsite(callsite(callsite(unknown at "functional_1/bidirectional/backward_lstm/PartitionedCall@__inference_<lambda>_6523") at "StatefulPartitionedCall@__inference_signature_wrapper_6546") at "StatefulPartitionedCall")): We cannot duplicate the value since it's not constant.

<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: note: loc(callsite(callsite(callsite(unknown at "functional_1/bidirectional/backward_lstm/PartitionedCall@__inference_<lambda>_6523") at "StatefulPartitionedCall@__inference_signature_wrapper_6546") at "StatefulPartitionedCall")): see current operation: %5 = "tfl.unidirectional_sequence_lstm"(%4, %cst_13, %cst_14, %cst_15, %cst_16, %cst_5, %cst_6, %cst_7, %cst_8, %cst_32, %cst_32, %cst_32, %cst_9, %cst_10, %cst_11, %cst_12, %cst_32, %cst_32, %3, %3, %cst_32, %cst_32, %cst_32, %cst_32) {cell_clip = 1.000000e+01 : f32, fused_activation_function = "TANH", proj_clip = 0.000000e+00 : f32, time_major = false} : (tensor<1x?x150xf32>, tensor<20x150xf32>, tensor<20x150xf32>, tensor<20x150xf32>, tensor<20x150xf32>, tensor<20x20xf32>, tensor<20x20xf32>, tensor<20x20xf32>, tensor<20x20xf32>, none, none, none, tensor<20xf32>, tensor<20xf32>, tensor<20xf32>, tensor<20xf32>, none, none, tensor<?x20xf32>, tensor<?x20xf32>, none, none, none, none) -> tensor<1x?x20xf32>
<unknown>:0: error: Failed to duplicate values for the stateful op

<unknown>:0: note: see current operation: "func"() ( {
^bb0(%arg0: tensor<1x?x150xf32>):  // no predecessors
  %cst = "std.constant"() {value = dense<[0.00800104905, 8.002180e-03, 0.00801025052, 0.00799552072, 0.00798829272, 0.00801645405, 0.00800046883, 0.00799199379, 0.00802111439, 0.00795717072]> : tensor<10xf32>} : () -> tensor<10xf32>
  %cst_0 = "std.constant"() {value = dense<0.000000e+00> : tensor<f32>} : () -> tensor<f32>
  %cst_1 = "std.constant"() {value = dense<20> : tensor<i32>} : () -> tensor<i32>
  %cst_2 = "std.constant"() {value = dense<10> : tensor<1xi32>} : () -> tensor<1xi32>
  %cst_3 = "std.constant"() {value = dense<2> : tensor<1xi32>} : () -> tensor<1xi32>
  %cst_4 = "std.constant"() {value = dense<[0, 1]> : tensor<2xi32>} : () -> tensor<2xi32>
  %cst_5 = "std.constant"() {value = dense<"0x43B055BE41EB30BC7CBC073ED64B033EBE556E3C656489BD3DF794BD644D33BDF3AF61BDC7E0DD3C41C3E53CB1A15F3D1102DABCA10234BC40563B3E12127F3D34B0CB3D39A77DBD0517453DDFE1F7BDDF83473CB66DFA3BF23B6BBED780F5BDD89FB43CFAA7FB3D7C449DBD80AB69BB42E56EBDF801553D664AA83C8219073E4759D13D1AA96A3E2B3580BDD08902BE2A3405BECFA8AC3D696411BE89C88ABDE13E093EE903043EC2340B3E6836BD3D623122BCC7B0B1BBDB32BCBD2A6CDABD6517CD3D713F913D2AEF9B3D40CAC93DF5E3D63D703676BECAF9A9BD1F9C1D3D9496273E52D2533CC6B00A3EC863933C6BB56BBEC922CCBD89AA5F3E89AEAEBD0B3A193E3626443E9B07443DA23B1C3C671945BD45112EBCCD87913EB31949BD3EB81D3E2182C43C4164B53D9C74F1BD1C88EEBCF59AD0BCBCBEE6BC9D43D33C4FF6913DF8B692BD271E663E1CC7E73DCC196E3C836EC03D3B91303E0AFA53BB7452F9BD2B8CF23D38CC613DA1F6B43DAEAA68BDD221863CA34B03BEDEFD6FBCA03C45BE6CA8633E4CACA53C0D6CACBC086D883DAC02753B878C99BEC33C14BC1090123BBD892D3EDCE4D43D64E2C7BC44BBC53CC9F8CA3D0670763EB46F24BB9B0B20BD94E4AF3D3CDE65BD8ECD503E9235D13D412F20BE2D648C3D39F6B73C6111323E4841CCBCA8EDB5BDC86956BBB2CA3BBD984065BD906B083EED91923DBF5CABBC9376F73CE42FA03D560D18BDF153CFBC4993A93D0755093EA4F544BEEF01823B8B56013E9D3DE23C3BA71D3EB2BC3CBEDADA243D2818773CF66D073E94D01E3EB463AB3DFC3227BEBC0E0DBE2EA1CFBBD458273DBCD92F3EA507993E47BAC2BDEEB640BD92CA673D254C253DEEA2883C177BC13C29AE1A3D0F6511BAF3D859BEE97216BEB7C9BF3BEAA9BEBD3C95AD3B23F37EBDD18329BD46EF113DDDF18C3EEA04543EC3B81C3EF10615BDCD10CD3CB6
...
...
EBDA94F863E437E6CBEB1F8C9BC29E166BE0A6C7EBEBB491B3E94C1363E308BBC3DE7969A3E3D763ABEC7248D3E30D3623E0F67E73D35859FBE522F913D78492CBE84823A3EBD52163DFA137F3EDC103B3E6D13A23E80CFAB3D1E78B4BD881468BEA37B553E50C436BECFC833BE3113403EC29BACBD2DEB62BCA9EB8A3D3F62AD3E5578843DA76FCE3D895EE5BB95318B3ED4D2F7BD209935BEA21190BE98DA9EBE1693A13E7067A73E3B8A813E813E363E6FBBA5BD5E5E54BECFF6A73EB52769BD9E95A53ED071973DC200E63DD239AFBD86A4A3BEA2E3CA3D1DDA04BE316DCA3D1F459F3D998431BE38311E3EE0D2A83E33F3543E39295D3D2FF682BE7AF0AC3EDA1AAABEFC26A3BEE20D66BEBD3D913E8267A13EEA65793E6ECE44BD951A10BE496502BE2E077FBE6F6AD0BD492369BE7B4E9DBDD86385BE552FB33E64C755BE8BA8B0BEF02B3CBE828D9BBEDE08ECBD53AF07BEF4FC093D44F7ADBDCE144C3EE2313ABD65A4AEBE81A0313E03F398BE305EBEBCAD4B763EE34F8BBE4B29A03CD167903E799B263E11073EBD21791DBD47585D3E102063BE11FA21BD32BA48BE7F3923BE88BD073E3A2F11BDCCA56DBE8FDA623EAD3D583DCA74B83DDE6A8FBCAC2A9CBE400576BD125E03BEEEAB59BC8FFF343E5E83773E"> : tensor<10x40xf32>} : () -> tensor<10x40xf32>
  %cst_30 = "std.constant"() {value = dense<0> : tensor<1xi32>} : () -> tensor<1xi32>
  %cst_31 = "std.constant"() {value = dense<1> : tensor<1xi32>} : () -> tensor<1xi32>
  %cst_32 = "std.constant"() {value} : () -> none
  %0 = "tfl.shape"(%arg0) : (tensor<1x?x150xf32>) -> tensor<3xi32>
  %1 = "tfl.strided_slice"(%0, %cst_30, %cst_31, %cst_31) {begin_mask = 0 : i32, ellipsis_mask = 0 : i32, end_mask = 0 : i32, new_axis_mask = 0 : i32, shrink_axis_mask = 1 : i32} : (tensor<3xi32>, tensor<1xi32>, tensor<1xi32>, tensor<1xi32>) -> tensor<i32>
  %2 = "tfl.pack"(%1, %cst_1) {axis = 0 : i32, values_count = 2 : i32} : (tensor<i32>, tensor<i32>) -> tensor<2xi32>
  %3 = "tfl.fill"(%2, %cst_0) : (tensor<2xi32>, tensor<f32>) -> tensor<?x20xf32>
  %4 = "tfl.reverse_v2"(%arg0, %cst_31) : (tensor<1x?x150xf32>, tensor<1xi32>) -> tensor<1x?x150xf32>
  %5 = "tfl.unidirectional_sequence_lstm"(%4, %cst_13, %cst_14, %cst_15, %cst_16, %cst_5, %cst_6, %cst_7, %cst_8, %cst_32, %cst_32, %cst_32, %cst_9, %cst_10, %cst_11, %cst_12, %cst_32, %cst_32, %3, %3, %cst_32, %cst_32, %cst_32, %cst_32) {cell_clip = 1.000000e+01 : f32, fused_activation_function = "TANH", proj_clip = 0.000000e+00 : f32, time_major = false} : (tensor<1x?x150xf32>, tensor<20x150xf32>, tensor<20x150xf32>, tensor<20x150xf32>, tensor<20x150xf32>, tensor<20x20xf32>, tensor<20x20xf32>, tensor<20x20xf32>, tensor<20x20xf32>, none, none, none, tensor<20xf32>, tensor<20xf32>, tensor<20xf32>, tensor<20xf32>, none, none, tensor<?x20xf32>, tensor<?x20xf32>, none, none, none, none) -> tensor<1x?x20xf32>
  %6 = "tfl.reverse_v2"(%5, %cst_31) : (tensor<1x?x20xf32>, tensor<1xi32>) -> tensor<1x?x20xf32>
  %7 = "tfl.unidirectional_sequence_lstm"(%arg0, %cst_25, %cst_26, %cst_27, %cst_28, %cst_17, %cst_18, %cst_19, %cst_20, %cst_32, %cst_32, %cst_32, %cst_21, %cst_22, %cst_23, %cst_24, %cst_32, %cst_32, %3, %3, %cst_32, %cst_32, %cst_32, %cst_32) {cell_clip = 1.000000e+01 : f32, fused_activation_function = "TANH", proj_clip = 0.000000e+00 : f32, time_major = false} : (tensor<1x?x150xf32>, tensor<20x150xf32>, tensor<20x150xf32>, tensor<20x150xf32>, tensor<20x150xf32>, tensor<20x20xf32>, tensor<20x20xf32>, tensor<20x20xf32>, tensor<20x20xf32>, none, none, none, tensor<20xf32>, tensor<20xf32>, tensor<20xf32>, tensor<20xf32>, none, none, tensor<?x20xf32>, tensor<?x20xf32>, none, none, none, none) -> tensor<1x?x20xf32>
  %8 = "tfl.concatenation"(%7, %6) {axis = 2 : i32, fused_activation_function = "NONE"} : (tensor<1x?x20xf32>, tensor<1x?x20xf32>) -> tensor<1x?x40xf32>
  %9 = "tfl.shape"(%8) : (tensor<1x?x40xf32>) -> tensor<3xi32>
  %10 = "tfl.gather"(%9, %cst_4) {axis = 0 : i32} : (tensor<3xi32>, tensor<2xi32>) -> tensor<2xi32>
  %11 = "tfl.reduce_prod"(%10, %cst_30) {keep_dims = false} : (tensor<2xi32>, tensor<1xi32>) -> tensor<i32>
  %12 = "tfl.concatenation"(%10, %cst_2) {axis = 0 : i32, fused_activation_function = "NONE"} : (tensor<2xi32>, tensor<1xi32>) -> tensor<3xi32>
  %13 = "tfl.gather"(%9, %cst_3) {axis = 0 : i32} : (tensor<3xi32>, tensor<1xi32>) -> tensor<1xi32>
  %14 = "tfl.reduce_prod"(%13, %cst_30) {keep_dims = false} : (tensor<1xi32>, tensor<1xi32>) -> tensor<i32>
  %15 = "tfl.pack"(%11, %14) {axis = 0 : i32, values_count = 2 : i32} : (tensor<i32>, tensor<i32>) -> tensor<2xi32>
  %16 = "tfl.reshape"(%8, %15) : (tensor<1x?x40xf32>, tensor<2xi32>) -> tensor<?x?xf32>
  %17 = "tfl.fully_connected"(%16, %cst_29, %cst_32) {fused_activation_function = "NONE", keep_num_dims = false, weights_format = "DEFAULT"} : (tensor<?x?xf32>, tensor<10x40xf32>, none) -> tensor<?x10xf32>
  %18 = "tfl.reshape"(%17, %12) : (tensor<?x10xf32>, tensor<3xi32>) -> tensor<?x?x?xf32>
  %19 = "tfl.add"(%18, %cst) {fused_activation_function = "NONE"} : (tensor<?x?x?xf32>, tensor<10xf32>) -> tensor<?x?x10xf32>
  %20 = "tfl.softmax"(%19) {beta = 1.000000e+00 : f32} : (tensor<?x?x10xf32>) -> tensor<?x?x10xf32>
  "std.return"(%20) : (tensor<?x?x10xf32>) -> ()
}) {arg0 = {tf_saved_model.index_path = ["x"]}, result0 = {tf_saved_model.index_path = ["output_0"]}, sym_name = "serving_default", tf.entry_function = {control_outputs = "", inputs = "serving_default_x:0", outputs = "StatefulPartitionedCall:0"}, tf_saved_model.exported_names = ["serving_default"], type = (tensor<1x?x150xf32>) -> tensor<?x?x10xf32>} : () -> ()

Failure details

  • Conversion succeeds if STEPS is set to an integer value (e.g. 100).
  • Conversion fails if STEPS is set to None.

Answer (renjie-liu)

Only the conversion requires a fixed shape; you can resize the input later at runtime.

