
started hediet/vscode-debug-visualizer

started time in a month

issue comment tensorflow/tensorflow

interpreter.invoke() of tflite model causes Aborted (core dumped) despite successful tflite conversion under tensorflow version 1.14.0

I tested it, but the error persists:

Starting program: /usr/bin/python3 tfLiteinference.py
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
[New Thread 0x7ffff412a700 (LWP 14446)]
[New Thread 0x7ffff1929700 (LWP 14447)]
[New Thread 0x7fffef128700 (LWP 14448)]
/home/alex/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/alex/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/alex/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/alex/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/alex/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/alex/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
[Thread 0x7fffef128700 (LWP 14448) exited]
[Thread 0x7ffff1929700 (LWP 14447) exited]
[Thread 0x7ffff412a700 (LWP 14446) exited]
/home/alex/.local/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/alex/.local/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/alex/.local/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/alex/.local/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/alex/.local/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/alex/.local/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
1.14.0
INFO: Initialized TensorFlow Lite runtime.
[{'name': 'image', 'index': 54, 'shape': array([  1, 640, 480,   1], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0)}]
[{'name': 'descriptor', 'index': 52, 'shape': array([   1, 4096], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0)}]
(1, 640, 480, 1)
[New Thread 0x7fffef128700 (LWP 14452)]
[New Thread 0x7ffff1929700 (LWP 14453)]
[New Thread 0x7ffff412a700 (LWP 14454)]
[New Thread 0x7fffc1984700 (LWP 14455)]

Thread 1 "python3" received signal SIGABRT, Aborted.
__GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
51      ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
#1  0x00007ffff7a24801 in __GI_abort () at abort.c:79
#2  0x00007fffc68e2203 in tflite::RuntimeShape::RuntimeShape(int, tflite::RuntimeShape const&, int) ()
   from /home/alex/.local/lib/python3.6/site-packages/tensorflow/lite/python/interpreter_wrapper/_tensorflow_wrap_interpreter_wrapper.so
#3  0x00007fffc68e2b87 in void tflite::NdArrayDescsForElementwiseBroadcast<4>(tflite::RuntimeShape const&, tflite::RuntimeShape const&, tflite::NdArrayDesc<4>*, tflite::NdArrayDesc<4>*) ()
   from /home/alex/.local/lib/python3.6/site-packages/tensorflow/lite/python/interpreter_wrapper/_tensorflow_wrap_interpreter_wrapper.so
#4  0x00007fffc69bf8ab in tflite::reference_ops::BroadcastSub4DSlow(tflite::ArithmeticParams const&, tflite::RuntimeShape const&, float const*, tflite::RuntimeShape const&, float const*, tflite::RuntimeShape const&, float*) ()
   from /home/alex/.local/lib/python3.6/site-packages/tensorflow/lite/python/interpreter_wrapper/_tensorflow_wrap_interpreter_wrapper.so
#5  0x00007fffc69bfc94 in void tflite::ops::builtin::sub::EvalSub<(tflite::ops::builtin::sub::KernelType)2>(TfLiteContext*, TfLiteNode*, TfLiteSubParams*, tflite::ops::builtin::sub::OpData const*, TfLiteTensor const*, TfLiteTensor const*, TfLiteTensor*) () from /home/alex/.local/lib/python3.6/site-packages/tensorflow/lite/python/interpreter_wrapper/_tensorflow_wrap_interpreter_wrapper.so
#6  0x00007fffc69c0eb4 in TfLiteStatus tflite::ops::builtin::sub::Eval<(tflite::ops::builtin::sub::KernelType)2>(TfLiteContext*, TfLiteNode*) ()
   from /home/alex/.local/lib/python3.6/site-packages/tensorflow/lite/python/interpreter_wrapper/_tensorflow_wrap_interpreter_wrapper.so
#7  0x00007fffc69fb31f in tflite::Subgraph::Invoke() () from /home/alex/.local/lib/python3.6/site-packages/tensorflow/lite/python/interpreter_wrapper/_tensorflow_wrap_interpreter_wrapper.so
#8  0x00007fffc69fdfa0 in tflite::Interpreter::Invoke() () from /home/alex/.local/lib/python3.6/site-packages/tensorflow/lite/python/interpreter_wrapper/_tensorflow_wrap_interpreter_wrapper.so
#9  0x00007fffc68dc768 in tflite::interpreter_wrapper::InterpreterWrapper::Invoke() () from /home/alex/.local/lib/python3.6/site-packages/tensorflow/lite/python/interpreter_wrapper/_tensorflow_wrap_interpreter_wrapper.so
#10 0x00007fffc68da507 in _wrap_InterpreterWrapper_Invoke () from /home/alex/.local/lib/python3.6/site-packages/tensorflow/lite/python/interpreter_wrapper/_tensorflow_wrap_interpreter_wrapper.so
---Type <return> to continue, or q <return> to quit---
#11 0x000000000050a8af in _PyCFunction_FastCallDict (kwargs=<optimized out>, nargs=<optimized out>, args=<optimized out>, func_obj=<built-in method InterpreterWrapper_Invoke of module object at remote 0x7fffc6a4ed68>)
    at ../Objects/methodobject.c:234
#12 _PyCFunction_FastCallKeywords (kwnames=<optimized out>, nargs=<optimized out>, stack=<optimized out>, func=<optimized out>) at ../Objects/methodobject.c:294
#13 call_function.lto_priv () at ../Python/ceval.c:4851
#14 0x000000000050c5b9 in _PyEval_EvalFrameDefault () at ../Python/ceval.c:3335
#15 0x0000000000509d48 in PyEval_EvalFrameEx (throwflag=0, 
    f=Frame 0x7fffc6a52c18, for file /home/alex/.local/lib/python3.6/site-packages/tensorflow/lite/python/interpreter_wrapper/tensorflow_wrap_interpreter_wrapper.py, line 109, in Invoke (self=<InterpreterWrapper(this=<SwigPyObject at remote 0x7fffc6b3ed80>) at remote 0x7ffff67ed4a8>)) at ../Python/ceval.c:754
#16 _PyFunction_FastCall (globals=<optimized out>, nargs=140736526101528, args=<optimized out>, co=<optimized out>) at ../Python/ceval.c:4933
#17 fast_function.lto_priv () at ../Python/ceval.c:4968
#18 0x000000000050aa7d in call_function.lto_priv () at ../Python/ceval.c:4872
#19 0x000000000050c5b9 in _PyEval_EvalFrameDefault () at ../Python/ceval.c:3335
#20 0x0000000000509d48 in PyEval_EvalFrameEx (throwflag=0, 
    f=Frame 0x7fffce6721f0, for file /home/alex/.local/lib/python3.6/site-packages/tensorflow/lite/python/interpreter.py, line 304, in invoke (self=<Interpreter(_interpreter=<InterpreterWrapper(this=<SwigPyObject at remote 0x7fffc6b3ed80>) at remote 0x7ffff67ed4a8>) at remote 0x7ffff67ed470>)) at ../Python/ceval.c:754
#21 _PyFunction_FastCall (globals=<optimized out>, nargs=140736656253424, args=<optimized out>, co=<optimized out>) at ../Python/ceval.c:4933
---Type <return> to continue, or q <return> to quit---
#22 fast_function.lto_priv () at ../Python/ceval.c:4968
#23 0x000000000050aa7d in call_function.lto_priv () at ../Python/ceval.c:4872
#24 0x000000000050c5b9 in _PyEval_EvalFrameDefault () at ../Python/ceval.c:3335
#25 0x0000000000508245 in PyEval_EvalFrameEx (throwflag=0, f=Frame 0xae0b68, for file tfLiteinference.py, line 22, in <module> ()) at ../Python/ceval.c:754
#26 _PyEval_EvalCodeWithName.lto_priv.1836 () at ../Python/ceval.c:4166
#27 0x000000000050b403 in PyEval_EvalCodeEx (closure=0x0, kwdefs=0x0, defcount=0, defs=0x0, kwcount=0, kws=0x0, argcount=0, args=0x0, locals=<optimized out>, globals=<optimized out>, _co=<optimized out>) at ../Python/ceval.c:4187
#28 PyEval_EvalCode (co=<optimized out>, globals=<optimized out>, locals=<optimized out>) at ../Python/ceval.c:731
#29 0x0000000000635222 in run_mod () at ../Python/pythonrun.c:1025
#30 0x00000000006352d7 in PyRun_FileExFlags () at ../Python/pythonrun.c:978
#31 0x0000000000638a8f in PyRun_SimpleFileExFlags () at ../Python/pythonrun.c:419
#32 0x0000000000638c65 in PyRun_AnyFileExFlags () at ../Python/pythonrun.c:81
#33 0x0000000000639631 in run_file (p_cf=0x7fffffffd94c, filename=<optimized out>, fp=<optimized out>) at ../Modules/main.c:340
#34 Py_Main () at ../Modules/main.c:810
#35 0x00000000004b0f40 in main (argc=2, argv=0x7fffffffdb48) at ../Programs/python.c:69
(gdb) py-list
 104    
 105        def AllocateTensors(self):
 106            return _tensorflow_wrap_interpreter_wrapper.InterpreterWrapper_AllocateTensors(self)
 107    
 108        def Invoke(self):
>109            return _tensorflow_wrap_interpreter_wrapper.InterpreterWrapper_Invoke(self)
 110    
 111        def InputIndices(self):
 112            return _tensorflow_wrap_interpreter_wrapper.InterpreterWrapper_InputIndices(self)
 113    
 114        def OutputIndices(self):
(gdb) py-bt
Traceback (most recent call first):
  <built-in method InterpreterWrapper_Invoke of module object at remote 0x7fffc6a4ed68>
  File "/home/alex/.local/lib/python3.6/site-packages/tensorflow/lite/python/interpreter_wrapper/tensorflow_wrap_interpreter_wrapper.py", line 109, in Invoke
    return _tensorflow_wrap_interpreter_wrapper.InterpreterWrapper_Invoke(self)
  File "/home/alex/.local/lib/python3.6/site-packages/tensorflow/lite/python/interpreter.py", line 304, in invoke
    self._interpreter.Invoke()
  File "tfLiteinference.py", line 22, in <module>
    interpreter.invoke()
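
For reference, a minimal diagnostic sketch (assuming the same converted_model_1_640_480.tflite as above) that dumps the shape of every tensor via get_tensor_details(); this can help narrow down which SUB operands end up with the mismatched broadcast shapes the backtrace aborts in:

import tensorflow as tf

# Load the converted model and inspect all tensor shapes (diagnostic only).
interpreter = tf.lite.Interpreter(model_path="converted_model_1_640_480.tflite")
interpreter.allocate_tensors()

# Print index, name and shape of every tensor so the SUB operands can be compared.
for t in interpreter.get_tensor_details():
    print(t['index'], t['name'], t['shape'])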
lllAlexanderlll

comment created time in 2 months

issue comment tensorflow/tensorflow

interpreter.invoke() of tflite model causes Aborted (core dumped) despite successful tflite conversion under tensorflow version 1.14.0

Sure:

Reading symbols from python3...Reading symbols from /usr/lib/debug/.build-id/28/7763e881de67a59b31b452dd0161047f7c0135.debug...done.
done.
Starting program: /usr/bin/python3 tfLiteinference.py
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
[New Thread 0x7ffff412a700 (LWP 15400)]
[New Thread 0x7ffff3929700 (LWP 15401)]
[New Thread 0x7fffef128700 (LWP 15402)]
[Thread 0x7fffef128700 (LWP 15402) exited]
[Thread 0x7ffff3929700 (LWP 15401) exited]
[Thread 0x7ffff412a700 (LWP 15400) exited]
2.0.0
INFO: Initialized TensorFlow Lite runtime.
[{'name': 'image', 'index': 54, 'shape': array([  1, 640, 480,   1], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0)}]
[{'name': 'descriptor', 'index': 52, 'shape': array([   1, 4096], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0)}]
(1, 640, 480, 1)
[New Thread 0x7fffef128700 (LWP 15408)]
[New Thread 0x7ffff3929700 (LWP 15409)]
[New Thread 0x7ffff412a700 (LWP 15410)]
[New Thread 0x7fffc0e53700 (LWP 15411)]

Thread 1 "python3" received signal SIGABRT, Aborted.
__GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
51      ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
#1  0x00007ffff7a24801 in __GI_abort () at abort.c:79
#2  0x00007fffc5e995c9 in void tflite::ops::builtin::sub::EvalSub<(tflite::ops::builtin::sub::KernelType)2>(TfLiteContext*, TfLiteNode*, TfLiteSubParams*, tflite::ops::builtin::sub::OpData const*, TfLiteTensor const*, TfLiteTensor const*, TfLiteTensor*) () from /usr/local/lib/python3.6/dist-packages/tensorflow_core/lite/python/interpreter_wrapper/_tensorflow_wrap_interpreter_wrapper.so
#3  0x00007fffc5e9be65 in TfLiteStatus tflite::ops::builtin::sub::Eval<(tflite::ops::builtin::sub::KernelType)2>(TfLiteContext*, TfLiteNode*) ()
   from /usr/local/lib/python3.6/dist-packages/tensorflow_core/lite/python/interpreter_wrapper/_tensorflow_wrap_interpreter_wrapper.so
#4  0x00007fffc5ed8577 in tflite::Subgraph::Invoke() () from /usr/local/lib/python3.6/dist-packages/tensorflow_core/lite/python/interpreter_wrapper/_tensorflow_wrap_interpreter_wrapper.so
#5  0x00007fffc5eda22a in tflite::Interpreter::Invoke() () from /usr/local/lib/python3.6/dist-packages/tensorflow_core/lite/python/interpreter_wrapper/_tensorflow_wrap_interpreter_wrapper.so
#6  0x00007fffc5da7f58 in tflite::interpreter_wrapper::InterpreterWrapper::Invoke() () from /usr/local/lib/python3.6/dist-packages/tensorflow_core/lite/python/interpreter_wrapper/_tensorflow_wrap_interpreter_wrapper.so
#7  0x00007fffc5da6054 in _wrap_InterpreterWrapper_Invoke () from /usr/local/lib/python3.6/dist-packages/tensorflow_core/lite/python/interpreter_wrapper/_tensorflow_wrap_interpreter_wrapper.so
#8  0x000000000050a8af in _PyCFunction_FastCallDict (kwargs=<optimized out>, nargs=<optimized out>, args=<optimized out>, func_obj=<built-in method InterpreterWrapper_Invoke of module object at remote 0x7fffc61338b8>)
    at ../Objects/methodobject.c:234
#9  _PyCFunction_FastCallKeywords (kwnames=<optimized out>, nargs=<optimized out>, stack=<optimized out>, func=<optimized out>) at ../Objects/methodobject.c:294
#10 call_function.lto_priv () at ../Python/ceval.c:4851
#11 0x000000000050c5b9 in _PyEval_EvalFrameDefault () at ../Python/ceval.c:3335
#12 0x0000000000509d48 in PyEval_EvalFrameEx (throwflag=0, 
    f=Frame 0x7fffc613a048, for file /usr/local/lib/python3.6/dist-packages/tensorflow_core/lite/python/interpreter_wrapper/tensorflow_wrap_interpreter_wrapper.py, line 109, in Invoke (self=<InterpreterWrapper(this=<SwigPyObject at remote 0x7fffc6210db0>) at remote 0x7ffff67ed438>)) at ../Python/ceval.c:754
#13 _PyFunction_FastCall (globals=<optimized out>, nargs=140736516563016, args=<optimized out>, co=<optimized out>) at ../Python/ceval.c:4933
#14 fast_function.lto_priv () at ../Python/ceval.c:4968
#15 0x000000000050aa7d in call_function.lto_priv () at ../Python/ceval.c:4872
#16 0x000000000050c5b9 in _PyEval_EvalFrameDefault () at ../Python/ceval.c:3335
#17 0x0000000000509d48 in PyEval_EvalFrameEx (throwflag=0, 
    f=Frame 0x7fffc612d398, for file /usr/local/lib/python3.6/dist-packages/tensorflow_core/lite/python/interpreter.py, line 453, in invoke (self=<Interpreter(_interpreter=<InterpreterWrapper(this=<SwigPyObject at remote 0x7fffc6210db0>) at remote 0x7ffff67ed438>, _delegates=[]) at remote 0x7ffff67ed400>)) at ../Python/ceval.c:754
#18 _PyFunction_FastCall (globals=<optimized out>, nargs=140736516510616, args=<optimized out>, co=<optimized out>) at ../Python/ceval.c:4933
#19 fast_function.lto_priv () at ../Python/ceval.c:4968
#20 0x000000000050aa7d in call_function.lto_priv () at ../Python/ceval.c:4872
#21 0x000000000050c5b9 in _PyEval_EvalFrameDefault () at ../Python/ceval.c:3335
#22 0x0000000000508245 in PyEval_EvalFrameEx (throwflag=0, f=Frame 0xae0b68, for file tfLiteinference.py, line 22, in <module> ()) at ../Python/ceval.c:754
#23 _PyEval_EvalCodeWithName.lto_priv.1836 () at ../Python/ceval.c:4166
#24 0x000000000050b403 in PyEval_EvalCodeEx (closure=0x0, kwdefs=0x0, defcount=0, defs=0x0, kwcount=0, kws=0x0, argcount=0, args=0x0, locals=<optimized out>, globals=<optimized out>, _co=<optimized out>) at ../Python/ceval.c:4187
#25 PyEval_EvalCode (co=<optimized out>, globals=<optimized out>, locals=<optimized out>) at ../Python/ceval.c:731
#26 0x0000000000635222 in run_mod () at ../Python/pythonrun.c:1025
#27 0x00000000006352d7 in PyRun_FileExFlags () at ../Python/pythonrun.c:978
#28 0x0000000000638a8f in PyRun_SimpleFileExFlags () at ../Python/pythonrun.c:419
#29 0x0000000000638c65 in PyRun_AnyFileExFlags () at ../Python/pythonrun.c:81
#30 0x0000000000639631 in run_file (p_cf=0x7fffffffd9dc, filename=<optimized out>, fp=<optimized out>) at ../Modules/main.c:340
#31 Py_Main () at ../Modules/main.c:810
#32 0x00000000004b0f40 in main (argc=2, argv=0x7fffffffdbd8) at ../Programs/python.c:69
(gdb) py-bt
Traceback (most recent call first):
  <built-in method InterpreterWrapper_Invoke of module object at remote 0x7fffc61338b8>
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/lite/python/interpreter_wrapper/tensorflow_wrap_interpreter_wrapper.py", line 109, in Invoke
    return _tensorflow_wrap_interpreter_wrapper.InterpreterWrapper_Invoke(self)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/lite/python/interpreter.py", line 453, in invoke
    self._interpreter.Invoke()
  File "tfLiteinference.py", line 22, in <module>
    interpreter.invoke()

The TensorFlow version used during inference is 2.0.0 (as shown in the output).

Thanks for helping!

lllAlexanderlll

comment created time in 2 months

issue opened tensorflow/tensorflow

interpreter.invoke() of tflite model causes Aborted (core dumped) despite successful tflite conversion under tensorflow version 1.14.0

Please make sure that this is a bug. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Kubuntu 18.04
  • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:
  • TensorFlow installed from (source or binary): pip3
  • TensorFlow version (use command below): 1.14.0
  • Python version: 3.6.9
  • Bazel version (if compiling from source):
  • GCC/Compiler version (if compiling from source):
  • CUDA/cuDNN version:
  • GPU model and memory:

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with: 1. TF 1.0: python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)" 2. TF 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"

Describe the current behavior
The mobilenetvlad model was successfully converted to a TF Lite model with the sample code provided on the TensorFlow website. To achieve this I added the input_shapes parameter to the from_saved_model call: converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir, input_shapes={"image": [1, 640, 480, None]})

But if inference is tested with the corresponding sample code from the TensorFlow website, the program aborts and the core is dumped.

Describe the expected behavior
Successful inference of a [1, 4096]-sized image descriptor.

Code to reproduce the issue

  1. Download the mobilenetvlad model
  2. With TensorFlow 1.14.0 (TF 2 will not work): convert the saved_model to tflite by setting an input shape (it was set to [None, None, None, 1] in the model description, but should be 640x480 according to the paper.pdf; an alternative runtime-resize sketch follows step 3 below):

     import tensorflow as tf
     saved_model_dir = 'hierarchical_loc/global-loc/models/mobilenetvlad_depth-0.35'
     # only change of code besides filenames:
     converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir, input_shapes={"image": [1, 640, 480, None]})
     tflite_model = converter.convert()
     open("converted_model_1_640_480.tflite", "wb").write(tflite_model)
  3. Run the inference with the corresponding sample code from the TensorFlow website:

     import numpy as np
     import tensorflow as tf

     print(tf.version)

     # Load TFLite model and allocate tensors.
     interpreter = tf.lite.Interpreter(model_path="converted_model_1_640_480.tflite")
     interpreter.allocate_tensors()

     # Get input and output tensors.
     input_details = interpreter.get_input_details()
     output_details = interpreter.get_output_details()
     print(input_details)
     print(output_details)

     # Test model on random input data.
     input_shape = input_details[0]['shape']
     input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)
     print(input_data.shape)
     interpreter.set_tensor(input_details[0]['index'], input_data)

     interpreter.invoke()

     output_data = interpreter.get_tensor(output_details[0]['index'])
     print(output_data)
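
For comparison, a minimal, untested sketch of fixing the input shape at load time instead of at conversion time (assumptions: a model converted without the input_shapes override, hypothetically named converted_model_dynamic.tflite, and a graph whose ops support resizing):

import numpy as np
import tensorflow as tf

# Hypothetical file: a model converted WITHOUT forcing input_shapes at conversion time.
interpreter = tf.lite.Interpreter(model_path="converted_model_dynamic.tflite")

# Fix the 'image' input to a concrete shape at load time, then allocate tensors.
input_index = interpreter.get_input_details()[0]['index']
interpreter.resize_tensor_input(input_index, [1, 640, 480, 1])
interpreter.allocate_tensors()

input_data = np.random.random_sample([1, 640, 480, 1]).astype(np.float32)
interpreter.set_tensor(input_index, input_data)
interpreter.invoke()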

Other info / logs

Output of the tflite conversion:

/home/alex/Documents/VirtualEnvironments/tensorflow_1_15/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/alex/Documents/VirtualEnvironments/tensorflow_1_15/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/alex/Documents/VirtualEnvironments/tensorflow_1_15/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/alex/Documents/VirtualEnvironments/tensorflow_1_15/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/alex/Documents/VirtualEnvironments/tensorflow_1_15/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/alex/Documents/VirtualEnvironments/tensorflow_1_15/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
/home/alex/Documents/VirtualEnvironments/tensorflow_1_15/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/alex/Documents/VirtualEnvironments/tensorflow_1_15/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/alex/Documents/VirtualEnvironments/tensorflow_1_15/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/alex/Documents/VirtualEnvironments/tensorflow_1_15/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/alex/Documents/VirtualEnvironments/tensorflow_1_15/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/alex/Documents/VirtualEnvironments/tensorflow_1_15/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
1.14.0
INFO: Initialized TensorFlow Lite runtime.
[{'name': 'image', 'index': 54, 'shape': array([  1, 640, 480,   1], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0)}]
[{'name': 'descriptor', 'index': 52, 'shape': array([   1, 4096], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0)}]
(1, 640, 480, 1)
Aborted (core dumped)

Output of gdb debugging (gdb -ex r --args python3 tfLiteinference.py --> Aborted --> py-list):

Thread 1 "python3" received signal SIGABRT, Aborted.
__GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
51      ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) py-list
 104
 105        def AllocateTensors(self):
 106            return _tensorflow_wrap_interpreter_wrapper.InterpreterWrapper_AllocateTensors(self)
 107
 108        def Invoke(self):
>109            return _tensorflow_wrap_interpreter_wrapper.InterpreterWrapper_Invoke(self)
 110
 111        def InputIndices(self):
 112            return _tensorflow_wrap_interpreter_wrapper.InterpreterWrapper_InputIndices(self)
 113
 114        def OutputIndices(self):

I hope someone can help me with this.

Cheers, Alex

created time in 2 months

started probcomp/Gen

started time in 3 months

started Aircoookie/WLED

started time in 4 months

issue comment openimaj/openimaj

ERROR: Failed to resolve: com.lowagie:itext:2.1.7.js6

After following the suggestion in https://stackoverflow.com/a/55812178 and changing the build.gradle accordingly,

repositories {
    mavenCentral()
    maven { url "http://jaspersoft.jfrog.io/jaspersoft/third-party-ce-artifacts/" }
    maven {
        url "http://maven.openimaj.org"
    }
}

dependencies {
    implementation fileTree(dir: 'libs', include: ['*.jar'])
    implementation 'androidx.appcompat:appcompat:1.1.0'
    implementation 'androidx.constraintlayout:constraintlayout:1.1.3'
    testImplementation 'junit:junit:4.12'
    androidTestImplementation 'androidx.test.ext:junit:1.1.1'
    androidTestImplementation 'androidx.test.espresso:espresso-core:3.2.0'
    annotationProcessor 'org.apache.logging.log4j:log4j-core:2.11.0'
    implementation('org.openimaj:image-processing:1.3.9') {
        exclude group: 'org.apache.xmlgraphics'
        exclude group: 'xml-apis'
    }
}

the error changes to: "More than one file was found with OS independent path 'org/jfree/chart/plot/LocalizationBundle_pl.properties'"

Does anyone have suggestions on how to solve this error?

afinas-wii

comment created time in 4 months

issue comment openimaj/openimaj

ERROR: Failed to resolve: com.lowagie:itext:2.1.7.js6

Hi, the same happens with OpenIMAJ version 1.3.9:

build.gradle:

apply plugin: 'com.android.application'

android {
    compileSdkVersion 28
    defaultConfig {
        applicationId "com.example.testApp"
        minSdkVersion 22
        targetSdkVersion 28
        versionCode 1
        versionName "1.0"
        testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
    }
    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
        }
    }
    packagingOptions {
        exclude 'META-INF/DEPENDENCIES'
        exclude 'META-INF/LICENSE'
        exclude 'META-INF/LICENSE.txt'
        exclude 'META-INF/license.txt'
        exclude 'META-INF/NOTICE'
        exclude 'META-INF/NOTICE.txt'
        exclude 'META-INF/notice.txt'
        exclude 'META-INF/ASL2.0'
    }
}

repositories {
    mavenCentral()
    maven {
        url "http://maven.openimaj.org"
    }
}

dependencies {
    implementation fileTree(dir: 'libs', include: ['*.jar'])
    implementation 'androidx.appcompat:appcompat:1.1.0'
    implementation 'androidx.constraintlayout:constraintlayout:1.1.3'
    testImplementation 'junit:junit:4.12'
    androidTestImplementation 'androidx.test.ext:junit:1.1.1'
    androidTestImplementation 'androidx.test.espresso:espresso-core:3.2.0'

    implementation('org.openimaj:image-processing:1.3.9') {
        exclude group: 'org.apache.xmlgraphics'
        exclude group: 'xml-apis'
    }
}

@afinas-wii could you resolve that problem?

Cheers, Alex

afinas-wii

comment created time in 4 months
