DeepakG19/ProjectOxford-ClientSDK

The official home for the Microsoft Project Oxford client SDK and samples

issue comment tensorflow/tensorflow

null pointer dereference Error in TF2.3.0 with runforMultipleInputOutput

Can you attach the code you're using to allocate your inputs/outputs, and their respective (Java) buffers?

float[] image_ = getPixelData(image);
Log.d(TAG, "Image Float Array " + image_.length + " : " + image_[0]);
ByteBuffer imageBuffer = ByteBuffer.allocateDirect(4 * modelH * modelW * 3);
imageBuffer.rewind();
imageBuffer.order(ByteOrder.nativeOrder());
for (int i = 0; i < image_.length; i++) {
    imageBuffer.putFloat((float) image_[i]);
}
Log.d(TAG, "Image Buffer Array " + imageBuffer.getFloat(0) + " : " + imageBuffer.capacity());

private static float[] getPixelData(Bitmap imageBitmap) {
    if (imageBitmap == null) {
        return null;
    }
    int width = imageBitmap.getWidth();
    int height = imageBitmap.getHeight();
    int inputSize = 256;
    int[] pixels = new int[width * height];
    float[] floatValues = new float[width * height * 3];
    imageBitmap.getPixels(pixels, 0, imageBitmap.getWidth(), 0, 0, imageBitmap.getWidth(), imageBitmap.getHeight());
    int pixel = 0, k = 0;
    for (int i = 0; i < height; ++i) {
        for (int j = 0; j < width; ++j) {
            final int val = pixels[pixel++];
            floatValues[k++] = (float) ((val >> 16) & 0xFF); // - (float) 123.68;
            floatValues[k++] = (float) ((val >> 8) & 0xFF);  // - (float) 116.779;
            floatValues[k++] = (float) (val & 0xFF);         // - (float) 103.939;
        }
    }
    return floatValues;
}
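For comparison, a minimal sketch of the same preprocessing in Python (Pillow and the file path are assumptions); it produces the same R, G, B float order as the Java loop above:

import numpy as np
from PIL import Image

def get_pixel_data(path, size=256):
    # Load, force RGB, and resize to the model's input size.
    img = Image.open(path).convert("RGB").resize((size, size))
    # float32 array of shape (1, size, size, 3), values 0..255.
    return np.asarray(img, dtype=np.float32)[None, ...]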

DeepakG19

comment created time in 20 days

issue comment tensorflow/tensorflow

null pointer dereference Error in TF2.3.0 with runforMultipleInputOutput

Can you provide a link to the model? It's possible this model has dynamically shaped outputs, meaning the output shape depends on the input values; in that case, the output tensor's shape can't be known until you run evaluation. We're working on making this easier to support with Java, where you can do something like:

Feed Input tensor
Run inference
Fetch output tensor

Rather than providing input/output buffers in a single invocation.
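For reference, the TFLite Python interpreter already supports this feed/run/fetch pattern; a minimal sketch, where the model path and input shape are illustrative assumptions:

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

# Feed input tensor.
input_details = interpreter.get_input_details()
image = np.zeros((1, 256, 256, 3), dtype=np.float32)
interpreter.set_tensor(input_details[0]["index"], image)

# Run inference.
interpreter.invoke()

# Fetch output tensor: with dynamically shaped outputs, the final
# shape is only known after invoke(), so re-read the details here.
output_details = interpreter.get_output_details()
result = interpreter.get_tensor(output_details[0]["index"])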

The model is custom-made and may not be possible to share. Further, as I know how the network determines the output shape (equal to the input image), I am allocating a buffer of that size before the inference call.

PS: the same model works fine in Python.

DeepakG19

comment created time in 25 days

issue closed tensorflow/tensorflow

Didnt find op for builtin opcode 'RESIZE_NEAREST_NEIGHBOR' version '3'

Please make sure that this is a bug. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Android
  • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: Samsung A51
  • TensorFlow installed from (source or binary): Maven
  • TensorFlow version (use command below): implementation('org.tensorflow:tensorflow-lite:2.3.0'){changing=true}
  • Python version: n/a
  • Bazel version (if compiling from source): n/a
  • GCC/Compiler version (if compiling from source): n/a
  • CUDA/cuDNN version: n/a
  • GPU model and memory: n/a

Describe the current behavior

The .tflite was generated using tf.lite.TFLiteConverter.from_saved_model (tf_version = 2.3.0). The Python implementation for inference works without errors.

Using the same model on Android for inference gives the error: can't create interpreter: Didn't find op for builtin opcode 'RESIZE_NEAREST_NEIGHBOR' version '3'. build.gradle: implementation('org.tensorflow:tensorflow-lite:2.3.0'){changing=true}

Describe the expected behavior

The Android code should run.

closed time in a month

DeepakG19

issue comment tensorflow/tensorflow

Didnt find op for builtin opcode 'RESIZE_NEAREST_NEIGHBOR' version '3'

As suggested by @amahendrakar, this issue is closed and the newly submitted issue can be followed up from there.

DeepakG19

comment created time in a month

issue opened tensorflow/tensorflow

ca

Please make sure that this is a bug. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Android
  • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: Samsung A51
  • TensorFlow installed from (source or binary): Maven
  • TensorFlow version (use command below): implementation('org.tensorflow:tensorflow-lite:2.3.0'){changing=true}
  • Python version: n/a
  • Bazel version (if compiling from source): n/a
  • GCC/Compiler version (if compiling from source): n/a
  • CUDA/cuDNN version: n/a
  • GPU model and memory: n/a

Describe the current behavior

The .tflite was generated using tf.lite.TFLiteConverter.from_saved_model (tf_version = 2.3.0). The Python implementation for inference works without errors.

Using the same model on Android for inference gives this error:

2020-09-24 07:31:59.732 17121-17121/? A/DEBUG: Cause: null pointer dereference

Debugging log:

Model input tensor details:

2020-09-24 07:31:59.284 16761-16761/Processor: hws2:0 2 Input SHAPE- 0
2020-09-24 07:31:59.285 16761-16761/Processor: mask:0 1 1 1 1 Input SHAPE- 1
2020-09-24 07:31:59.285 16761-16761/Processor: image:0 1 1 1 3 Input SHAPE- 2
2020-09-24 07:31:59.285 16761-16761/Processor: hws:0 2 Input SHAPE- 3

2020-09-24 07:31:59.285 16761-16761/Processor: strided_slice_1:0 1 1 3 Output SHAPE- 0

Reallocating the input tensors after:

tflite.resizeInput(1, dim);
tflite.resizeInput(2, dim);
tflite.allocateTensors();

Model input tensor details after reallocation:

2020-09-24 07:31:59.286 16761-16761/Processor: hws2:0 2 Input SHAPE- 0
2020-09-24 07:31:59.286 16761-16761/Processor: mask:0 1 256 256 1 Input SHAPE- 1
2020-09-24 07:31:59.286 16761-16761/Processor: image:0 1 256 256 3 Input SHAPE- 2
2020-09-24 07:31:59.286 16761-16761/Processor: hws:0 2 Input SHAPE- 3

2020-09-24 07:31:59.286 16761-16761/Processor: strided_slice_1:0 1 1 3 Output SHAPE- 0

Notice: the output shape doesn't change, which I assume is the correct behavior.

On running: tflite.runForMultipleInputsOutputs(inputs, outputs);

inputs and outputs are properly initialized and non-null. This error comes:

2020-09-24 07:31:59.442 16761-16761/com.package.deepak A/libc: Fatal signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x0 in tid 16761 (service.deepak), pid 16761 (service.deepak)

2020-09-24 07:31:59.731 17121-17121/? A/DEBUG: *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
2020-09-24 07:31:59.732 17121-17121/? A/DEBUG: Build fingerprint: Samsung-A50
2020-09-24 07:31:59.732 17121-17121/? A/DEBUG: Revision: '2'
2020-09-24 07:31:59.732 17121-17121/? A/DEBUG: ABI: 'arm64'
2020-09-24 07:31:59.732 17121-17121/? A/DEBUG: pid: 16761, tid: 16761, name: service.deepak >>> com.package.deepak <<<
2020-09-24 07:31:59.732 17121-17121/? A/DEBUG: signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x0
2020-09-24 07:31:59.732 17121-17121/? A/DEBUG: Cause: null pointer dereference

2020-09-24 07:41:50.415 18015-18015/? A/DEBUG: backtrace:
    #00 pc 000000000001dd6c /system/lib64/libc.so (memcpy+124)
    #01 pc 0000000000133560 /data/app/com.package.deepak-fuEaz_w7MUZ9fy7vI4iAfA==/lib/arm64/libtensorflowlite_jni.so
    #02 pc 00000000001331e8 /data/app/com.package.deepak-fuEaz_w7MUZ9fy7vI4iAfA==/lib/arm64/libtensorflowlite_jni.so
    #03 pc 00000000001b269c /data/app/com.package.deepak-fuEaz_w7MUZ9fy7vI4iAfA==/lib/arm64/libtensorflowlite_jni.so
    #04 pc 00000000001b546c /data/app/com.package.deepak-fuEaz_w7MUZ9fy7vI4iAfA==/lib/arm64/libtensorflowlite_jni.so
    #05 pc 0000000000046738 /data/app/com.package.deepak-fuEaz_w7MUZ9fy7vI4iAfA==/lib/arm64/libtensorflowlite_jni.so (Java_org_tensorflow_lite_NativeInterpreterWrapper_run+32)
    #06 pc 0000000000563be0 /system/lib64/libart.so (art_quick_generic_jni_trampoline+144)
    #07 pc 000000000055ae4c /system/lib64/libart.so (art_quick_invoke_static_stub+604)
    #08 pc 00000000000d04e8 /system/lib64/libart.so (art::ArtMethod::Invoke(art::Thread*, unsigned int*, unsigned int, art::JValue*, char const*)+232)
    #09 pc 00000000002838ac /system/lib64/libart.so (art::interpreter::ArtInterpreterToCompiledCodeBridge(art::Thread*, art::ArtMethod*, art::ShadowFrame*, unsigned short, art::JValue*)+344)
    #10 pc 000000000027d8b4 /system/lib64/libart.so (bool art::interpreter::DoCall<false, false>(art::ArtMethod*, art::Thread*, art::ShadowFrame&, art::Instruction const*, unsigned short, art::JValue*)+968)
    #11 pc 000000000052b750 /system/lib64/libart.so (MterpInvokeStatic+204)
    #12 pc 000000000054d394 /system/lib64/libart.so (ExecuteMterpImpl+14612)
    #13 pc 000000000021fcf4 /dev/ashmem/dalvik-classes.dex extracted in memory from /data/app/com.package.deepak-fuEaz_w7MUZ9fy7vI4iAfA==/base.apk_17926_17926 (deleted) (org.tensorflow.lite.NativeInterpreterWrapper.run+156)
    #14 pc 00000000002575b8 /system/lib64/libart.so (_ZN3art11interpreterL7ExecuteEPNS_6ThreadERKNS_20CodeItemDataAccessorERNS_11ShadowFrameENS_6JValueEb.llvm.1037722801+488)
    #15 pc 000000000025d0ac /system/lib64/libart.so (art::interpreter::ArtInterpreterToInterpreterBridge(art::Thread*, art::CodeItemDataAccessor const&, art::ShadowFrame*, art::JValue*)+216)
    #16 pc 000000000027d898 /system/lib64/libart.so (bool art::interpreter::DoCall<false, false>(art::ArtMethod*, art::Thread*, art::ShadowFrame&, art::Instruction const*, unsigned short, art::JValue*)+940)
    #17 pc 000000000052a24c /system/lib64/libart.so (MterpInvokeVirtual+588)
    #18 pc 000000000054d214 /system/lib64/libart.so (ExecuteMterpImpl+14228)
    #19 pc 000000000021f2fe /dev/ashmem/dalvik-classes.dex extracted in memory from /data/app/com.package.deepak-fuEaz_w7MUZ9fy7vI4iAfA==/base.apk_17926_17926 (deleted) (org.tensorflow.lite.Interpreter.runForMultipleInputsOutputs+10)
    #20 pc 00000000002575b8 /system/lib64/libart.so (_ZN3art11interpreterL7ExecuteEPNS_6ThreadERKNS_20CodeItemDataAccessorERNS_11ShadowFrameENS_6JValueEb.llvm.1037722801+488)
    #21 pc 000000000051aae0 /system/lib64/libart.so (artQuickToInterpreterBridge+1020)
    #22 pc 0000000000563cfc /system/lib64/libart.so (art_quick_to_interpreter_bridge+92)
    #23 pc 0000000000019ba4 /dev/ashmem/dalvik-jit-code-cache_17926_17926 (deleted) (com.package.deepak.Processor.process+12052)
    #24 pc 000000000055aedc /system/lib64/libart.so (art_quick_osr_stub+44)

Describe the expected behavior

The Android code should run.

PS: Reproducible code/model is not available due to confidentiality reasons.

created time in a month

issue comment tensorflow/tensorflow

Didnt find op for builtin opcode 'RESIZE_NEAREST_NEIGHBOR' version '3'

This is weird. @DeepakG19 could you try the latest nightly for the aar and see if it works on Android?

Able to resolve the error. The issue was a third-party module which implicitly pulled in tf2.2.0 and was overwriting the dependency added via Maven in the main module. Turning this module off resolved the above error. But I am facing some new errors:

2020-09-24 07:31:59.284 16761-16761/Processor: hws2:0 2 Input SHAPE- 0
2020-09-24 07:31:59.285 16761-16761/Processor: mask:0 1 1 1 1 Input SHAPE- 1
2020-09-24 07:31:59.285 16761-16761/Processor: image:0 1 1 1 3 Input SHAPE- 2
2020-09-24 07:31:59.285 16761-16761/Processor: hws:0 2 Input SHAPE- 3

2020-09-24 07:31:59.285 16761-16761/Processor: strided_slice_1:0 1 1 3 Output SHAPE- 0

After:

tflite.resizeInput(1, dim);
tflite.resizeInput(2, dim);
tflite.allocateTensors();

2020-09-24 07:31:59.286 16761-16761/Processor: hws2:0 2 Input SHAPE- 0
2020-09-24 07:31:59.286 16761-16761/Processor: mask:0 1 256 256 1 Input SHAPE- 1
2020-09-24 07:31:59.286 16761-16761/Processor: image:0 1 256 256 3 Input SHAPE- 2
2020-09-24 07:31:59.286 16761-16761/Processor: hws:0 2 Input SHAPE- 3

2020-09-24 07:31:59.286 16761-16761/Processor: strided_slice_1:0 1 1 3 Output SHAPE- 0

Notice: the output shape doesn't change, which I assume is the correct behavior.

On running: tflite.runForMultipleInputsOutputs(inputs, outputs);

This error comes:

2020-09-24 07:31:59.442 16761-16761/com.package.deepak A/libc: Fatal signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x0 in tid 16761 (service.deepak), pid 16761 (service.deepak)

2020-09-24 07:31:59.731 17121-17121/? A/DEBUG: *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
2020-09-24 07:31:59.732 17121-17121/? A/DEBUG: Build fingerprint: Samsung-A50
2020-09-24 07:31:59.732 17121-17121/? A/DEBUG: Revision: '2'
2020-09-24 07:31:59.732 17121-17121/? A/DEBUG: ABI: 'arm64'
2020-09-24 07:31:59.732 17121-17121/? A/DEBUG: pid: 16761, tid: 16761, name: service.deepak >>> com.package.deepak <<<
2020-09-24 07:31:59.732 17121-17121/? A/DEBUG: signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x0
2020-09-24 07:31:59.732 17121-17121/? A/DEBUG: Cause: null pointer dereference

2020-09-24 07:41:50.415 18015-18015/? A/DEBUG: backtrace:
    #00 pc 000000000001dd6c /system/lib64/libc.so (memcpy+124)
    #01 pc 0000000000133560 /data/app/com.package.deepak-fuEaz_w7MUZ9fy7vI4iAfA==/lib/arm64/libtensorflowlite_jni.so
    #02 pc 00000000001331e8 /data/app/com.package.deepak-fuEaz_w7MUZ9fy7vI4iAfA==/lib/arm64/libtensorflowlite_jni.so
    #03 pc 00000000001b269c /data/app/com.package.deepak-fuEaz_w7MUZ9fy7vI4iAfA==/lib/arm64/libtensorflowlite_jni.so
    #04 pc 00000000001b546c /data/app/com.package.deepak-fuEaz_w7MUZ9fy7vI4iAfA==/lib/arm64/libtensorflowlite_jni.so
    #05 pc 0000000000046738 /data/app/com.package.deepak-fuEaz_w7MUZ9fy7vI4iAfA==/lib/arm64/libtensorflowlite_jni.so (Java_org_tensorflow_lite_NativeInterpreterWrapper_run+32)
    #06 pc 0000000000563be0 /system/lib64/libart.so (art_quick_generic_jni_trampoline+144)
    #07 pc 000000000055ae4c /system/lib64/libart.so (art_quick_invoke_static_stub+604)
    #08 pc 00000000000d04e8 /system/lib64/libart.so (art::ArtMethod::Invoke(art::Thread*, unsigned int*, unsigned int, art::JValue*, char const*)+232)
    #09 pc 00000000002838ac /system/lib64/libart.so (art::interpreter::ArtInterpreterToCompiledCodeBridge(art::Thread*, art::ArtMethod*, art::ShadowFrame*, unsigned short, art::JValue*)+344)
    #10 pc 000000000027d8b4 /system/lib64/libart.so (bool art::interpreter::DoCall<false, false>(art::ArtMethod*, art::Thread*, art::ShadowFrame&, art::Instruction const*, unsigned short, art::JValue*)+968)
    #11 pc 000000000052b750 /system/lib64/libart.so (MterpInvokeStatic+204)
    #12 pc 000000000054d394 /system/lib64/libart.so (ExecuteMterpImpl+14612)
    #13 pc 000000000021fcf4 /dev/ashmem/dalvik-classes.dex extracted in memory from /data/app/com.package.deepak-fuEaz_w7MUZ9fy7vI4iAfA==/base.apk_17926_17926 (deleted) (org.tensorflow.lite.NativeInterpreterWrapper.run+156)
    #14 pc 00000000002575b8 /system/lib64/libart.so (_ZN3art11interpreterL7ExecuteEPNS_6ThreadERKNS_20CodeItemDataAccessorERNS_11ShadowFrameENS_6JValueEb.llvm.1037722801+488)
    #15 pc 000000000025d0ac /system/lib64/libart.so (art::interpreter::ArtInterpreterToInterpreterBridge(art::Thread*, art::CodeItemDataAccessor const&, art::ShadowFrame*, art::JValue*)+216)
    #16 pc 000000000027d898 /system/lib64/libart.so (bool art::interpreter::DoCall<false, false>(art::ArtMethod*, art::Thread*, art::ShadowFrame&, art::Instruction const*, unsigned short, art::JValue*)+940)
    #17 pc 000000000052a24c /system/lib64/libart.so (MterpInvokeVirtual+588)
    #18 pc 000000000054d214 /system/lib64/libart.so (ExecuteMterpImpl+14228)
    #19 pc 000000000021f2fe /dev/ashmem/dalvik-classes.dex extracted in memory from /data/app/com.package.deepak-fuEaz_w7MUZ9fy7vI4iAfA==/base.apk_17926_17926 (deleted) (org.tensorflow.lite.Interpreter.runForMultipleInputsOutputs+10)
    #20 pc 00000000002575b8 /system/lib64/libart.so (_ZN3art11interpreterL7ExecuteEPNS_6ThreadERKNS_20CodeItemDataAccessorERNS_11ShadowFrameENS_6JValueEb.llvm.1037722801+488)
    #21 pc 000000000051aae0 /system/lib64/libart.so (artQuickToInterpreterBridge+1020)
    #22 pc 0000000000563cfc /system/lib64/libart.so (art_quick_to_interpreter_bridge+92)
    #23 pc 0000000000019ba4 /dev/ashmem/dalvik-jit-code-cache_17926_17926 (deleted) (com.package.deepak.Processor.process+12052)
    #24 pc 000000000055aedc /system/lib64/libart.so (art_quick_osr_stub+44)

DeepakG19

comment created time in a month

issue comment tensorflow/tensorflow

Didnt find op for builtin opcode 'RESIZE_NEAREST_NEIGHBOR' version '3'

@DeepakG19, In order to expedite the trouble-shooting process, could you please provide the complete code to reproduce the issue reported here and also the dataset you are using. Thanks!

Hi, any updates?

DeepakG19

comment created time in a month

issue comment tensorflow/tensorflow

Didnt find op for builtin opcode 'RESIZE_NEAREST_NEIGHBOR' version '3'

@DeepakG19, In order to expedite the trouble-shooting process, could you please provide the complete code to reproduce the issue reported here and also the dataset you are using. Thanks!

Hi @amahendrakar, any updates?

DeepakG19

comment created time in a month

issue comment tensorflow/tensorflow

Didnt find op for builtin opcode 'RESIZE_NEAREST_NEIGHBOR' version '3'

@DeepakG19, In order to expedite the trouble-shooting process, could you please provide the complete code to reproduce the issue reported here and also the dataset you are using. Thanks!

Hi, that won't be possible due to confidentiality clauses. But the issue remains: the same .tflite works in Python but not on Android.

DeepakG19

comment created time in a month

issue opened tensorflow/tensorflow

Didnt find op for builtin opcode 'RESIZE_NEAREST_NEIGHBOR' version '3'

Please make sure that this is a bug. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Android
  • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: Samsung A51
  • TensorFlow installed from (source or binary): Maven
  • TensorFlow version (use command below): implementation('org.tensorflow:tensorflow-lite:2.3.0'){changing=true}
  • Python version: n/a
  • Bazel version (if compiling from source): n/a
  • GCC/Compiler version (if compiling from source): n/a
  • CUDA/cuDNN version: n/a
  • GPU model and memory: n/a

Describe the current behavior

The .tflite was generated using tf.lite.TFLiteConverter.from_saved_model (tf_version = 2.3.0).

Using the same model on Android for inference gives the error: can't create interpreter: Didn't find op for builtin opcode 'RESIZE_NEAREST_NEIGHBOR' version '3'. build.gradle: implementation('org.tensorflow:tensorflow-lite:2.3.0'){changing=true}

Describe the expected behavior

The Android code should run.

created time in a month

push event DeepakG19/IntroToTF

DeepakG19

commit sha 4754d09e5ba21ad6c8b8d966a4a02294c6917ba4

Update README.md

view details

push time in 2 months

PublicEvent

push event DeepakG19/IntroToTF

DeepakG19

commit sha c04168ee8ca1e80cb869f0aaa4fbaf811518e906

Created using Colaboratory

view details

push time in 2 months

push event DeepakG19/IntroToTF

DeepakG19

commit sha 471669ed054243d335aab76ae8fc4fd3e1d4ac8a

Created using Colaboratory

view details

push time in 2 months

push event DeepakG19/IntroToTF

DeepakG19

commit sha 1c3e624934dd870bff283ec334f80b913b8e1446

Created using Colaboratory

view details

push time in 2 months

issue closed tensorflow/tensorflow

TFLITE Relocate Tensor Fail

Please make sure that this is a bug. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
  • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: No
  • TensorFlow installed from (source or binary): pip install tensorflow-gpu
  • TensorFlow version (use command below): v1.14.0-rc1-22-gaf24dc91b5 1.14.0
  • Python version: 3.5.2
  • Bazel version (if compiling from source): No
  • GCC/Compiler version (if compiling from source): No
  • CUDA/cuDNN version:
  • GPU model and memory:

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:

  1. TF 1.0: python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"
  2. TF 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"

Describe the current behavior

Steps to generate the .tflite:

  1. Train a ckpt
  2. Create saved_model.pb using None,None input parameters
  3. Generate the .tflite from the saved .pb model by specifying a default input shape, because None,None doesn't work

Generated .tflite input details:

{'shape': array([ 1, 256, 256, 3], dtype=int32), 'quantization': (0.0, 0), 'dtype': <class 'numpy.float32'>, 'index': 0, 'name': 'image'}
{'shape': array([ 1, 256, 256, 1], dtype=int32), 'quantization': (0.0, 0), 'dtype': <class 'numpy.float32'>, 'index': 1, 'name': 'mask'}
{'shape': array([ 1, 64, 64, 1], dtype=int32), 'quantization': (0.0, 0), 'dtype': <class 'numpy.float32'>, 'index': 2, 'name': 'mask2'}
{'shape': array([ 1, 128, 128, 1], dtype=int32), 'quantization': (0.0, 0), 'dtype': <class 'numpy.float32'>, 'index': 3, 'name': 'mask4'}

Actual input details:

(1, 432, 492, 3) (1, 432, 492, 1) (1, 216, 246, 1) (1, 108, 123, 1)

Allocating tensors based on the actual input values:

interpreter.resize_tensor_input(input_details[0]['index'], (1, h, w, 3))
interpreter.resize_tensor_input(input_details[1]['index'], (1, h, w, 1))
interpreter.resize_tensor_input(input_details[2]['index'], (1, int(h/4), int(w/4), 1))
interpreter.resize_tensor_input(input_details[3]['index'], (1, int(h/2), int(w/2), 1))
interpreter.allocate_tensors()

ERROR:

File "testtliteNone.py", line 141, in <module>
    interpreter.allocate_tensors()
File "/homelib/python3.5/site-packages/tensorflow/lite/python/interpreter.py", line 95, in allocate_tensors
    return self._interpreter.AllocateTensors()
File "/home/lib/python3.5/site-packages/tensorflow/lite/python/interpreter_wrapper/tensorflow_wrap_interpreter_wrapper.py", line 106, in AllocateTensors
    return _tensorflow_wrap_interpreter_wrapper.InterpreterWrapper_AllocateTensors(self)
RuntimeError: tensorflow/lite/kernels/kernel_util.cc:233 d1 == d2 || d1 == 1 || d2 == 1 was not true. Node number 4 (MUL) failed to prepare.

Describe the expected behavior

Allocation should be done.

Standalone code to reproduce the issue

Provide a reproducible test case that is the bare minimum necessary to generate the problem. If possible, please share a link to Colab/Jupyter/any notebook.

Other info / logs

Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
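The d1 == d2 || d1 == 1 || d2 == 1 check in kernel_util.cc is the standard broadcasting rule. A minimal NumPy illustration (shapes taken from this report, not the actual model) of why a MUL between a conversion-time shape and a resized runtime shape fails to prepare:

import numpy as np

a = np.ones((1, 256, 256, 1), dtype=np.float32)  # shape frozen in at conversion
b = np.ones((1, 432, 492, 1), dtype=np.float32)  # shape after resizing the input

try:
    a * b
except ValueError as e:
    # 256 vs 432: the dimensions are neither equal nor 1, so the
    # broadcast check fails, just like the TFLite MUL prepare step.
    print(e)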

closed time in 2 months

DeepakG19

issue comment tensorflow/tensorflow

TFLITE Relocate Tensor Fail

Hi @DeepakG19

Passing dummy values and resizing the model to a different size during inference is not how you should specify a dynamic shape, so it not working is expected.

The issue is this part: "and since None or -1 or 1 is not a valid input". You should be able to use None as input. If the model was saved with inputs as None, then please use our Python API to do the conversion by specifying only the saved_model path. Example:

Convert the model.

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir_path")
tflite_model = converter.convert()

Save the TF Lite model.

with tf.io.gfile.GFile('model.tflite', 'wb') as f:
    f.write(tflite_model)

After doing the above, please

  1. Paste any errors you got during the conversion script above.
  2. If conversion passed and inference has an issue, then paste the inference code you tried and the error you got.
  3. Please share the model you are trying to convert.

Thanks
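One way to verify that a conversion preserved dynamic dimensions is to inspect shape_signature in the input details. A minimal sketch; the model.tflite path follows the example above, and the shape_signature field assumes a recent TF 2.x:

import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
for detail in interpreter.get_input_details():
    # 'shape' holds default placeholder dims (often 1s), while
    # 'shape_signature' keeps -1 for dimensions that were None.
    print(detail["name"], detail["shape"], detail["shape_signature"])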

Hi Karim, thanks for the quick response. The solution you provided works at the moment. The problem was using a dummy value (as you said) to save the model as tflite, which I got from https://stackoverflow.com/a/55732431.

DeepakG19

comment created time in 2 months

issue comment tensorflow/tensorflow

TFLITE Relocate Tensor Fail

The failure during allocate_tensors is because a MUL op has operands that are not broadcastable. So either you're setting inputs that are invalid, or there is some other issue.

Can you run the TF model with the same shape? Can you share the exact commands you used?

Please try to share reproduction steps, or better, a code snippet with a sample model that has the issue.

Thanks

Hi Karim,

The saved_model in pb [saved_model.pb and \variables] runs without any issue for variable-sized inputs. It was created after passing None,None as the input placeholder dimensions.

The above model is further used to create the tflite, and since None, -1, or 1 is not a valid input, we specify a dummy size, for instance 512x512. The tflite is created passing 512x512x3:512x512x1:128x128x1:256x256x1 as inputs for the respective placeholders. Any input of the same size (512x512) executes without any error. But when the input size is changed, the error in allocate_tensors appears, as mentioned earlier.

The command used to generate the tflite from the saved_model is:

tflite_convert --saved_model_dir="/None_dir/" --output_file="out.tflite" --input_shapes=1,512,512,3:1,512,512,1:1,128,128,1:1,256,256,1 --input_arrays=image,mask,mask2,mask4 --output_arrays=out --enable_v1_converter

Have also tried:

tflite_convert --saved_model_dir="/None_dir/" --output_file="out.tflite" --input_shapes=1,512,512,3:1,512,512,1:1,128,128,1:1,256,256,1 --input_arrays=image,mask,mask2,mask4 --output_arrays=out --enable_v1_converter --experimental_new_converter=true

DeepakG19

comment created time in 2 months

issue comment tensorflow/tensorflow

TFLITE Relocate Tensor Fail

Passing explicit shape during conversion doesn't guarantee you can resize input during inference, and you shouldn't try resizing the input. You will need to pass None in the input for the dimensions that are dynamic. If your TF model is constructed already with None in shape, then converting the saved model will result in TFLite model with same shape. If you had problems during conversion, then please reply with more details on what you tried and what was the error you got. Thanks
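A minimal sketch of what a model "constructed already with None in shape" can look like before conversion; the layer stack and paths here are illustrative assumptions, not the reporter's model:

import tensorflow as tf

# Height and width are left as None so they stay dynamic end to end.
inp = tf.keras.Input(shape=(None, None, 3))
out = tf.keras.layers.Conv2D(8, 3, padding="same")(inp)
tf.keras.Model(inp, out).save("saved_model_dir_path")

# Converting the SavedModel keeps the dynamic dimensions in the TFLite model.
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir_path")
tflite_model = converter.convert()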

  1. Ckpt is created from training using input 256x256
  2. saved_model.pb is generated, passing input params as None,None for all 4 inputs
  3. tflite_converter CLI is used to convert to tflite. Input shapes provided as None or -1 don't work.

Dynamic-size input exists at inference time. The error occurs while using INTERPRETER.RELOCATE_TENSOR() after changing the input tensor size.

Also, a tflite created by passing any size (i.e. 256x256 or 512x512, etc.) to tflite_converter works for input of that size but will fail when we try to relocate_tensor.

DeepakG19

comment created time in 3 months

issue comment tensorflow/tensorflow

TFLITE Relocate Tensor Fail

Passing explicit shape during conversion doesn't guarantee you can resize input during inference, and you shouldn't try resizing the input.

You will need to pass None in the input for the dimensions that are dynamic. If your TF model is constructed already with None in shape, then converting the saved model will result in TFLite model with same shape. If you had problems during conversion, then please reply with more details on what you tried and what was the error you got.

Thanks

  1. Ckpt is created from training using input 256x256
  2. saved_model.pb is generated, passing input params as None,None for all 4 inputs
  3. tflite_converter CLI is used to convert to tflite. Input shapes provided as None or -1 don't work.

Dynamic-size input exists at inference time. The error occurs while using INTERPRETER.RELOCATE_TENSOR() after changing the input tensor size.

DeepakG19

comment created time in 3 months

issue comment tensorflow/tensorflow

TFLITE Relocate Tensor Fail

sorry, I mean this:

Generated .tflite Input Details.
{'shape': array([ 1, 256, 256, 3], dtype=int32), 'quantization': (0.0, 0), 'dtype': <class 'numpy.float32'>, 'index': 0, 'name': 'image'}
{'shape': array([ 1, 256, 256, 1], dtype=int32), 'quantization': (0.0, 0), 'dtype': <class 'numpy.float32'>, 'index': 1, 'name': 'mask'}
{'shape': array([ 1, 64, 64, 1], dtype=int32), 'quantization': (0.0, 0), 'dtype': <class 'numpy.float32'>, 'index': 2, 'name': 'mask2'}
{'shape': array([ 1, 128, 128, 1], dtype=int32), 'quantization': (0.0, 0), 'dtype': <class 'numpy.float32'>, 'index': 3, 'name': 'mask4'}

Actual Input Detail
(1, 432, 492, 3) (1, 432, 492, 1) (1, 216, 246, 1) (1, 108, 123, 1)

Also, it's possible your model has some fixed shape in the middle which can break the shape propagation.

You will need to build the model with dynamic shape in the first place.

Can you try not using resize input, and see if it works?

That printed order is different; the values are passed correctly, checked. There were RESIZE ops; since downsampling is not supported, I am passing additional inputs. For upscaling I am using tf.keras.backend.resize_images, which has a scaling parameter, and hence I doubt there is any fixed size.
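For reference, a minimal sketch of that factor-based upscaling pattern (an illustrative toy graph, not the reporter's model); tf.keras.backend.resize_images scales by integer factors rather than a fixed target size:

import tensorflow as tf

# Spatial dims left as None; upscaling is expressed as a x2 factor,
# so no fixed output size is baked into the graph.
x = tf.keras.Input(shape=(None, None, 1))
y = tf.keras.backend.resize_images(x, 2, 2, data_format="channels_last")
model = tf.keras.Model(x, y)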

DeepakG19

comment created time in 3 months

issue comment tensorflow/tensorflow

TFLITE Relocate Tensor Fail

Not sure about what model you're using, but

Actual Input Detail
(1, 432, 492, 3) (1, 432, 492, 1) (1, 216, 246, 1) (1, 108, 123, 1)

Allocating Tensors based on Actual Input Values
interpreter.resize_tensor_input(input_details[0]['index'], (1,h,w,3))
interpreter.resize_tensor_input(input_details[1]['index'], (1,h,w,1))
interpreter.resize_tensor_input(input_details[2]['index'], (1,int(h/4),int(w/4),1))
interpreter.resize_tensor_input(input_details[3]['index'], (1,int(h/2),int(w/2),1))
interpreter.allocate_tensors()

seems like the resize here may have some inconsistent shape.

[screenshot attachment: 20200731_124638]

The respective size ratios match; resize should work?

DeepakG19

comment created time in 3 months

issue comment tensorflow/tensorflow

TFLITE Relocate Tensor Fail

It seems the allocation failed because of the resize input.

Can you try not resizing the input?

Thanks

How does one make a tflite model independent of input size for inference?

DeepakG19

comment created time in 3 months

issue comment tensorflow/tensorflow

TFLITE Relocate Tensor Fail

@DeepakG19 Can you please try against the latest TF (2.3 or nightly)? Thanks!

Tried with tf-nightly. Same issue.

DeepakG19

comment created time in 3 months

issue comment tensorflow/tensorflow

TFLITE Relocate Tensor Fail

@DeepakG19

Will it be possible to share a Colab link or simple standalone code with supporting files to reproduce the issue in our environment? It helps us localize the issue faster. Thanks!

Sorry Ravi, I cannot share the model file. I can share the commands that I used to generate the model files. Thanks for understanding.

DeepakG19

comment created time in 3 months

issue opened tensorflow/tensorflow

TFLITE Relocate Tensor Fail

Please make sure that this is a bug. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
  • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: No
  • TensorFlow installed from (source or binary): pip install tensorflow-gpu
  • TensorFlow version (use command below): v1.14.0-rc1-22-gaf24dc91b5 1.14.0
  • Python version: 3.5.2
  • Bazel version (if compiling from source): No
  • GCC/Compiler version (if compiling from source): No
  • CUDA/cuDNN version:
  • GPU model and memory:

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:

  1. TF 1.0: python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"
  2. TF 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"

Describe the current behavior

Steps to generate the .tflite:

  1. Train a ckpt
  2. Create saved_model.pb using None,None input parameters
  3. Generate the .tflite from the saved .pb model by specifying a default input shape, because None,None doesn't work

Generated .tflite input details:

{'shape': array([ 1, 256, 256, 3], dtype=int32), 'quantization': (0.0, 0), 'dtype': <class 'numpy.float32'>, 'index': 0, 'name': 'image'}
{'shape': array([ 1, 256, 256, 1], dtype=int32), 'quantization': (0.0, 0), 'dtype': <class 'numpy.float32'>, 'index': 1, 'name': 'mask'}
{'shape': array([ 1, 64, 64, 1], dtype=int32), 'quantization': (0.0, 0), 'dtype': <class 'numpy.float32'>, 'index': 2, 'name': 'mask2'}
{'shape': array([ 1, 128, 128, 1], dtype=int32), 'quantization': (0.0, 0), 'dtype': <class 'numpy.float32'>, 'index': 3, 'name': 'mask4'}

Actual input details:

(1, 432, 492, 3) (1, 432, 492, 1) (1, 216, 246, 1) (1, 108, 123, 1)

Allocating tensors based on the actual input values:

interpreter.resize_tensor_input(input_details[0]['index'], (1, h, w, 3))
interpreter.resize_tensor_input(input_details[1]['index'], (1, h, w, 1))
interpreter.resize_tensor_input(input_details[2]['index'], (1, int(h/4), int(w/4), 1))
interpreter.resize_tensor_input(input_details[3]['index'], (1, int(h/2), int(w/2), 1))
interpreter.allocate_tensors()

ERROR:

File "testtliteNone.py", line 141, in <module>
    interpreter.allocate_tensors()
File "/homelib/python3.5/site-packages/tensorflow/lite/python/interpreter.py", line 95, in allocate_tensors
    return self._interpreter.AllocateTensors()
File "/home/lib/python3.5/site-packages/tensorflow/lite/python/interpreter_wrapper/tensorflow_wrap_interpreter_wrapper.py", line 106, in AllocateTensors
    return _tensorflow_wrap_interpreter_wrapper.InterpreterWrapper_AllocateTensors(self)
RuntimeError: tensorflow/lite/kernels/kernel_util.cc:233 d1 == d2 || d1 == 1 || d2 == 1 was not true. Node number 4 (MUL) failed to prepare.

Describe the expected behavior

Allocation should be done.

Standalone code to reproduce the issue

Provide a reproducible test case that is the bare minimum necessary to generate the problem. If possible, please share a link to Colab/Jupyter/any notebook.

Other info / logs

Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.

created time in 3 months
