FL16 model run on GPU

System information

  • Host OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
  • TensorFlow installed from (source or binary): binary
  • TensorFlow version (commit SHA if source): 1.15
  • Target platform (e.g. Arm Mbed OS, Arduino Nano 33 etc.): Android 9 (API 28), Mali-T864 GPU

Describe the problem

We tried to run a post-training-quantized (float16) model on a robot with the GPU delegate, but it fails to run on the GPU even after we graph-transformed the non-GPU-supported operators out of it. The logs are attached. Interestingly, if we do not quantize the model to float16, all of its operators run on the GPU successfully. Netron shows that many DEQUANTIZE operators are added to the graph after we use the TFLite converter to quantize the model to float16. So what should we do so that the quantized float16 model runs entirely on the GPU?
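For context, the conversion step was along these lines: a minimal sketch of standard post-training float16 quantization using the TF 2.x converter API, with a toy stand-in model (the actual robot model and input shape are not reproduced here):

```python
import tensorflow as tf

# Toy stand-in model; the real model and its input shape differ.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])

# Post-training float16 quantization (TF 2.x API; in TF 1.15 the
# supported type is spelled tf.lite.constants.FLOAT16).
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
tflite_fp16 = converter.convert()

with open("model_fp16.tflite", "wb") as f:
    f.write(tflite_fp16)
```

With these settings the converter stores weights as float16 and inserts DEQUANTIZE ops so the model can still run on a float32 CPU path, which matches what Netron shows for our model.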

One more question: we found the parameter SetAllowFp16PrecisionForFp32 in the TFLite C++ API. What is the difference between 1) setting it to true and using a float32 model, 2) setting it to true and using a float16 model, 3) setting it to false and using a float32 model, and 4) setting it to false and using a float16 model?

Many thanks.

Model is uploaded in: Inputs are images of size 1933213

Please provide the exact sequence of commands/steps when you ran into the problem

INFO: Initialized TensorFlow Lite runtime.
INFO: Created TensorFlow Lite delegate for GPU.
ERROR: Next operations are not supported by GPU delegate:
CONV_2D: Expected 1 input tensor(s), but node has 3 runtime input(s).
DEPTHWISE_CONV_2D: Expected 1 input tensor(s), but node has 3 runtime input(s).
DEQUANTIZE: Operation is not supported.
First 0 operations will run on the GPU, and the remaining 198 on the CPU.


Answer questions rxiang040

The model is faster on the GPU. We actually tested the inference time of our float32 model with the last layer running on either the GPU or the CPU, and the difference is huge: with the whole model on the GPU, inference takes 195 ms/frame; with the last layer on the CPU and the rest on the GPU, it takes 162 ms/frame.

So for an int8 model, will it run faster on the GPU?
