
FL16 model run on GPU

System information

  • Host OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
  • TensorFlow installed from (source or binary): binary
  • TensorFlow version (commit SHA if source): 1.15
  • Target platform (e.g. Arm Mbed OS, Arduino Nano 33 etc.): Android 9 (API 28), Mali-T864 GPU

Describe the problem

We tried to run a post-quantized (float16) model with the GPU delegate on a robot, following https://www.tensorflow.org/lite/performance/gpu and https://medium.com/tensorflow/tensorflow-model-optimization-toolkit-float16-quantization-halves-model-size-cc113c75a2fa, but it fails to run on the GPU even after we applied graph transforms to the operators in it that the GPU does not support. The logs are attached. Interestingly, if we do not quantize the model to fl16, all of its operators run on the GPU successfully. Netron shows that many 'dequantize' operators are added to the graph after we use the tflite converter to quantize the model to fl16. So what should we do to let the quantized fl16 model run entirely on the GPU?
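For reference, this is roughly how we apply the delegate on the device; a minimal sketch following the linked GPU guide, assuming the TFLite 2.x C++ API (TfLiteGpuDelegateV2Create) and a hypothetical model file name:

```cpp
#include <memory>

#include "tensorflow/lite/delegates/gpu/delegate.h"
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main() {
  // Hypothetical file name; substitute the actual fl16-quantized model.
  auto model = tflite::FlatBufferModel::BuildFromFile("model_fl16.tflite");
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);

  // Create the GPU delegate with default options; ops the delegate cannot
  // handle stay on the CPU, which is what the log further below reports.
  TfLiteGpuDelegateOptionsV2 options = TfLiteGpuDelegateOptionsV2Default();
  TfLiteDelegate* delegate = TfLiteGpuDelegateV2Create(&options);
  interpreter->ModifyGraphWithDelegate(delegate);

  interpreter->AllocateTensors();
  // ... copy inputs, interpreter->Invoke(), read outputs ...

  TfLiteGpuDelegateV2Delete(delegate);
}
```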

One more question: we found the parameter SetAllowFp16PrecisionForFp32 in the tflite C++ API. What is the difference between 1) setting it to true with a fl32 model, 2) setting it to true with a fl16 model, 3) setting it to false with a fl32 model, and 4) setting it to false with a fl16 model?
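For reference, this is where that flag lives, continuing from the sketch above (assuming the TFLite 2.x C++ API); it is a hint about compute precision, separate from how the model file stores its weights:

```cpp
// Hint that FP32 ops may be computed in FP16 where a kernel or delegate
// supports it; this concerns compute precision, not the storage format
// of the weights in the .tflite file.
interpreter->SetAllowFp16PrecisionForFp32(true);

// The GPU delegate has its own related knob (assumption: TFLite 2.x
// TfLiteGpuDelegateOptionsV2).
TfLiteGpuDelegateOptionsV2 options = TfLiteGpuDelegateOptionsV2Default();
options.is_precision_loss_allowed = 1;  // allow the GPU to do math in FP16
TfLiteDelegate* delegate = TfLiteGpuDelegateV2Create(&options);
```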

Many thanks.

The model is uploaded at: https://drive.google.com/drive/folders/18B4Wx4BEPxfptsTmIEZySwILLZNXbE2v?usp=sharing Inputs are images of size 1933213.

Please provide the exact sequence of commands/steps when you ran into the problem

INFO: Initialized TensorFlow Lite runtime.
INFO: Created TensorFlow Lite delegate for GPU.
ERROR: Next operations are not supported by GPU delegate:
CONV_2D: Expected 1 input tensor(s), but node has 3 runtime input(s).
DEPTHWISE_CONV_2D: Expected 1 input tensor(s), but node has 3 runtime input(s).
DEQUANTIZE: Operation is not supported.
First 0 operations will run on the GPU, and the remaining 198 on the CPU.


Answer (rxiang040)

@srjoglekar246 Thanks very much for your advice. The model now runs on the GPU with tflite 2.3. But we still have one problem. I have attached our model structure. The last layer of our model is a ResizeBilinear layer, and we found that this operation is much more efficient when run on the CPU. So we modified tensorflow/lite/delegates/utils.cc at line 219 by adding the following code:

    // Force node 197 (the final ResizeBilinear layer) to stay on the CPU.
    if (node_id == 197) {
      if (unsupported_details) {
        *unsupported_details =
            "197: bilinear upsampling is run on the CPU, not the GPU";
      }
      return false;
    }

197 is the node_id of the ResizeBilinear layer.
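A slightly more robust variant of the same hack (hypothetical, for the same patch location in utils.cc, assuming the registration pointer is in scope there) keys off the op type rather than a hardcoded node id, so it survives renumbering of the graph:

```cpp
// Hypothetical alternative: reject every RESIZE_BILINEAR node instead of
// pinning node 197. kTfLiteBuiltinResizeBilinear comes from
// "tensorflow/lite/builtin_ops.h".
if (registration->builtin_code == kTfLiteBuiltinResizeBilinear) {
  if (unsupported_details) {
    *unsupported_details = "RESIZE_BILINEAR forced onto the CPU by local patch";
  }
  return false;
}
```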

However, when we run our program on the robot, the log writes:

ERROR: Following operations are not supported by GPU delegate:
DEQUANTIZE:
RESIZE_BILINEAR:
197 operations will run on the GPU, and the remaining 1 operations will run on the CPU.

Ideally, RESIZE_BILINEAR should be the only op that does not run on the GPU, but somehow DEQUANTIZE shows up here as well. Stranger still, the log reports two op names, yet the next sentence says "the remaining 1 operations will run on the CPU". Do you know what is happening here?

Also, I tested the fl32 model and the fl16 model, and there is almost no difference in inference time between them. So why should we use fl16 quantization here? (Is there any advantage of fl16 compared to fl32?)

Thanks!

Attachment: model_halfed_V2_fl16.tflite
