AI Benchmark app not responding

sayak

New member
I tried to benchmark our TFLite model (generated using a TF 1.15 runtime) using the AI Benchmark app. Here are additional details:
* Device model id: RMX1801.
* Android version: 8.1.
* When exactly this problem appears: after clicking "Run Model" under the "Custom Model" tab.
 

Andrey Ignatov

Administrator
Staff member
Ok, the problem here is not with the benchmark but with an incompatibility between TFLite versions: your converted model cannot be opened correctly by the TFLite library (v2.4) integrated in the benchmark. Can you please provide a few more details about your model:
  1. Are you using Keras or TensorFlow for model conversion?
  2. What TensorFlow version did you use to convert the model?
  3. Is your model quantized or floating-point?
  4. Did you try the conversion instructions provided here or here?
 

sayak

New member
Are you using Keras or TensorFlow for model conversion?

- TensorFlow SavedModel

What TensorFlow version did you use to convert the model?

- TensorFlow 1.15.0

Is your model quantized or floating-point?

- We used dynamic-range quantization.

I am assuming we should not have used dynamic-range quantization.
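The conversion described above can be sketched as follows. This is a minimal, hypothetical example: the real model was a TF 1.15 SavedModel (converted via `TFLiteConverter.from_saved_model`), but here a tiny stand-in Keras model and the TF 2.x converter API are used so the snippet is self-contained:

```python
# Minimal sketch of a dynamic-range quantized conversion. The tiny Keras
# model below is a hypothetical stand-in for the real SavedModel.
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(8,))])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Optimize.DEFAULT with no representative dataset -> dynamic-range quantization
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model_dynamic_range.tflite", "wb") as f:
    f.write(tflite_model)
```

With this recipe only the weights are quantized to int8; activations stay in float at runtime, which is what distinguishes it from full-integer quantization.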
 

Andrey Ignatov

Administrator
Staff member
I am assuming we should not have used dynamic-range quantization.
Yes, dynamic-range quantization should never, ever be used as it is not supported by anyone now and will likely be re-implemented completely in the future.
 

sayak

New member
Got it. But I didn't quite get "it is not supported by anyone", since the TFLite Benchmark tool (v2) does support different quantization schemes, including dynamic-range.
 

Andrey Ignatov

Administrator
Staff member
since TFLite Benchmark (v2) does support different quantizations including dynamic-range

This does not mean that it is possible to get some speed-up or even to run the obtained dynamic-range quantized models on real NPUs or DSPs.
 

sayak

New member
I see. Got your point; I had initially misunderstood it.

On a related note, I have been able to run full-integer quantized models on DSPs (an example). So, do you think that might be the case here?
 

Andrey Ignatov

Administrator
Staff member
If we fully integer quantize our models, then we should be good, right?

This depends on the track: you need to quantize your model only in the camera scene detection and image super-resolution challenges; in all other competitions you can submit the original floating-point networks.

For the above-mentioned quantized tracks, you can just follow the published tutorials and you should get DSP/NPU-compatible models.
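The full-integer recipe those tutorials describe can be sketched like this (a hypothetical stand-in model and random calibration data are used so the snippet is self-contained; a real model would use a few hundred actual input samples):

```python
# Sketch of full-integer post-training quantization: a representative
# dataset calibrates activation ranges, and restricting the op set to
# INT8 yields a model that DSPs/NPUs can run. Model and data below are
# hypothetical stand-ins.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(8,))])

def representative_dataset():
    # In practice: iterate over real calibration samples.
    for _ in range(100):
        yield [np.random.rand(1, 8).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8   # fully integer input/output
converter.inference_output_type = tf.int8
tflite_int8_model = converter.convert()
```

Unlike dynamic-range quantization, this converts both weights and activations to int8, which is why the resulting model can be delegated to integer-only accelerators.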
 