Hello,
I have some questions about running a converted TFLite model on a smartphone with AI Benchmark.
Specifically, the converted float32 TFLite model works fine in all acceleration modes, but I find that the int8 model runs much slower when I test it with NNAPI. As shown below...
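For context, here is roughly how such an int8 model is produced with TFLite post-training full-integer quantization. This is a generic sketch with a tiny stand-in Keras model and a random representative dataset, not my exact conversion script; the real network and calibration data differ.

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in network (placeholder for the real model being benchmarked).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])

def representative_data():
    # Calibration samples used to estimate activation ranges;
    # random data here only for illustration.
    for _ in range(8):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
# Force full-integer quantization so every op runs in int8.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_int8 = converter.convert()  # serialized int8 .tflite flatbuffer
```

If any op in the graph fails to quantize and falls back to float, NNAPI may reject or partition the model, which is one common cause of int8 running slower than float32 on that delegate.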