Inference slowdown on Google Pixel 6 Pro when camera is active

andrewg

New member
I've noticed that running my TFLite CNNs through NNAPI on the Google Pixel 6 Pro (target device "google-edgetpu") while the camera is active causes large increases in inference runtimes (I observed roughly a 30-500% increase compared to an inactive camera, depending on the model). I am trying to simultaneously display a camera preview on the screen and run frame-by-frame ML analysis on the camera feed, but this slowdown is a hindrance. I suspect the device is using the TPU for some image post-processing by default, but I have not been able to find a setting that can be turned off to avoid this.

Any ideas on exactly what is happening and if/how it can be fixed?
 

Andrey Ignatov

Administrator
Staff member
Hi @andrewg,

Sorry for the late response.

the camera is active causes large increases in inference runtimes
I suspect the device is using the TPU for some image post-processing by default

Yes, the Google Tensor TPU was heavily advertised as being used for photo processing. I am not aware of any option that disables it while displaying the camera preview using the standard SurfaceView, but you can try running your model on the GPU instead with the TFLite GPU delegate. In many cases, its performance will be very close to that of the TPU.
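For reference, a minimal sketch of switching from NNAPI to the GPU delegate (assuming the standard org.tensorflow.lite and tensorflow-lite-gpu dependencies; the model file is a placeholder for your own network):

```kotlin
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.gpu.CompatibilityList
import org.tensorflow.lite.gpu.GpuDelegate
import java.io.File

// Sketch: run inference via the TFLite GPU delegate instead of NNAPI,
// so the model no longer competes with the camera pipeline for the TPU.
fun buildGpuInterpreter(modelFile: File): Interpreter {
    val compatList = CompatibilityList()
    val options = Interpreter.Options()
    if (compatList.isDelegateSupportedOnThisDevice) {
        // Use device-tuned delegate options when the GPU path is supported.
        options.addDelegate(GpuDelegate(compatList.bestOptionsForThisDevice))
    } else {
        // Fall back to multi-threaded CPU if the GPU delegate is unavailable.
        options.setNumThreads(4)
    }
    return Interpreter(modelFile, options)
}
```

Note that GPU delegate performance depends on whether all of your model's ops are supported on the GPU; unsupported ops fall back to the CPU, so it is worth benchmarking both paths on the actual device.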
 