Hey Andrey, I want to piggyback on this question and ask whether you and the AI-Bench team could:
Share the exact model configurations you used for the AI tests, rather than linking to the papers and/or public repositories.
Share the source code for the ports of the models used for benchmarking (e.g...
Hey @Mako443, thanks for answering your own question + sharing the solutions; much appreciated. We are also working with Qualcomm SoCs (SNPE, QNN) and are increasingly frustrated by the state of their ecosystem (their developer forum literally runs on a potato server) + the non-availability of...
Hi @Andrey Ignatov,
Two questions about the Power Efficiency section of the Burnout benchmark:
What inference precision (INT8 or FP16) was used to compute the "NPU FPS / Watt" and "NPU, Avg. Watt" metrics?
How was the power measured, and which profiler was used?
Thanks, E.