Recent content by Emperor

  1. Emperor

    Is it possible to get source code of AI bench 5.0.3

    Hey Andrey, I want to piggy-back on this question and ask whether you and the AI-Bench team could: share the exact model configurations you used for the AI tests instead of linking to the papers and/or public repositories; share the source code for the model ports used for benchmarking (e.g...
  2. Emperor

    How to reproduce Float16 models on HTP

    Hey @Mako443, thanks for answering your own question + sharing the solutions; very much appreciated. We are also working with Qualcomm SoCs (SNPE, QNN) and are increasingly frustrated by the state of their ecosystem (their developer forum literally runs on a potato server) + non-availability of...
  3. Emperor

    Power Efficiency Measurements & Inference Precision

    Hey everyone, @Andrey Ignatov, bumping the previous message. I hope someone can help. Thanks 🙏
  4. Emperor

    Power Efficiency Measurements & Inference Precision

    Hi @Andrey Ignatov, two questions about the Power Efficiency section of the Burnout benchmark: What inference precision (INT8, FP16) was used for calculating the metrics "NPU FPS / Watt" and "NPU, Avg. Watt"? How was the power measured, and which profiler was used for this? Thanks, E.