Search results

  1. Andrey Ignatov

    What is the source repo/original model used for Section 23 Text Completion test?

    In terms of architecture, you can just check the TFLite models from AI Benchmark APK.
  2. Andrey Ignatov

    What is the source repo/original model used for Section 23 Text Completion test?

    No, that's not a typo - this model was adapted with some small modifications to the text completion task, being applied to word embeddings instead of images.
  3. Andrey Ignatov

    AI Benchmark 5.1.1 Mobile Released

    What's New: Updated Qualcomm QNN and MediaTek Neuron delegates. Enhanced stability and accuracy of the power consumption test. Various bug fixes and performance improvements. Download this release from the official website or from the Google Play store. Feel free to discuss AI Benchmark...
  4. Andrey Ignatov

    AI Benchmark V5 Scores Updates

    Detailed AI Benchmark V5 results were released for over 50 IoT, smartTV and automotive platforms: https://ai-benchmark.com/ranking_IoT https://ai-benchmark.com/ranking_IoT_detailed The results of the recently presented mobile chipsets including the Snapdragon 8 Gen 3, Dimensity 9300, Google...
  5. Andrey Ignatov

    Does aibenchmark update and adapt to AndroidT?

    Hi @Mountain, replied to you by email.
  6. Andrey Ignatov

    AI Benchmark 5.1.0 Mobile Released

    What's New: Added new NPU power consumption test. Updated TFLite runtime. Updated TFLite GPU, NNAPI, Qualcomm QNN, Hexagon NN and Samsung ENN delegates. Updated in-app ranking table. Various bug fixes and performance improvements. Download this release from the official website or from the...
  7. Andrey Ignatov

    Difference between the HTP and DSP delegate?

    This is a very brief answer, but the general idea is as follows: HTP = rebranded compute DSP (since Snapdragon 888 / Hexagon v68): contains HVX and HMX co-processors / modules. Note that both HVX and HMX modules are also present in other Hexagon DSPs without HTP. HTA = additional co-processor...
  8. Andrey Ignatov

    Is it possible to get source code of AI bench 5.0.3

    One can potentially extract all models directly from the benchmark APK file. Feel free to use this forum for sharing or comparing the results - such posts will not be deleted or banned.
  9. Andrey Ignatov

    More information about result consolidation

    Yes, average or median of the results after removing the outliers. For the majority of SoCs, the results are obtained based on phone measurements, but in some cases development kits are also used (e.g., when no actual devices have been released yet). No, the SoC ranking is not taking into...
  10. Andrey Ignatov

    Difference between the HTP and DSP delegate?

    Yes, partly: the Hexagon 6xx family is denoted as DSPs in QNN, while the Hexagon 7xx family - as HTPs. Here you can find the full list of Hexagon processors. There are also large architectural differences between these two families - the latest HTPs, for instance, are able to accelerate both...
  11. Andrey Ignatov

    Questions about Apple Benchmarks

    INT8 models were running with the TFLite GPU delegate. No, these are plain NPU/GPU runtime results. Because of the bug in the iOS TFLite implementation.
  12. Andrey Ignatov

    Power Efficiency Measurements & Inference Precision

    In the standard benchmark mode, only INT8 inference is tested. However, one can also check the results of FP16 inference in the PRO mode. You can switch between different NPU inference profiles in the settings, sustained speed is used by default.
  13. Andrey Ignatov

    Reference for error/accuracy

    Hi @bagofwater, Thank you for your suggestions. For FP16 inference, the targets are generated in FP32 mode, which provides an accuracy of 7-8 digits after the decimal point, so there are no issues here.
  14. Andrey Ignatov

    Can't run very a basic model with GPU delegate. Problem with my conversion?

    Yes, your model should have only one input layer in order to be executed successfully. The easiest workaround here would be to stack two input tensors together into a single input layer and then unstack them during inference.
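    A minimal sketch of this stack/unstack workaround, assuming a TensorFlow/Keras model; the shapes and layers below are purely illustrative and not taken from the benchmark:

    ```python
    import tensorflow as tf

    # Both original inputs are assumed to share the shape [128, 128, 3]; the caller
    # stacks them into a single [2, 128, 128, 3] tensor that feeds one input layer.
    stacked = tf.keras.Input(shape=(2, 128, 128, 3), name="stacked_input")

    # Unstack the single input back into the two original branches during inference.
    branch_a, branch_b = tf.unstack(stacked, num=2, axis=1)

    # Placeholder for the original two-input graph.
    merged = tf.keras.layers.Concatenate()([branch_a, branch_b])
    output = tf.keras.layers.Conv2D(3, 3, padding="same")(merged)

    model = tf.keras.Model(stacked, output)
    ```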
  15. Andrey Ignatov

    How to access APU in Mediatek chipsets

    Right now, it is not possible to force enable delegates when running custom models, this functionality will be added in the next benchmark version.
  16. Andrey Ignatov

    tflite model input data type mismatch?

    You need to use TensorFlow's full integer quantization: https://www.tensorflow.org/lite/performance/post_training_integer_quant You can also find a useful example showing how to obtain a fully quantized model here...
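    For reference, a short sketch of the full integer quantization flow from the linked TensorFlow guide; the saved_model path, input shape and random calibration data are placeholders - use your own representative samples:

    ```python
    import numpy as np
    import tensorflow as tf

    def representative_dataset():
        # Yield ~100 calibration samples matching the model's input shape and value range.
        for _ in range(100):
            yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

    converter = tf.lite.TFLiteConverter.from_saved_model("path/to/saved_model")
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_dataset
    # Force full-integer quantization, including the input and output tensors.
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8

    with open("model_quant.tflite", "wb") as f:
        f.write(converter.convert())
    ```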
  17. Andrey Ignatov

    How to access APU in Mediatek chipsets

    Try force enabling the Neuron delegate in the acceleration settings by pressing > 3 times on this option. The latest Neuron build should generally be compatible with Dimensity 920-based phones running Android T. It should also be accessible through Android NNAPI.
  18. Andrey Ignatov

    Oculus (Meta) Quest 2

    Hi @tvkamara, Yes, unfortunately the latest Android versions do not allow the app to access any files without the user explicitly selecting them. Can you please send us the error logs from the logcat? Oculus Quest 2 is using a highly customized Android version, thus we need more details...
  19. Andrey Ignatov

    Device List

    Hi @David Kerr, Thanks! We include in the ranking table the majority of devices whose scores can be validated. We also observed the results of a number of your devices in the past. Please contact me by email if you want some of them to be published - we might have a few questions regarding...
  20. Andrey Ignatov

    Transformer-based architectures not working with GPU delegate

    TFLite GPU delegate supports only a subset of TFLite / TF ops (mainly related to computer vision), thus it's very common that many NLP models cannot be executed with it. In your case, the problem is with the following 3 layers: GATHER, RESHAPE and SLICE. In principle, it should be possible to...
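    As a side note, recent TensorFlow releases (2.9+) include a model analyzer that can flag GPU-delegate-incompatible ops ahead of time; a minimal sketch with a placeholder model path:

    ```python
    import tensorflow as tf

    # Prints the op list of the converted model and marks ops that the TFLite GPU
    # delegate cannot execute (e.g. the GATHER / RESHAPE / SLICE variants mentioned above).
    tf.lite.experimental.Analyzer.analyze(
        model_path="model.tflite",
        gpu_compatibility=True,
    )
    ```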
  21. Andrey Ignatov

    What are Qualcomm QNN HTP/DSP Delegates?

    Hi @Andreas Kirmse, As far as we know, you should be able to download the QNN SDK from Qualcomm Createpoint after registering there. However, it might not yet be available to all users. Well, there is a good reason for this. NNAPI has lots of issues that were not solved by Google, and...
  22. Andrey Ignatov

    Is it possible to get source code of AI bench 5.0.3

    Hi @koka, Unfortunately, the source code of AI Benchmark for Android is not publicly available. Are your modifications related to support of new ops/layers, or to a different inference/acceleration backend?
  23. Andrey Ignatov

    AI Benchmark 5.0.3 Mobile Released

    What's New: Updated TFLite GPU, Qualcomm QNN, MediaTek Neuron and Hexagon NN delegates. Updated in-app ranking table. Various bug fixes and performance improvements. Download this release from the official website or from the Google Play store. Feel free to discuss AI Benchmark results in...
  24. Andrey Ignatov

    The Meaning of Lib and NNAPI 1.1 / NNAPI 1.3

    These results were obtained with the vendor's tflite delegate. NNAPI-1.1/1.3 here refers to the set of ops present in the corresponding models.
  25. Andrey Ignatov

    Support for custom delegates

    Yes, it can be integrated into AI Benchmark. Please send us an email with the information about your delegate (andrey at vision.ee.ethz.ch).
  26. Andrey Ignatov

    Can't use mediatek neuron delegate

    Perfect. Yes, there are a couple of changes in Android 13 preventing the lib from working correctly on a number of Dimensity-based devices. The next benchmark update will resolve these problems.
  27. Andrey Ignatov

    Can't use mediatek neuron delegate

    Hi @jinqimu, Thanks for the provided info. Can you please try to install and run this beta benchmark build with an updated Neuron lib and let us know if you still face this issue?
  28. Andrey Ignatov

    AI Benchmark V5 Phone Scores Updates

    Detailed AI Benchmark V5 results were released for over 750 Android devices: https://ai-benchmark.com/ranking https://ai-benchmark.com/ranking_detailed Historical results for previous AI Benchmark versions are still available in our archive section: v4.0.3 (year 2021)...
  29. Andrey Ignatov

    Can't use mediatek neuron delegate

    That's interesting, can you please specify your phone model and Android version/build? Info from the logcat for the 1st test would also be useful (if you can get it).
  30. Andrey Ignatov

    Performance numbers in WSL2

    Yes, that's normal - these models are using specific NLP-based ops that are currently not fully supported by DirectML.
  31. Andrey Ignatov

    Implementation of GPU and NPU usage / performance monitoring

    You can sometimes find this information in the logcat when initializing the model with TFLite/NNAPI. Or you can tell us the device model you are using. There is no easy way to do this.
  32. Andrey Ignatov

    Parallel execution of neural networks, performance impact?

    Yes, AI Benchmark sections 5, 6 and 10 are testing exactly this behavior, the results of these tests can be found here.
  33. Andrey Ignatov

    Bokeh Effect Rendering Challenge

    Yes, all the AIM 2022 award certificates are available here: https://data.vision.ee.ethz.ch/cvl/aim22/AIM2022awards_certificates.pdf
  34. Andrey Ignatov

    Real-Time Image Super-Resolution Challenge

    Yes, all the AIM 2022 award certificates are available here: https://data.vision.ee.ethz.ch/cvl/aim22/AIM2022awards_certificates.pdf
  35. Andrey Ignatov

    Bokeh Effect Rendering Challenge

    https://ai-benchmark.com/workshops/mai/2022/#challenges
  36. Andrey Ignatov

    Bokeh Effect Rendering Challenge

    The scores reported in this table were computed using PSNR instead of MOS. In any case, this challenge has two official winners - team Antins_cs and ENERZAi - thus this does not change anything.
  37. Andrey Ignatov

    AI Benchmark Accuracy/Error Questions

    Hi @yuchai84, Sum of each image's per-label loss / number of images. Average per-pixel L1 error.
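    In other words, the second metric is a plain mean absolute difference; a minimal NumPy sketch with illustrative argument names:

    ```python
    import numpy as np

    def average_per_pixel_l1(prediction, target):
        # Mean absolute difference over all pixels (and channels) of an image pair.
        return np.mean(np.abs(prediction.astype(np.float64) - target.astype(np.float64)))
    ```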
  38. Andrey Ignatov

    Bokeh Effect Rendering Challenge

    As there was no time for proofreading of this paper, it was sent directly to the publisher.
  39. Andrey Ignatov

    a question about pro mode power consumption

    Hi @AndreaChi, Only power consumption of the acceleration unit.
  40. Andrey Ignatov

    Interpretation of Benchmark Score & Runtime

    No, INT8 and FP16 scores are normalized using different coefficients.
  41. Andrey Ignatov

    Bokeh Effect Rendering Challenge

    The final results were released using the same link.
  42. Andrey Ignatov

    Real-Time Image Super-Resolution Challenge

    Yes, you will receive them closer to the actual ECCV workshop event. We don't know yet when the publisher will release these papers, but we might also upload them to arXiv this month. It will likely be a hybrid event, more information will be published soon on the website.
  43. Andrey Ignatov

    Real-Time Image Super-Resolution Challenge

    Yes. An email with the final results was just sent to all challenge participants.
  44. Andrey Ignatov

    Monocular Depth Estimation Challenge

    Hi @zhyever, An email with the final submission instructions was sent yesterday to all registered participants.
  45. Andrey Ignatov

    Real-Time Image Super-Resolution Challenge

    Only submissions containing a valid challenge report are evaluated during the test phase. Submissions having only a TFLite model are discarded. Then, they would be measured once again in an "offline" mode.
  46. Andrey Ignatov

    Learned Smartphone ISP Challenge

    Hi @Msss, thanks for noticing this issue. The results of both of your submissions are now published in the ranking table.
  47. Andrey Ignatov

    Bokeh Effect Rendering Challenge

    We can see that you made a successful Codalab submission today at 7:19:51 PM.
  48. Andrey Ignatov

    Real-Time Video Super-Resolution Challenge

    You can ignore this, the name of the archive does not matter.
  49. Andrey Ignatov

    Bokeh Effect Rendering Challenge

    Yes, this is the correct input size. This code shows you how to resize the input tensor of your model.
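    For reference, resizing the input tensor with the Python TFLite Interpreter looks roughly like the sketch below; the model path and target shape are placeholders:

    ```python
    import numpy as np
    import tensorflow as tf

    interpreter = tf.lite.Interpreter(model_path="model.tflite")
    input_index = interpreter.get_input_details()[0]["index"]

    # Resize the input tensor to the required resolution, then reallocate buffers.
    interpreter.resize_tensor_input(input_index, [1, 1024, 1024, 3])
    interpreter.allocate_tensors()

    interpreter.set_tensor(input_index, np.zeros((1, 1024, 1024, 3), dtype=np.float32))
    interpreter.invoke()
    result = interpreter.get_tensor(interpreter.get_output_details()[0]["index"])
    ```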