Search results

  1. Andrey Ignatov

    Mobile AI Workshop General Organization Questions

    Because these are two different challenges (quantized vs. floating-point) with different requirements (target platforms, min. required accuracy).
  2. Andrey Ignatov

    Mobile AI Workshop General Organization Questions

    Please find this information using the following link: https://codalab.lisn.upsaclay.fr/competitions/21868#learn_the_details-evaluation The participants in this challenge are required to submit their final Python code that: 1) takes the input text prompt, 2) generates text using this prompt as...
  3. Andrey Ignatov

    Mobile AI Workshop Technical and Model Conversion Questions

    Thanks for noticing. Instead of the PUNET model, we are now providing a more efficient MicroISP baseline. The corresponding link was added to the challenge description.
  4. Andrey Ignatov

    Mobile AI Workshop General Organization Questions

    Yes, in this challenge there is only one phase, where you can upload your final solution. You can make as many submissions as you want, but only your last one counts (will be validated). There is no automatic validation in Codalab - instead, we use it only for uploading solutions (thus the...
  5. Andrey Ignatov

    Is it possible to get source code of AI bench 5.0.3

    APK is just an archive, you can unpack it with 7z or any other file archiver.
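    Since an APK is an ordinary zip archive, Python's standard zipfile module can also open it directly. The snippet below is a self-contained sketch: it builds a tiny mock "APK" in memory (the entry names are made up for illustration); with a real file you would pass its path to zipfile.ZipFile instead.

```python
import io
import zipfile

# Build a tiny mock "APK" (just a zip) in memory for a runnable demo.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as apk:
    apk.writestr("classes.dex", b"\x64\x65\x78")      # compiled app code
    apk.writestr("assets/model.tflite", b"TFL3")      # a bundled model file

# Reading works the same way on a real .apk file on disk.
with zipfile.ZipFile(buf) as apk:
    names = apk.namelist()
    model_bytes = apk.read("assets/model.tflite")

print(names)        # ['classes.dex', 'assets/model.tflite']
print(model_bytes)  # b'TFL3'
```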
  6. Andrey Ignatov

    Mobile AI Workshop General Organization Questions

    The results were sent today by email. Yes, this will be done soon.
  7. Andrey Ignatov

    Mobile AI Workshop General Organization Questions

    For the majority of challenges, the results will be sent this week. You need to follow the final submission instructions. All conditionally accepted papers will be checked before being sent to the publisher to verify that all issues are fixed. If this was not done, the paper would be...
  8. Andrey Ignatov

    Mobile AI Workshop Technical and Model Conversion Questions

    The correct one is:

        dim_2 = 8 * torch.export.Dim("height", min=8, max=256)
        dim_3 = 8 * torch.export.Dim("width", min=8, max=256)
        edge_model = ai_edge_torch.convert(model.eval(), sample_input,
                                           dynamic_shapes=({2: dim_2, 3: dim_3},))

    However, this option is broken in the latest PyTorch...
  9. Andrey Ignatov

    Mobile AI Workshop General Organization Questions

    You don't need to upload the results on the test images in this challenge. That's another issue; as we mentioned during yesterday's Q&A session, the data for the denoising challenge will be provided later, when the challenge transitions to ICCV.
  10. Andrey Ignatov

    Mobile AI Workshop General Organization Questions

    First of all, this is not about a 0.002 dB accuracy difference - this is about some ops that are completely screwed up by the ai-edge-torch converter, so that you get total corruptions instead of the real output. Secondly, this primarily refers to the challenges where we have an unconstrained track...
  11. Andrey Ignatov

    Mobile AI Workshop General Organization Questions

    If your submission exceeds 300 MB, please upload your visual results or code to a separate shared storage platform and provide the corresponding link in your factsheet. Yes, you can submit everything in one zip file.
  12. Andrey Ignatov

    Mobile AI Workshop Technical and Model Conversion Questions

    It's not an issue with the input channels - it's a problem with a bilinear resize layer. The issue is self-explanatory: half_pixel_center and align_corner options cannot be used at the same time. Try changing the parameters of the resize layer in your model.
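    For intuition on why these two options conflict: they define two incompatible mappings from output-pixel indices to input coordinates. The sketch below implements the commonly cited formulas for the two conventions in pure Python (an illustration of the conventions only, not TFLite's actual resize code).

```python
def src_coord_align_corners(dst, in_size, out_size):
    # align_corners: the corner pixels of input and output grids coincide.
    if out_size == 1:
        return 0.0
    return dst * (in_size - 1) / (out_size - 1)

def src_coord_half_pixel(dst, in_size, out_size):
    # half_pixel_centers: pixel centers are treated as sitting at x + 0.5.
    scale = in_size / out_size
    return (dst + 0.5) * scale - 0.5

# Upscaling a 4-pixel row to 8 pixels: the two conventions map the same
# output pixel to different source coordinates, so a resize layer cannot
# honor both at once.
print(src_coord_align_corners(7, 4, 8))  # 3.0
print(src_coord_half_pixel(7, 4, 8))     # 3.25
```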
  13. Andrey Ignatov

    Mobile AI Workshop Technical and Model Conversion Questions

    Are you using the ai_edge_torch plugin for model conversion like in this tutorial?
  14. Andrey Ignatov

    Mobile AI Workshop Technical and Model Conversion Questions

    By default, all solutions will be evaluated using FP16 + TFLite GPU Delegate mode. However, CPU backend will be used for solutions not supporting this option. TFLite GPU delegate is the easiest mode in terms of model adaptation as it supports nearly all common ops. However, yes, there might be...
  15. Andrey Ignatov

    Mobile AI Workshop General Organization Questions

    Using the logarithm operator is not a great option in itself, as it's a purely logical op, i.e., there is no NPU/GPU acceleration, if the platform supports this op at all.
  16. Andrey Ignatov

    Mobile AI Workshop General Organization Questions

    Yes, it is based solely on the NPU runtime; the CPU runtime is provided only for reference. However, if a final submission doesn't run on the NPU, its CPU runtime will be used instead.
  17. Andrey Ignatov

    How can we get the exact result of SoC (not Smartphone)?

    SoC score is obtained in the same way as the Mobile score, but it does not take into account the results of the last memory test.
  18. Andrey Ignatov

    What kind of pre-trained model does Vision Transformer (ViT) use?

    Yes, but an alternative implementation (not from Google).
  19. Andrey Ignatov

    Mobile AI Workshop Technical and Model Conversion Questions

    Does this error also occur during CPU-based inference? One possible issue might be the dimensionality of your tensor: GPU delegate might be expecting a 4D tensor, but it received a 3D one. If you are still unable to solve this problem, you can send us your model by email.
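    If the 3D-vs-4D mismatch is the culprit, the usual fix is to prepend a batch dimension of 1 so the tensor is NHWC before it reaches the delegate. A dependency-free sketch of that check, using nested lists in place of real tensors (illustrative only):

```python
def ensure_4d(t):
    # The TFLite GPU delegate commonly expects NHWC (4D) input; if a tensor
    # arrives as HWC (3D), prepend a batch dimension of 1.
    def ndim(x):
        d = 0
        while isinstance(x, list):
            d += 1
            x = x[0]
        return d
    return [t] if ndim(t) == 3 else t

hwc = [[[0.1, 0.2]]]      # shape (1, 1, 2): H=1, W=1, C=2
nhwc = ensure_4d(hwc)     # shape (1, 1, 1, 2): batch dimension added
print(len(nhwc))  # 1
```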
  20. Andrey Ignatov

    Mobile AI Workshop General Organization Questions

    ai-edge-torch plugin automatically converts PyTorch NCHW models to TFLite NHWC models: https://github.com/aiff22/MAI-2025-Workshop/blob/main/pytorch_to_tflite.py If you have any issues with it, please post in this thread, we will then check this separately.
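    The layout change itself is just an axis permutation: an (N, C, H, W) tensor becomes (N, H, W, C). A pure-Python sketch of that permutation on nested lists, purely for intuition - ai-edge-torch performs this for you during conversion:

```python
def nchw_to_nhwc(t):
    # t is a nested list with shape (N, C, H, W); returns shape (N, H, W, C).
    n, c = len(t), len(t[0])
    h, w = len(t[0][0]), len(t[0][0][0])
    return [[[[t[i][k][j][l] for k in range(c)]   # channels become innermost
              for l in range(w)]
             for j in range(h)]
            for i in range(n)]

# 1x2x2x2 example: two channels of a 2x2 image.
x = [[[[1, 2], [3, 4]],    # channel 0
      [[5, 6], [7, 8]]]]   # channel 1
y = nchw_to_nhwc(x)
print(y)  # [[[[1, 5], [2, 6]], [[3, 7], [4, 8]]]]
```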
  21. Andrey Ignatov

    Mobile AI Workshop General Organization Questions

    We extended the number of submissions to 10 in the test phase in all challenges. Note, however, that only your last submission counts. Training code is also needed as we check some submissions for reproducibility.
  22. Andrey Ignatov

    Mobile AI Workshop Technical and Model Conversion Questions

    Yes, there is indeed an issue with dynamic input size for PyTorch to TFLite conversion. We will accept this for models converted from PyTorch. However, be prepared that we might ask you to provide the same model with additional input sizes if automatic tensor resizing won't work for your model.
  23. Andrey Ignatov

    Mobile AI Workshop General Organization Questions

    Yes, that's right, you can ignore this error. What we need is just a zip archive uploaded to Codalab with the requested TFLite model and a factsheet.
  24. Andrey Ignatov

    Mobile AI Workshop General Organization Questions

    These are separate things: To participate in the final competition phase, you need to submit your TFLite model and a factsheet describing your solution to Codalab. In addition to that, you can also submit a separate full-length paper to the CMT paper submission website. If the paper is well...
  25. Andrey Ignatov

    Mobile AI Workshop General Organization Questions

    You are right, that was incorrect info; we've updated it.
  26. Andrey Ignatov

    Mobile AI Workshop General Organization Questions

    We've just sent the final submission instructions for all challenges. No, the deadline is on the 28th of March as was stated on the Codalab webpage. As was stated in the latest instructions, you can send us up to three models this week for runtime validation on the target device. The deadline...
  27. Andrey Ignatov

    Mobile AI Workshop Technical and Model Conversion Questions

    Any technical questions and questions related to model conversion to TFLite format can be asked in this thread.
  28. Andrey Ignatov

    Mobile AI Workshop General Organization Questions

    Any questions related to general MAI Workshop and Challenges organization can be asked in this thread.
  29. Andrey Ignatov

    AI Benchmark 6.0.1 Mobile Released

    What's New: 1. Updated QNN delegate brings support for the Snapdragon 8 Elite and improves the results of previous-gen Qualcomm SoCs. 2. Updated Neuron delegate brings support for the Dimensity 8300 and improves the results of the Dimensity 9400, 9300 and 8300 SoCs. Download this release from...
  30. Andrey Ignatov

    Bokeh Effect Rendering Challenge

    We've updated the links on the official website: https://people.ee.ethz.ch/~ihnatova/pynet-bokeh.html#dataset
  31. Andrey Ignatov

    Burnout Benchmark

    Yes, you can do this after entering the PRO mode, from where you can launch each workload separately (for CPU, GPU and NPU) for an unlimited amount of time. No, it's not interacting with the security system directly. This can happen when the phone's power management system is not working...
  32. Andrey Ignatov

    AI Benchmark 6.0.0 Mobile Released

    What's New: 1. New tasks and models: Vision Transformer (ViT) architectures, Large Language Models (LLMs), Stable Diffusion network, etc. 2. Added tests checking the performance of quantized INT16 inference. 3. LiteRT (TFLite) runtime updated to version 2.17. 4. Updated Qualcomm QNN, MediaTek...
  33. Andrey Ignatov

    What is the source repo/original model used for Section 23 Text Completion test?

    In terms of architecture, you can just check the TFLite models from AI Benchmark APK.
  34. Andrey Ignatov

    What is the source repo/original model used for Section 23 Text Completion test?

    No, that's not a typo - this model was adapted with some small modifications to the text completion task, being applied to word embeddings instead of images.
  35. Andrey Ignatov

    AI Benchmark 5.1.1 Mobile Released

    What's New: Updated Qualcomm QNN and MediaTek Neuron delegates. Enhanced stability and accuracy of the power consumption test. Various bug fixes and performance improvements. Download this release from the official website or from the Google Play store. Feel free to discuss AI Benchmark...
  36. Andrey Ignatov

    AI Benchmark V5 Scores Updates

    Detailed AI Benchmark V5 results were released for over 50 IoT, smartTV and automotive platforms: https://ai-benchmark.com/ranking_IoT https://ai-benchmark.com/ranking_IoT_detailed The results of the recently presented mobile chipsets including the Snapdragon 8 Gen 3, Dimensity 9300, Google...
  37. Andrey Ignatov

    Does aibenchmark update and adapt to AndroidT?

    Hi @Mountain, replied to you by email.
  38. Andrey Ignatov

    AI Benchmark 5.1.0 Mobile Released

    What's New: Added new NPU power consumption test. Updated TFLite runtime. Updated TFLite GPU, NNAPI, Qualcomm QNN, Hexagon NN and Samsung ENN delegates. Updated in-app ranking table. Various bug fixes and performance improvements. Download this release from the official website or from the...
  39. Andrey Ignatov

    Difference between the HTP and DSP delegate?

    This is a very brief answer, but the general idea is as follows: HTP = rebranded compute DSP (since Snapdragon 888 / Hexagon v68): contains HVX and HMX co-processors / modules. Note that both HVX and HMX modules are also present in other Hexagon DSPs without HTP. HTA = additional co-processor...
  40. Andrey Ignatov

    Is it possible to get source code of AI bench 5.0.3

    One can potentially extract all models directly from the benchmark APK file. Feel free to use this forum for sharing or comparing the results, such posts will not be deleted or banned.
  41. Andrey Ignatov

    More information about result consolidation

    Yes, the average or median of the results after removing outliers. For the majority of SoCs, the results are obtained from on-phone measurements, but in some cases development kits are also used (e.g., when no actual devices have been released yet). No, the SoC ranking is not taking into...
  42. Andrey Ignatov

    Difference between the HTP and DSP delegate?

    Yes, partly: the Hexagon 6xx family is denoted as DSPs in QNN, while the Hexagon 7xx family is denoted as HTPs. Here you can find the full list of Hexagon processors. There are also large architectural differences between these two families - the latest HTPs, for instance, are able to accelerate both...
  43. Andrey Ignatov

    Questions about Apple Benchmarks

    INT8 models were running with the TFLite GPU delegate. No, these are plain NPU/GPU runtime results. Because of the bug in the iOS TFLite implementation.
  44. Andrey Ignatov

    Power Efficiency Measurements & Inference Precision

    In the standard benchmark mode, only INT8 inference is tested. However, one can also check the results of FP16 inference in the PRO mode. You can switch between different NPU inference profiles in the settings, sustained speed is used by default.
  45. Andrey Ignatov

    Reference for error/accuracy

    Hi @bagofwater, Thank you for your suggestions. For FP16 inference, the targets are generated in FP32 mode, which provides an accuracy of 7-8 digits after the decimal point, so there are no issues here.
  46. Andrey Ignatov

    Can't run very a basic model with GPU delegate. Problem with my conversion?

    Yes, your model should have only one input layer in order to be executed successfully. The easiest workaround here would be to stack two input tensors together into a single input layer and then unstack them during inference.
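    One way to implement this workaround is to concatenate the two tensors along the channel axis at the single input layer and split them back apart inside the model. A dependency-free sketch on nested HWC lists (illustrative only; in a real model you would use the framework's concat/split ops):

```python
def stack_inputs(a, b):
    # Concatenate two equally-shaped HWC tensors along the channel axis,
    # so the model can expose a single input layer.
    return [[pa + pb for pa, pb in zip(row_a, row_b)]
            for row_a, row_b in zip(a, b)]

def unstack_input(x, c):
    # Inside the model, split the channels back into the two original tensors.
    a = [[px[:c] for px in row] for row in x]
    b = [[px[c:] for px in row] for row in x]
    return a, b

img1 = [[[1, 2]]]   # 1x1 image, 2 channels
img2 = [[[3, 4]]]
stacked = stack_inputs(img1, img2)   # 1x1 image, 4 channels
a, b = unstack_input(stacked, 2)
print(stacked)              # [[[1, 2, 3, 4]]]
print(a == img1, b == img2) # True True
```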
  47. Andrey Ignatov

    How to access APU in Mediatek chipsets

    Right now, it is not possible to force enable delegates when running custom models, this functionality will be added in the next benchmark version.