Real-Time Camera Scene Detection Challenge

YxChen

New member
If I test on a mobile device, can I only use the Samsung Galaxy S10 (Exynos) in this competition?
 

Choi SangBum

New member
Should the input resolution of the TFLite model also be the size of the dataset given in this competition?
How does the resizing (pre-processing) algorithm work in the test or validation phase?
 

Andrey Ignatov

Administrator
Staff member
Should the input resolution of the TFLite model also be the size of the dataset given in this competition?

The TFLite model should accept images of resolution 576 x 384 pixels. All the necessary pre-processing (e.g., rescaling or cropping) should be done inside the model.
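As background, one way to keep pre-processing inside the model is to make rescaling and resizing part of the Keras graph before conversion. This is only a hedged sketch, not the organizers' reference pipeline: the backbone, layer sizes, and the assumption that 576 x 384 means width x height (i.e., an input tensor of shape 384 x 576 x 3) are all illustrative, and `tf.keras.layers.Resizing`/`Rescaling` assume TF >= 2.6.

```python
import tensorflow as tf

# Hypothetical sketch: a classifier whose graph itself normalizes and
# resizes incoming 576 x 384 frames, so the exported TFLite file needs
# no external pre-processing. The tiny backbone is a placeholder.
inputs = tf.keras.Input(shape=(384, 576, 3), name="image")    # H x W x C (assumed)
x = tf.keras.layers.Rescaling(1.0 / 255.0)(inputs)            # scale uint8 range to [0, 1]
x = tf.keras.layers.Resizing(128, 128)(x)                     # in-graph downscaling
x = tf.keras.layers.Conv2D(8, 3, activation="relu")(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(30, activation="softmax")(x)  # 30 scene classes
model = tf.keras.Model(inputs, outputs)

# The converted model then accepts raw 576 x 384 images directly.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
```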
 

stvea

New member
Do you have any specific rules for the ranking? For example, what is the weight of inference speed in the grade?
 

Andrey Ignatov

Administrator
Staff member
Do you have any specific rules of the ranking? Such as what is the weight of inference speed in the grade?

The final submission score will be proportional to the accuracy and inversely proportional to the runtime. The exact scoring formula will be announced a bit later.
 

saikatdutta

New member
I got the following error while submitting:

WARNING: Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap.
Traceback (most recent call last):
File "/tmp/codalab/tmpP55f2y/run/program/evaluation.py", line 68, in <module>
print("Mobile: %s"%mobile)
NameError: name 'mobile' is not defined

What might be the issue?
 

Radu Timofte

New member
Staff member
I've checked the latest submission and I think that your .txt files are not in the right format.
Attached is an example of .txt files that you could check.
 

Attachments

  • submission_example.zip
    2.6 KB

saikatdutta

New member
I've checked the latest submission and I think that your .txt files are not in the right format.
Attached is an example of .txt files that you could check.
Thanks for pointing out the error, my submission was successful.
Although I got an accuracy score of 0.00%, I manually checked a few of my predictions, which were correct. Can you please confirm that:
1. the class id to class name mapping that I need to follow is this: https://competitions.codalab.org/competitions/28113#participate
2. predicted ids should be in the range 1-30, not 0-29.

Thanks.
 

YxChen

New member
The result was submitted two days ago, and the status was still "Submitting" two days later. What's wrong with it?
 

Radu Timofte

New member
Staff member
Codalab has been experiencing some issues since Friday.
They are working to fix the issues as soon as possible.

Sorry about that.
 

mobileai

New member
I am getting this error, although my predictions seem to be in the sample format provided above:
WARNING: Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap.
Traceback (most recent call last):
File "/tmp/codalab/tmpkLOipW/run/program/evaluation.py", line 39, in <module>
if len(prediction_list) != 600:
NameError: name 'prediction_list' is not defined

I can't seem to figure out the issue; any ideas? My user name is Sidiki. Thank you.
 

Andrey Ignatov

Administrator
Staff member
Can you check validation dataset image 189.jpg? It seems to have only 2 dimensions.

Yes, this is a black-and-white image, but there is no problem here - if needed, you can convert it to a normal RGB image by adding two additional channels with the same values.
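The channel-stacking suggested above can be sketched in a few lines of NumPy; the function name `gray_to_rgb` and the image dimensions are illustrative, not part of the challenge kit.

```python
import numpy as np

# Sketch: expand a single-channel (H, W) image, such as 189.jpg, into a
# 3-channel RGB array by repeating the same values on each channel.
def gray_to_rgb(img):
    if img.ndim == 2:                      # (H, W) -> (H, W, 3)
        img = np.stack([img] * 3, axis=-1)
    return img                             # already (H, W, 3): returned unchanged

gray = np.zeros((384, 576), dtype=np.uint8)
rgb = gray_to_rgb(gray)
print(rgb.shape)  # (384, 576, 3)
```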
 

YxChen

New member
"Test on bionic" server is not working. The status always be "In progress"

AI Benchmark has three modes: CPU, GPU, and NNAPI, which mode should be used when running on my device? It is worth noting that some type of models run much faster in CPU than in NNAPI

"Test on bionic" server is not working. The status always be "In progress" and I can't click the "Check Model" button
 

Andrey Ignatov

Administrator
Staff member
"Test on bionic" server is not working. The status always be "In progress" and I can't click the "Check Model" button

The server has been working since Sunday without any problems; just wait until the evaluation of your previously submitted model is finished.

AI Benchmark has three modes: CPU, GPU, and NNAPI. Which mode should be used when running on my device? It is worth noting that some types of models run much faster on the CPU than with NNAPI.

This completely depends on the phone / SoC model you are using - many of them are not able to accelerate quantized models efficiently. Please check the ranking table and paper for more details.
 

YxChen

New member
The server is working from Sunday without any problems, just wait till the evaluation of the previously submitted model is finished.



This completely depends on the phone / SoC model you are using - many of them are not able to accelerate quantized models efficiently. Please check this ranking table and paper for more details.
So which mode should be used when running on my device?
 

YxChen

New member
Could you please tell me when the specific scoring rules will be published? What is the ratio of accuracy to running time in calculating the final score?
 

YxChen

New member
I have two questions:
1. What is the form of submission during the test phase? Is it the same as in the validation phase, i.e., downloading test images, testing offline, and submitting a results file (TXT) and model file (TFLite)?
2. In the validation phase, we only need to submit the top-1 result. Do we need to submit the top-3 results in the testing phase?
 

Andrey Ignatov

Administrator
Staff member
What is the form of submission during the test phase? Is it the same as in the validation phase, i.e., downloading test images, testing offline, and submitting a results file (TXT) and model file (TFLite)?

In the test phase, you will need to submit only your TFLite model that will be used for runtime and top-1 / top-3 accuracy evaluation. Test images will not be provided.
 

sayannath

New member
WARNING: Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap.
Traceback (most recent call last):
File "/tmp/codalab/tmpjjwPDE/run/program/evaluation.py", line 39, in <module>
if len(prediction_list) != 600:
NameError: name 'prediction_list' is not defined

I am submitting the zip file named submission.zip. If we extract the zip file, it contains only three files: readme.txt, results.txt, and tflite_model.tflite. I am still getting this error.
 

andrewBatutin

New member
"The runtime of your final model will be evaluated on the Apple Bionic platform". Which one among CPU, GPU, DSP and NPU will the final runtime test of the tflite model run on?
Also which Bionic SoC will be used?
There is also a question of how the iOS app with tflight model will be built - as debug or release?
I guess as long as all contestants are evaluated on a same platform/device/build settings, right?
thnx
 

Andrey Ignatov

Administrator
Staff member
Also, how will the final inference time be calculated?
On a real device?
Using Lightspeed?

The Lightspeed server pushes your model to and evaluates its runtime on a real device, so these two options are equivalent.

Which one among CPU, GPU, DSP, and NPU will the final runtime test of the TFLite model run on?

The TFLite Core ML delegate is used, which is more or less equivalent to the NNAPI option in the AI Benchmark app.

Which Bionic SoC will be used?

The A11 Bionic. In the final challenge paper, we will also report the results on the A14 SoC.

I guess it's fine as long as all contestants are evaluated with the same platform/device/build settings, right?

Yes, all solutions are evaluated on the same iPhone device. You can already get your model's runtime on this phone by uploading your TFLite file to the previously specified server.
 

sayak

New member
Hi. Could someone clarify what is meant by "test frames", as mentioned in the last point of the submission guidelines?

> Download link(s) to the FULL results of ALL of the test frames

Do test frames refer to the files we submit to the server, consisting of readme.txt, results.txt, and the TFLite model file? If not, what should they be?

Also, referring back to the following question:

> Also for the testing phase what input should model expect? Does the model need to handle resize and normalization?

Another question: there are no test images released. In that case, what do we need to submit for the testing phase (apart from the email)?
 

YxChen

New member
The email says that "model.tflite" should be:
- a fully-quantized INT8 model (with no FP32 / FP16 / INT16 ops).

I want to know whether the input image must be INT8, or just the model weights.
When I use QAT (quantization-aware training), the input format is FLOAT32; is that OK?
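Whether a float interface is acceptable here is for the organizers to answer, but as background: in the TFLite converter, the interface dtype is controlled separately from the internal op types. The sketch below (a toy model, not the competition network; post-training quantization rather than QAT, for brevity) shows the two flags that decide whether the input/output tensors are INT8 or stay FP32.

```python
import numpy as np
import tensorflow as tf

# Toy placeholder model, not the competition architecture.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8, 8, 3)),
    tf.keras.layers.Conv2D(4, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(30, activation="softmax"),
])

def representative_dataset():
    # A handful of samples in the model's expected FP32 input range,
    # used to calibrate the quantization ranges.
    for _ in range(10):
        yield [np.random.rand(1, 8, 8, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Require every internal op to have an INT8 implementation.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
# These two lines make the model's input/output tensors INT8 as well;
# omit them and the interface stays FP32 around an all-INT8 graph.
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_model = converter.convert()
```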
 