Monocular Depth Estimation Challenge

Park

New member
Traceback (most recent call last):
  File "./data/mobile_ai_2022_monocular_depth_estimation_challenge/scoring_programs/0/evaluation.py", line 116, in
    raise Exception('Target evaluation server is not available')
Exception: Target evaluation server is not available

The evaluation failed on 2022.06.13 at 09:56 a.m.
 

Raymond

New member
Traceback (most recent call last):
  File "./data/mobile_ai_2022_monocular_depth_estimation_challenge/scoring_programs/0/evaluation.py", line 197, in
    raise Exception(r.json())
Exception: {'completed': False, 'error': "Didn't find op for builtin opcode 'ADD' version '4'. An older version of this builtin might be supported. Are you using an old TFLite binary with a newer model?\nRegistration failed.\n", 'ready': "Didn't find op for builtin opcode 'ADD' version '4'. An older version of this builtin might be supported. Are you using an old TFLite binary with a newer model?\nRegistration failed.\n", 'result': None}

The evaluation failed on 2022.06.15 at 22:01.
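For anyone hitting the same opcode-version error: it usually means the .tflite file was exported with a newer TensorFlow converter than the TFLite runtime on the evaluation server supports. A minimal sketch for reproducing the check locally, assuming the standard tf.lite.Interpreter API (the model path is a placeholder, not part of the challenge setup):

```python
import tensorflow as tf

# Hypothetical model path -- substitute your own submission file.
MODEL_PATH = "model.tflite"

print("TensorFlow / TFLite runtime version:", tf.__version__)

# Loading the model triggers builtin-op registration; a runtime that is
# too old fails here with "Didn't find op for builtin opcode 'ADD' version '4'".
interpreter = tf.lite.Interpreter(model_path=MODEL_PATH)
interpreter.allocate_tensors()
print("All builtin ops registered successfully.")
```

If the load fails under an older runtime but works with a recent one, re-exporting the model with a TensorFlow version matching the server's runtime (TFLite 2.5.0, per the benchmarking details later in this thread) is the usual fix.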
 

Andrey Ignatov

Administrator
Staff member
Traceback (most recent call last):
  File "./data/mobile_ai_2022_monocular_depth_estimation_challenge/scoring_programs/0/evaluation.py", line 116, in
    raise Exception('Target evaluation server is not available')
Exception: Target evaluation server is not available

The evaluation failed on 2022.06.13 at 09:56 a.m.

Hi @Park, should be fixed now.
 
Dear Organizers,

I have a question about benchmarking.

I have a Raspberry Pi 4 and run my TFLite models on it. However, I obtain very different (and much lower) times than those reported on the leaderboard.
The leaderboard results are very stable, so this does not seem to be measurement noise.

T leaderboard    T mine
0.156            0.080
0.121            0.050
1.566            1.062


Could you share some details about how the benchmarking procedure works?

Ideally, what is the snippet used for benchmarking?

That would be very helpful.

Thank you very much in advance.

Best regards,
Michał
 

Andrey Ignatov

Administrator
Staff member
Is there any information about the intrinsic parameters of the RGB cameras used?
You can find information about the dataset and the captured images in our previous challenge paper.

When will the testing phase end, if it starts on 30 Jun 2022?
The testing phase starts on 22 Jul 2022. The preliminary end date is the 30th of July. Note that the test phase consists only of submitting the final models, code, and reports - you won't be getting results on the test dataset.

Could you share some details about how the benchmarking procedure works?
The model is run on the Raspberry Pi 4 Model B Rev 1.2 multiple times, and the average runtime is then computed. TFLite 2.5.0 with 4 threads is used for inference. Maybe your Raspberry Pi is slightly overclocked, or you have an advanced cooling system that allows for more stable CPU clock rates. In any case, all solutions are tested on the same device, so only their relative performance matters.
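For reference, a minimal benchmarking sketch consistent with the procedure described above (TFLite interpreter, 4 threads, averaged runtime). The model path, dummy input, and run count are assumptions for illustration, not the organizers' actual script:

```python
import time

import numpy as np
import tensorflow as tf

# Hypothetical model path and run count -- not the official scoring script.
MODEL_PATH = "model.tflite"
NUM_RUNS = 50

# TFLite with 4 threads, as described above.
interpreter = tf.lite.Interpreter(model_path=MODEL_PATH, num_threads=4)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
dummy_input = np.random.rand(*input_details["shape"]).astype(
    input_details["dtype"])

# Warm-up run so one-time allocation costs are excluded from the timing.
interpreter.set_tensor(input_details["index"], dummy_input)
interpreter.invoke()

start = time.time()
for _ in range(NUM_RUNS):
    interpreter.set_tensor(input_details["index"], dummy_input)
    interpreter.invoke()
avg_runtime = (time.time() - start) / NUM_RUNS
print(f"Average runtime: {avg_runtime:.3f} s")
```

Differences in thread count, interpreter version, or thermal throttling between runs can easily account for the kind of gap shown in the table above.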
 
Dear Organizers,

Will it be possible to submit solutions to the Development Phase after the Testing Phase starts on 22.07?

Or are these mutually exclusive?

Thanks in advance for your clarification, @Andrey Ignatov!

Best regards,
Michał
 

zhyever

New member
Dear Organizers,

I'd like to ask where to submit the camera-ready version. I have revised the paper following the reviewers' suggestions, but I did not see a submission link in CMT.

Thanks,
Zhenyu
 