These values are distances in millimeters (i.e., 1000 = 1 m).

The depth given in the train data is uint16, ranging from 0 to 65535. What does the depth value mean in the real world?
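Assuming the millimeter encoding described in the thread, converting the uint16 depth maps to meters (and masking the zero pixels) can be sketched as follows; the function name and shapes are illustrative, not from the challenge kit.

```python
import numpy as np

# Hedged sketch: uint16 depth values are millimeters (1000 -> 1 m), and
# zeros mark pixels with no valid depth (e.g., sky). Names are illustrative.
def depth_mm_to_meters(depth_u16):
    """Convert a uint16 millimeter depth map to float32 meters + validity mask."""
    depth_m = depth_u16.astype(np.float32) / 1000.0
    valid = depth_u16 > 0  # zero means "no measurement", not zero distance
    return depth_m, valid

depth = np.array([[0, 1000], [2500, 65535]], dtype=np.uint16)
meters, valid = depth_mm_to_meters(depth)
print(meters[0, 1])  # 1.0 (1000 mm -> 1 m)
print(valid[0, 0])   # False: the zero pixel carries no ground truth
```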
The images were collected using a stereo ZED camera.

1. How is the depth generated: by radar, by ToF, or computed from the two cameras?
Each depth estimation method has its own working range (minimum and maximum distance to the object). Additionally, for some objects like the sky, the distance cannot be measured by any method, as it is technically infinite. In these cases, the resulting distance values are replaced by zeros and should be ignored during both the training and validation steps.

However, why do some parts of the sky have depth ground truth while other parts are black?
Hi, were you able to submit your predictions?

I get an error like this in the performance evaluation. Even when I submit a depth PNG from the train set, I get the same error. I guess there is something wrong with the evaluation system.
WARNING: Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap.
Traceback (most recent call last):
File "/tmp/codalab/tmpHNjjjw/run/program/evaluation.py", line 104, in
File "/tmp/codalab/tmpHNjjjw/run/program/evaluation.py", line 41, in compute_psnr
File "/tmp/codalab/tmpHNjjjw/run/program/evaluation.py", line 24, in _open_img
h, w, c = F.shape
ValueError: need more than 2 values to unpack
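The traceback suggests the script unpacked a channel axis that a single-channel depth PNG does not have: a 16-bit grayscale image loads as a 2-D (H, W) array, so `h, w, c = F.shape` has only two values to unpack. A minimal sketch of a shape check that tolerates both cases (the helper name is mine, not from evaluation.py):

```python
import numpy as np

# Hedged sketch: `h, w, c = F.shape` fails on a 2-D grayscale depth map.
# A tolerant version treats the missing channel axis as c == 1.
def open_img_shape(F):
    """Return (h, w, c), treating a 2-D single-channel array as c == 1."""
    if F.ndim == 2:        # grayscale / depth: (H, W)
        h, w = F.shape
        return h, w, 1
    h, w, c = F.shape      # color: (H, W, C)
    return h, w, c

depth = np.zeros((480, 640), dtype=np.uint16)  # shape assumed for illustration
print(open_img_shape(depth))  # (480, 640, 1)
```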
So, where can I see the tutorial?
Hi, previously submitted results can be seen on the leaderboard. The validation server is up and running.
We've updated the evaluation scripts on the server, along with some details.
The main ranking measure is Score 1 (si-RMSE). Score 2 (RMSE) is provided for reference.
The scoring scripts we are using are provided here.
Please check them carefully, as we ignore far or undefined pixels (according to the ground truth).
I've rerun the latest submissions of
Only the successful submissions count towards the maximum number of allowed submissions.
Should you have questions, please let us know.
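For intuition, the si-RMSE ranking measure mentioned above can be sketched as a log-space RMSE with the global offset removed, computed over valid (non-zero) ground-truth pixels only. This is one common definition, not necessarily the exact server code; the scoring scripts provided by the organizers are authoritative.

```python
import numpy as np

# Hedged sketch of scale-invariant RMSE (si-RMSE): log-space error with the
# mean log-difference removed, ignoring zero (far / undefined) ground-truth
# pixels. The official scoring script may differ in details.
def si_rmse(pred, gt, eps=1e-6):
    mask = gt > 0                                # ignore undefined pixels
    d = np.log(pred[mask] + eps) - np.log(gt[mask] + eps)
    var = np.mean(d ** 2) - np.mean(d) ** 2      # variance of log-errors
    return float(np.sqrt(max(var, 0.0)))         # clamp tiny negatives

gt = np.array([[1000.0, 2000.0], [0.0, 4000.0]])
print(si_rmse(gt, gt))        # 0.0: perfect prediction
print(si_rmse(2.0 * gt, gt))  # ~0: a pure global scale error is not penalized
```

Note how a prediction that is wrong only by a constant scale factor scores (near) zero, which is exactly what distinguishes si-RMSE from the plain RMSE reference score.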
Please check your successful submission and the one you just submitted.
Your output PNG files should be in the expected format, the same as the ground-truth depth images.
I am wondering how we can train the DNN properly with such depth labelling, or whether I am missing something. Please clarify.
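One common way to handle such labelling during training (an assumption on my part, not something prescribed by the organizers) is to mask the zero ground-truth pixels out of the loss, so the network is never penalized where no depth exists:

```python
import numpy as np

# Hedged sketch: exclude invalid (zero) ground-truth pixels from the training
# loss; a framework version would apply the same mask to tensors instead.
def masked_l1_loss(pred, gt):
    mask = gt > 0                      # zeros = no ground truth (e.g., sky)
    if not mask.any():                 # avoid a mean over an empty selection
        return 0.0
    return float(np.mean(np.abs(pred[mask] - gt[mask])))

gt = np.array([[0.0, 2.0], [3.0, 0.0]])
pred = np.array([[9.0, 2.5], [2.0, 9.0]])  # large errors at invalid pixels
print(masked_l1_loss(pred, gt))  # 0.75: only the two valid pixels count
```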
1. Is the input data type float32 or unsigned int8?
2. Is the input image in RGB or BGR?
3. Is the resolution of the TFLite output 480x640x1 or 480x640?
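One way to answer such format questions yourself is to inspect the model's tensor details. This is a sketch under assumptions: the dicts below only mimic what `tf.lite.Interpreter.get_input_details()` returns, and the dtypes and shapes shown are illustrative, not the challenge specification.

```python
import numpy as np

# Hedged sketch: summarize a TFLite tensor-detail dict as "dtype shape".
# With TensorFlow installed you would obtain the real dicts via:
#   interp = tf.lite.Interpreter(model_path="model.tflite")
#   interp.get_input_details()[0], interp.get_output_details()[0]
def describe_tensor(detail):
    shape = tuple(int(x) for x in detail["shape"])
    return f"{np.dtype(detail['dtype']).name} {shape}"

# Example detail dicts; values here are assumptions for illustration only.
input_detail = {"dtype": np.float32, "shape": np.array([1, 480, 640, 3])}
output_detail = {"dtype": np.float32, "shape": np.array([1, 480, 640, 1])}
print(describe_tensor(input_detail))   # float32 (1, 480, 640, 3)
print(describe_tensor(output_detail))  # float32 (1, 480, 640, 1)
```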
Is there an extension of the final test phase and the final submission deadline?
Why does my TFLite model run on my PC but fail when I submit it to the online website?
I already tested my TFLite file in other tools and it performs fine.
Hi, for the Factsheet_Template_MAI2021_Challenges, can a Word template please be provided?
Yes, I can run it with AI Benchmark, but it sometimes runs on the online website and sometimes does not. What is the reason for this?

The deadline for the final submission is March 21, 11:59 p.m. UTC; it will not be extended.
Are your models running fine with AI Benchmark?
Unfortunately, we are using LaTeX templates only, as TeX is the standard format used by all publishers. It is really easy to work with; please refer to this or this tutorial to learn all the TeX basics. You can also edit this fact sheet template online in Overleaf.
The test phase has started, but it seems that test data hasn't been uploaded yet.
Yes, I can run it with AI Benchmark.
Yes, my model can run in the AI Benchmark APK.
This is my model link: https://drive.google.com/file/d/1STNygyY-0HznnxClnWmgFrKTQDfe9iSr/view?usp=sharing

Check the email sent to all challenge participants yesterday.
Ok, send me the links to your models by PM.
Does that mean the output tensor will be evaluated directly against the ground-truth depth?
BTW, the link to download the factsheet does not work.
Probably you need a VPN to download it; I'm also attaching the factsheet template below.