All questions related to the Monocular Depth Estimation Challenge can be asked in this thread.
The depth given in the train data is uint16, with values ranging from 0 to 65535. What does the depth value mean in the real world?
These values are distances in millimeters (i.e., 1000 = 1 m).
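For illustration, here is a minimal sketch of decoding such a depth map in Python (the file path is a made-up example; cv2.IMREAD_UNCHANGED is needed so OpenCV keeps the 16-bit values instead of converting to 8-bit):

import cv2
import numpy as np

# Hypothetical path to one of the training depth maps.
depth_raw = cv2.imread("train/depth/0000.png", cv2.IMREAD_UNCHANGED)  # uint16, shape (H, W)
assert depth_raw.dtype == np.uint16
depth_m = depth_raw.astype(np.float32) / 1000.0  # millimeters -> meters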
1. How was the depth generated: by radar, by ToF, or computed from two cameras?
The images were collected using a ZED stereo camera.
Each depth estimation method has its own working range (the min and max distance to the object). Additionally, for some objects, like the sky, the distance cannot be measured by any method, as it is technically infinite. In these cases, the resulting distance values are replaced with zeros and should be ignored during both the training and validation steps.
However, why do some parts of the sky have depth ground truth while other parts are black?
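Based on the note above that zero pixels should be ignored, here is a minimal sketch of a masked training loss (TensorFlow and the L1 objective are my assumptions; any per-pixel loss can be masked the same way):

import tensorflow as tf

def masked_l1_loss(pred, target):
    # 1 where the ground-truth depth is defined, 0 for sky / out-of-range pixels
    mask = tf.cast(target > 0, tf.float32)
    diff = tf.abs(pred - target) * mask
    # Average over valid pixels only; guard against an all-zero mask
    return tf.reduce_sum(diff) / tf.maximum(tf.reduce_sum(mask), 1.0)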
How do we evaluate the prediction results? By using PSNR and SSIM?
Yes: by using RMSE, si-RMSE, log10, and rel losses. We are also preparing an additional tutorial for this challenge now.
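For reference, these metrics are usually computed as in the sketch below (standard definitions from the depth-estimation literature; the organizers' exact implementation may differ):

import numpy as np

def depth_metrics(pred, gt, eps=1e-6):
    # Evaluate only where the ground truth is defined (zeros are invalid)
    mask = gt > 0
    p = np.maximum(pred[mask].astype(np.float64), eps)  # avoid log10(0)
    g = gt[mask].astype(np.float64)
    rmse = np.sqrt(np.mean((p - g) ** 2))
    log10 = np.mean(np.abs(np.log10(p) - np.log10(g)))
    rel = np.mean(np.abs(p - g) / g)
    return {"RMSE": rmse, "log10": log10, "rel": rel}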
I get an error like this in the performance evaluation. Even when I submit the depth PNGs from the train set, I get the same error. I guess there is something wrong with the evaluation system.
WARNING: Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap.
Traceback (most recent call last):
  File "/tmp/codalab/tmpHNjjjw/run/program/evaluation.py", line 104, in <module>
    compute_psnr(ref_im, res_im)
  File "/tmp/codalab/tmpHNjjjw/run/program/evaluation.py", line 41, in compute_psnr
    _open_img(os.path.join(input_dir, 'ref', ref_im)),
  File "/tmp/codalab/tmpHNjjjw/run/program/evaluation.py", line 24, in _open_img
    h, w, c = F.shape
ValueError: need more than 2 values to unpack
Hi, were you able to submit your predictions?
So, where can I see the tutorial?
Hi, the previously submitted results can be seen on the leaderboard.
The validation server is up and running.
We've updated the evaluation scripts on the server, along with some details.
The main ranking measure is Score 1 (si-RMSE); Score 2 (RMSE) is provided for reference.
The scoring scripts we are using are provided here.
Please check them carefully as we ignore far or undefined pixels (according to the ground truth).
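Under its usual definition (Eigen et al., 2014), the si-RMSE used for Score 1 would look roughly like this sketch; this is my reading of the metric, so please verify it against the released scoring scripts:

import numpy as np

def si_rmse(pred, gt, eps=1e-6):
    # Far or undefined pixels are zero in the ground truth and are ignored
    mask = gt > 0
    d = np.log(np.maximum(pred[mask], eps)) - np.log(gt[mask])
    # Scale-invariant RMSE: a variance-like expression in log-depth space
    return np.sqrt(np.mean(d ** 2) - np.mean(d) ** 2)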
I've rerun the latest submissions of zhyl, Minsu.Kwon, and Parkzyzhang.
Only the successful submissions count towards the maximum number of allowed submissions.
Should you have questions please let us know.
Hi,
Please check your successful submission and the one you just submitted. Your output png files should be of the expected format, the same as found in the ground-truth depth images. See https://github.com/numpy/numpy/issues/12744
Hi,
I have just checked the two results and I did not find a difference in format.
See the error code. The error is that your numpy version is too high; you should use numpy <= 1.15.0.
Will the failed ones be ignored?
I ran into the same problem when I submitted my results.
Yes, failed submissions are not counted.
Right now, it's OK.
Make sure that you are submitting the results in the correct format (single-channel 16-bit grayscale images).
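A minimal sketch of writing a prediction in that format with OpenCV (the file name is a placeholder; the key point is casting to uint16 before saving, which makes OpenCV emit a 16-bit PNG):

import cv2
import numpy as np

# Placeholder for your model's output in millimeters, float32, shape (480, 640).
depth_pred = np.zeros((480, 640), dtype=np.float32)
out = np.clip(depth_pred, 0, 65535).astype(np.uint16)  # single channel, 16 bit
cv2.imwrite("0000.png", out)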
I am wondering how we can train the DNN properly with such depth labelling, or am I missing something? Please clarify.
1. Is the input data type float32 or unsigned int8?
2. Is the input image in RGB or BGR?
3. Is the resolution of the tflite output 480x640x1 or 480x640?
Float32.
The size of the model's input tensor should be [1x480x640x1].
The input is [1x480x640x1], so the input is grayscale?
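To make the expected tensor layout concrete, here is a sketch of running such a model with the TFLite Python interpreter (the model path and the dummy input are placeholders):

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")  # placeholder path
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]   # expected shape [1, 480, 640, 1], dtype float32
out = interpreter.get_output_details()[0]

frame = np.zeros((1, 480, 640, 1), dtype=np.float32)  # single-channel input
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()
depth = interpreter.get_tensor(out["index"])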
The resulting inference time differs greatly (it took much longer on a Raspberry Pi 4 than on the phone).
Will this formula be the final ranking formula and remain unchanged?
Is there an extension of final test phase and final submission deadline?
Why does my tflite model run on my PC, but fail when I submit it to the online website?
I have already tested my tflite file in other tools and it performs fine.
Hi, for the Factsheet_Template_MAI2021_Challenges, can a Word template please be provided?
The deadline for the final submission is March 21, 11:59 p.m. UTC; it will not be extended.
Yes, I can run it with AI Benchmark, but it sometimes runs on the online website and sometimes does not. What is the reason for this?
Are your models running fine with AI Benchmark?
Unfortunately, we are using LaTeX templates only, as TeX is the standard format used by all publishers. It is really easy to work with; please refer to this or this tutorial to learn the TeX basics. You can also edit this factsheet template online in Overleaf.
Yes, my model can run in the AI Benchmark apk in both CPU and TFLite GPU modes.
The test phase has started, but it seems that test data hasn't been uploaded yet.
Check the email sent to all challenge participants yesterday.
Ok, send me the links to your models by PM.
This is my model link: https://drive.google.com/file/d/1STNygyY-0HznnxClnWmgFrKTQDfe9iSr/view?usp=sharing
Does that mean the output tensor will be evaluated directly against the ground-truth depth?
BTW, the link to download the factsheet does not work.
When I submit the final test, this happens (see the attached image). Why?
I have the same error.
Do you know how to solve it?
No, I have no idea. Perhaps the organizer has.
If you succeed in submitting, please tell me how you did it. Thank you!
Hi Andrey.
Yes. Probably you need a VPN to download it; I'm also attaching the factsheet template below.