All questions related to Real-Time Video Super-Resolution Challenge can be asked in this thread.
Dividing the AI-Benchmark output 'Avg latency' by 10 gives the runtime per frame?
Yes.
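For illustration, a worked example of that arithmetic (the 500 ms figure is made up):
Code:
# AI-Benchmark's 'Avg latency' is measured for the whole 10-frame input,
# so dividing it by 10 gives the per-frame runtime.
avg_latency_ms = 500.0
runtime_per_frame_ms = avg_latency_ms / 10  # -> 50.0 ms per frame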
Hello, I'm trying to submit the results, but it failed.
Error:
Traceback (most recent call last):
File "/tmp/codalab/tmp2Ey5ZM/run/program/evaluate.py", line 69, in
print("Energy: {}".format(energy))
NameError: name 'energy' is not defined
There are no dirs in the ZIP archive, only files.
The contents of the zip (302 files in total, including 300 PNGs):
xxx.tflite, readme.txt, 000_00000009.png, ..., 029_00000099.png.
The contents of readme.txt:
Runtime per frame [ms] : 50.00
Mobile device: Nubia
Acceleration : TFLite GPU Delegate
Model size [Mb] : 0.083
Extra Data [1] / No Extra Data [0] : 0
Other description : No
Can you help me?
I don't know what's wrong with my ZIP archive.
Please give some information. Three days have passed.
Hello Finn,
I just had access again to Codalab submissions and I checked yours.
It turns out your readme has the wrong format. The readme expected by the evaluation code should be as follows:
Latency per frame [ms] (-1 for unmeasured models) :
Energy consumption per inference [J] :
GPU memory usage per inference [MiB] :
Model size [MiB] :
Extra Data [1] / No Extra Data [0] :
Other description :
I am aware your readme has the format described in Codalab, we will fix that. You can submit again with this minor change.
Thank you for the reply. How can we get this value?
Hi, I found the readme in Codalab hasn't been fixed. Will it change back to the original format later?
Can the model use the remaining 2700 frames?
Yes.
"all the restored frames by email", is that 300 or 3000?
3000.
So the tflite model is only used to test the forward time (when a video has only 10 frames), not the final result? Thank you for your reply.
That is to say, the tflite model needs to handle 100-frame inputs later? (The result inferred on 100 frames differs from the result inferred on 10 frames, because of the VSR model.)
The TFLite model is used 1) to check the runtime, and 2) to check whether the submitted results were obtained with it.
When will the runtime validation server be online?
Does the TFLite GPU delegate support the tf.nn.depth_to_space op directly (i.e., not run on the CPU)?
depth_to_space is a logical operation, thus its CPU and GPU performance won't be much different.
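For reference, a minimal sketch of depth_to_space as the upsampling step for this challenge's tensor layout (the x4 factor and shapes are assumptions, not challenge specifications):
Code:
import tensorflow as tf

# A conv head producing 30 * 4 * 4 = 480 channels at the input resolution;
# depth_to_space rearranges the depth into a x4 larger spatial grid.
x = tf.random.uniform([1, 180, 320, 480])
y = tf.nn.depth_to_space(x, block_size=4)  # -> [1, 720, 1280, 30]
print(y.shape)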
Hi,
Q1: Is the value of the input data limited to 0-1 or 0-255, or does the user specify it in their readme?
Q2: What is the data format of the input frame?
input: [1,180,320,30]
1) rgb or bgr?
2) if the format is rgb, are the channels first (rr...r gg...g bb...b) or last (rgb rgb ... rgb)?
Hello, the inputs are 0-255 RGB images, and the user specifies the channel format: (rgb rgb ... rgb) or (rrrr... gggg... bbbb...). For simplicity, the user can also specify 0-1 or 0-255.
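To make the channel-last (rgb rgb ... rgb) packing concrete, a minimal sketch (file names and libraries are illustrative, not prescribed by the challenge):
Code:
import numpy as np
from PIL import Image

# Stack 10 consecutive 0-255 RGB frames along the channel axis,
# giving the [1, 180, 320, 30] input tensor described above.
frames = [np.asarray(Image.open("000_0000000%d.png" % i), dtype=np.float32)
          for i in range(10)]                # each frame: [180, 320, 3]
inp = np.concatenate(frames, axis=-1)[None]  # -> [1, 180, 320, 30]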
I found that the OPPO Find X2 is equipped with a Snapdragon 865 SoC. So, is the test platform the 855 or the 865?
It is the 865. We will fix the typo in the title.
Does the model have to be real-time (24 fps)?
Must the model be float16? Sometimes an int8-quantized model is faster.
As the purpose of the challenge is real-time performance, we have a metric that computes this on the target device: https://competitions.codalab.org/competitions/28112#learn_the_details-evaluation
Hello, why is there no response when I press the submit button in CodaLab?
I tried to submit with the given readme format, but CodaLab keeps raising the error below:
Traceback (most recent call last):
File "/tmp/codalab/tmpGW2Itw/run/program/evaluate.py", line 68, in
print("Latency: {}".format(latency))
NameError: name 'latency' is not defined
How can I fix it?
Thx.
Hello, I just checked your last submission and it has the form:
Code:
Runtime per frame [ms] :
Mobile device:
Acceleration :
Model size [Mb] :
Extra Data [1] / No Extra Data [0] : 0
Other description :
But according to the competition, it should have a different one:
Code:
Latency per frame [ms] (-1 for unmeasured models) : 10.43
Energy consumption per inference [J] : -1
GPU memory usage per inference [MiB] : -1
Model size [MiB] : 12
Extra Data [1] / No Extra Data [0] : 1
Other description : Solution based on ....
I have re-submitted with the officially provided readme.txt, and it raises exactly the same error.
Can you please remove the __MACOSX folder and try again? I did an internal run and that was the reason.
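A hypothetical sketch (not the organizers' actual evaluate.py) of why a readme with unexpected keys, or a stray __MACOSX/._readme.txt picked up instead of the real file, ends in the NameError seen above:
Code:
def parse_readme(path):
    for line in open(path):
        key, _, value = line.partition(":")
        if key.strip() == "Latency per frame [ms] (-1 for unmeasured models)":
            latency = float(value)  # assigned only when the expected key is found
    return latency  # NameError: name 'latency' is not defined, if nothing matched

print("Latency: {}".format(parse_readme("readme.txt")))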
Will you evaluate the submission on your private test data?
Will submissions that use external data and those that do not be ranked separately?
Final Submission Instructions
"model_none.tflite" should be:
- a floating-point FP32 model (no FP16 / INT16 / INT8 quantization).
- with an input tensor of size [1, None, None, 30] taking 10 RGB images as an input.
- with an output tensor of size [1, None, None, 30] producing the 10 final super-resolved image results.
The provided TFLite models will be applied to test images having the same type as the validation data, their outputs and runtime will be used to compute the final results of your submission. If your model performs any image pre-processing (rescaling, normalization, etc.) - it should be integrated directly into it, no additional scripts are accepted.
Q1: How can I generate model_none.tflite?
The model can support any resolution, but when I convert tf to tflite , the resolution is required.
Q2: The rules don't limit the input to 0-1 or 0-255. If the input of my model is [0,1], there is a normalization step for the images.
So the normalization should be integrated inside the model, is that right?
Andrey can assist you with q1.
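Regarding Q2, a minimal sketch of folding the normalization into the graph so the submitted FP32 model consumes raw 0-255 inputs directly (layer sizes are illustrative):
Code:
import tensorflow as tf

inp = tf.keras.Input(shape=(None, None, 30))          # 10 stacked 0-255 RGB frames
x = tf.keras.layers.Lambda(lambda t: t / 255.0)(inp)  # normalization inside the model
x = tf.keras.layers.Conv2D(480, 3, padding="same", activation="relu")(x)
out = tf.keras.layers.Lambda(lambda t: tf.nn.depth_to_space(t, 4))(x)
model = tf.keras.Model(inp, out)                      # output: [1, None, None, 30]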
Set the experimental_new_converter option to True when converting a model with None dimensions.
Hi, I have set the experimental_new_converter option to True by default, but I still cannot convert tf to model_none.tflite. Are there any other parameters that need to be set?
1. We cannot convert our model to 'model_none.tflite', while 'model.tflite' converts successfully.
That's very strange. Can you also try to do the model conversion with TF-nightly? If it fails, please also attach the logs.
The model conversion code is as follows:
The tf.lite.OpsSet.SELECT_TF_OPS option allows the interpreter to use standard TF (not TFLite) ops. Were you actually able to run the model with a static input size using AI Benchmark (the standard build, not the nightly one for developers)?
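For reference, a minimal sketch of the conversion path discussed above (it assumes a Keras `model` with None spatial dimensions, e.g. the one from the earlier sketch, and avoids tf.lite.OpsSet.SELECT_TF_OPS so the graph stays pure TFLite ops):
Code:
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.experimental_new_converter = True  # needed for None dimensions
with open("model_none.tflite", "wb") as f:
    f.write(converter.convert())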