Real-Time Video Super-Resolution Challenge

wqIsGood

New member
Dear all, I see the runtime environment is a Dimensity smartphone, which has a powerful APU capable of accelerating floating-point and quantized networks. Could you please tell me which Dimensity chip (e.g., Dimensity 1000 or Dimensity 9000) you use? Or which OPPO series? Thanks in advance!
 

videosrboy

New member
Hi,

I recently tried to submit my zip files. Each time, the upload would say "submitted" with a file size of 0 MB. Now, 12 hours later, the submissions show as failed, and the error log from the Python script on your server says the "file is not a zip file".

I have made submissions in the past for NTIRE, so I am fairly confident my zip file is valid. Also, the leaderboard is currently empty, so I am wondering if other contestants are having the same issue.

Also, as a second question, is the .tflite file required to follow the same naming and dimension conventions as those specified for the full test evaluation (model.tflite with 30 input channels)?

Thanks
 

videosrboy

New member
CodaLab is showing that the development phase now starts on June 26th and the results table is empty. I was wondering if it has been deliberately taken offline, or if this information is somehow unique to me?
 

Andrey Ignatov

Administrator
Staff member
CodaLab is showing that the development phase now starts on June 26th and the results table is empty. I was wondering if it has been deliberately taken offline, or if this information is somehow unique to me?

Yes, an email with the corresponding announcement was sent this week. Note that MediaTek will additionally give an introductory talk to this challenge on Monday at 9:45am PT.
 

Emolly

New member
Hi, I see the "Real-Time Video Super-Resolution Challenge" webpage doesn't say whether the TFLite GPU Delegate or Android NNAPI is used, and the running times of the two are not the same. Could you tell me whether it's the TFLite GPU Delegate or Android NNAPI? Thank you!
 

eisblume

New member
Dear organizers, thank you for organizing this interesting challenge. I have some inquiries:
1. Is quantization (FP8/16) necessary for the submitted TFLite model? How are quantized and non-quantized models compared in the final rating?
2. Is it allowed to use extra data for training? Will this affect the final score?
3. What is the exact formula for calculating the score?

Thanks, looking forward to your reply!
😊
 

erick

New member
Hi, I noticed that the per-frame latency in the results list is not consistent with what I measured locally on a mobile device.
How is the time on the list measured: on a desktop GPU or a smartphone?
 

ManYu

New member
Hi, I see the "Real-Time Video Super-Resolution Challenge" webpage doesn't say whether the TFLite GPU Delegate or Android NNAPI is used, and the running times of the two are not the same. Could you tell me whether it's the TFLite GPU Delegate or Android NNAPI? Thank you!
The runtime is measured with the MediaTek Neuron delegate.
 

ManYu

New member
Dear organizers, thank you for organizing this interesting challenge. I have some inquiries:
1. Is quantization (FP8/16) necessary for the submitted TFLite model? How are quantized and non-quantized models compared in the final rating?
2. Is it allowed to use extra data for training? Will this affect the final score?
3. What is the exact formula for calculating the score?

Thanks, looking forward to your reply!
😊
1. No, quantization is not necessary for your submission. All models will be rated under the same criteria.
2. Yes, you can use extra data for training.
3. The final scoring formula will be released soon.
 

ManYu

New member
Hi, I noticed that the per-frame latency in the results list is not consistent with what I measured locally on a mobile device.
How is the time on the list measured: on a desktop GPU or a smartphone?
The runtime is measured on a MediaTek Dimensity 9000 mobile device.
 

baky1983

New member
Dear organizers, thank you for organizing this challenge. My submission failed with the error below. Could you please help me check what the problem is? Thanks for your reply!
[Screenshots of the error attached]
 

tryagain

New member
Hi, I get an error when I submit my result. Please help me check what the problem is. Also, I think failed submissions should be excluded from the total count, because the maximum number of submissions is only 40.
[Screenshot of the error attached]
 

erick

New member
The runtime is measured on a MediaTek Dimensity 9000 mobile device.
Thank you for your reply!
As a participant, how can I get the same latency result as in the results list? I did not find a "MediaTek Neuron delegate" option in the AI Benchmark app.
Or can I only get the correct latency by submitting the TFLite model to the validation server?
 

ManYu

New member
Dear organizers, thank you for organizing this challenge. My submission failed with the error below. Could you please help me check what the problem is? Thanks for your reply!
[Screenshots of the error attached]
We are currently checking your model. In the meantime, please don't submit any results yet. Thank you.
 

ManYu

New member
Hi, I get an error when I submit my result. Please help me check what the problem is. Also, I think failed submissions should be excluded from the total count, because the maximum number of submissions is only 40.
[Screenshot of the error attached]
Please follow the submission guidelines specified in "Learn the Details" -> "Evaluation". Your zip archive should not include folders.
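To illustrate the flat-archive requirement, here is a minimal sketch using Python's standard zipfile module. The entry names and contents below are placeholders for illustration, not the actual submission files required by the guideline:

```python
import zipfile

# Write each entry with a bare file name (no directory component),
# so the archive contains no folders.
with zipfile.ZipFile("submission.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("model.tflite", b"<model bytes>")  # placeholder content
    zf.writestr("readme.txt", b"team info")        # placeholder content

# Sanity check: no entry name may contain a path separator.
names = zipfile.ZipFile("submission.zip").namelist()
assert all("/" not in name for name in names), names
```

A common mistake is zipping the containing directory itself (e.g. `zip -r submission.zip results/`), which records `results/model.tflite` instead of `model.tflite`.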
 

ManYu

New member
Thank you for your reply!
As a participant, how can I get the same latency result as in the results list? I did not find a "MediaTek Neuron delegate" option in the AI Benchmark app.
Or can I only get the correct latency by submitting the TFLite model to the validation server?
You can follow the instructions here (https://github.com/MediaTek-NeuroPilot/tflite-neuron-delegate) to build our Neuron Delegate and evaluate on a Dimensity 9000 device.
 

jin23

New member
Dear organizers, I have the same error as the one user "tryagain" posted above.
First trial:
docker: Error response from daemon: error gathering device information while adding custom device "/dev/bus/usb/001/029": no such file or directory.
Second to fourth trials (all with the same zip file):
0%| | 0/300 [00:00
file = open(LOG_NAME, 'r')
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/codalab/tmpCxLJaP/run/output/output.csv'
My zip file does not contain any folders, and I believe I followed the submission guidelines listed in "Learn the Details" -> "Evaluation" exactly.
What is wrong with this?
 

ManYu

New member
Dear organizers, I have the same error as the one user "tryagain" posted above.
First trial,

Second to fourth trials (all with the same zip file),

My zip file does not contain any folders, and I believe I followed the submission guidelines listed in "Learn the Details" -> "Evaluation" exactly.
What is wrong with this?
Could you provide your submission ID?
 

baky1983

New member
Dear organizers, thank you for organizing this challenge. My submission failed with the error below. Could you please help me check what the problem is? Thanks for your reply!
[Screenshots of the error attached]
ERROR: Neuron returned error Unknown Neuron error code: 11 at line 1540 while running computation.

ERROR: Node number 20 (TFLiteNeuronDelegate) failed to invoke.
Benchmarking failed.
error: closed
adb: no devices/emulators found
Traceback (most recent call last):
File "/tmp/codalab/tmpsQMq3o/run/program/evaluate.py", line 219, in <module>
df = pd.read_csv(f'power.{p}.csv', header=None)
File "/opt/conda/lib/python3.8/site-packages/pandas/util/_decorators.py", line 311, in wrapper
return func(*args, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/pandas/io/parsers/readers.py", line 680, in read_csv
return _read(filepath_or_buffer, kwds)
File "/opt/conda/lib/python3.8/site-packages/pandas/io/parsers/readers.py", line 575, in _read
parser = TextFileReader(filepath_or_buffer, **kwds)
File "/opt/conda/lib/python3.8/site-packages/pandas/io/parsers/readers.py", line 934, in __init__
self._engine = self._make_engine(f, self.engine)
File "/opt/conda/lib/python3.8/site-packages/pandas/io/parsers/readers.py", line 1218, in _make_engine
self.handles = get_handle( # type: ignore[call-overload]
File "/opt/conda/lib/python3.8/site-packages/pandas/io/common.py", line 786, in get_handle
handle = open(
FileNotFoundError: [Errno 2] No such file or directory: 'power.HIGH_PERFORMANCE.csv'
 

gauss

New member
Please check your TensorFlow version (we suggest TF 2.6). You can find more detail in your error log (stderr.txt).
Dear organizers, I have done as you suggested with TF 2.6, but the error "docker: Error response from daemon: error gathering device information while adding custom device "/dev/bus/usb/001/041": no such file or directory." still occurs when I submit model.tflite (not model_none.tflite). Strangely, model_none.tflite can be submitted successfully and gets a result.

So what is wrong? My submission ID is wg234567p. I look forward to your reply, thanks a lot.
 

ManYu

New member
Dear organizers, I have done as you suggested with TF 2.6, but the error "docker: Error response from daemon: error gathering device information while adding custom device "/dev/bus/usb/001/041": no such file or directory." still occurs when I submit model.tflite (not model_none.tflite). Strangely, model_none.tflite can be submitted successfully and gets a result.

So what is wrong? My submission ID is wg234567p. I look forward to your reply, thanks a lot.
We are currently checking your results_8_8.zip, which is causing the unexpected behavior. In the meantime, please don't submit any results yet. Thank you.
 

gauss

New member
Dear organizers, for the same TFLite model, the energy consumption per inference is quite different from what I submitted previously. It has increased by 0.5 compared with before. Has the mobile platform or the calculation method changed?

Looking forward to your reply, thanks a lot.
 

eisblume

New member
Dear organizers:
I have noticed that a quantized TFLite model submitted to the CodaLab server shows an unexpected increase in latency and energy consumption (e.g., from <10 ms / 1 W to >200 ms / 100 W). Is such a model not supported, or just not optimized? Are there any restrictions on or references for developing quantized models on your platform? Thanks!
 

gauss

New member
We are currently checking your results_8_8.zip, which is causing the unexpected behavior. In the meantime, please don't submit any results yet. Thank you.
Dear organizers:
What is the final conclusion about this problem? Thank you.
 

ManYu

New member
Does the energy consumption per inference [J] have to be equal to or less than 1?
We don't restrict the energy consumption to be less than 1. However, there is a penalty according to the scoring policy specified in "Learn the Details" -> "Evaluation" if your energy consumption is larger than 1.
 

ManYu

New member
Dear organizers, for the same TFLite model, the energy consumption per inference is quite different from what I submitted previously. It has increased by 0.5 compared with before. Has the mobile platform or the calculation method changed?

Looking forward to your reply, thanks a lot.
The energy consumption is currently measured in an automatic flow, so the result may vary under different device conditions. We will manually measure all submissions in the final phase for a fair comparison.
 

ManYu

New member
Dear organizers:
I have noticed that a quantized TFLite model submitted to the CodaLab server shows an unexpected increase in latency and energy consumption (e.g., from <10 ms / 1 W to >200 ms / 100 W). Is such a model not supported, or just not optimized? Are there any restrictions on or references for developing quantized models on your platform? Thanks!
Please provide your submission ID.
The Optimization and Preference guideline can be found under "Participate" -> "Get Data".
 

gauss

New member
[Screenshots of the error attached]
Dear organizers, could you please check my failed submission? Thanks! My ID is OptimusPrime. (I followed your suggestion and used TF 2.6.)
 

eisblume

New member
Dear organizers, could you help check my last submission, ID: sisyphus4869? The error log is as follows. Thanks!

docker: Error response from daemon: error gathering device information while adding custom device "/dev/bus/usb/001/024": no such file or directory.
 

erick

New member
Dear organizers:
I have submitted an INT8 quantized model to CodaLab. Compared to a float32 model with the same network structure, the quantized TFLite model shows an unexpected increase in latency and energy consumption (e.g., from <10 ms / 1 W to >200 ms / 100 W). This doesn't seem reasonable. Could you check whether the quantized model is working correctly?
BTW, my submission ID is erick. Thank you!
 

eisblume

New member
Dear organizers,
I found that in the new formula, the score of the second example is not calculated correctly (the actual score is 60.65 instead of 80.65). I want to confirm: is this just a calculation mistake, or are the alpha and beta coefficients provided here wrong? Looking forward to your reply. Thanks!

[Screenshot of the scoring formula attached]
 

baky1983

New member
Dear organizers, a new error occurs when I submit my result. Please help me check what the problem is. Thanks for your reply!
[Screenshot of the error attached]
 

ManYu

New member
Dear organizers, could you help check my last submission, ID: sisyphus4869? The error log is as follows. Thanks!

docker: Error response from daemon: error gathering device information while adding custom device "/dev/bus/usb/001/024": no such file or directory.

We have addressed some technical issues with the evaluation server and will have a system update soon.
 

ManYu

New member
Dear organizers:
I have submitted an INT8 quantized model to CodaLab. Compared to a float32 model with the same network structure, the quantized TFLite model shows an unexpected increase in latency and energy consumption (e.g., from <10 ms / 1 W to >200 ms / 100 W). This doesn't seem reasonable. Could you check whether the quantized model is working correctly?
BTW, my submission ID is erick. Thank you!
Your model is not fully quantized, therefore it is run on the CPU.
(The filter weights are int8 but the bias weights are float32, which is not supported on our platform.)
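For reference, below is a minimal sketch of full integer post-training quantization with the TF2 TFLiteConverter. The tiny Conv2D stand-in network, the input shape, and the random calibration data are illustrative assumptions, not the challenge model. In a fully quantized TFLite model, biases are stored as int32, never float32:

```python
import numpy as np
import tensorflow as tf

# Placeholder network standing in for the real super-resolution model.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 30)),
    tf.keras.layers.Conv2D(8, 3, padding="same"),
])

def representative_dataset():
    # Calibration samples covering the expected [0, 255] input range.
    for _ in range(10):
        yield [np.random.uniform(0, 255, size=(1, 32, 32, 30)).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Restrict to int8 ops so no layer silently stays in float32.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

With `TFLITE_BUILTINS_INT8` as the only supported op set, conversion fails loudly if an op cannot be quantized, instead of producing a mixed-precision model that falls back to the CPU.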
 

ManYu

New member
Dear organizers,
I found that in the new formula, the score of the second example is not calculated correctly (the actual score is 60.65 instead of 80.65). I want to confirm: is this just a calculation mistake, or are the alpha and beta coefficients provided here wrong? Looking forward to your reply. Thanks!

[Screenshot of the scoring formula attached]
Thank you for the notice. We have fixed the calculation error on the webpage.
 

UpUpUp

New member
Dear organizers,

1. The energy consumption is negative (-0.01). Could you help check my submission, ID: UpUpUp, submission rank 23?
2. I would like to know the input data type of the model, e.g., the data range ([0, 1] or [0, 255]) and the data type (float or int8)?
 

tryagain

New member
According to the rule "To be eligible for prizes, the participants' score must improve the baseline performance provided by the challenge organizers." in the "Terms and Conditions", does that mean the PSNR must be larger than 27.65 and the energy consumption less than 0.49? Is my understanding correct?
 

ManYu

New member
Dear organizers,

1. The energy consumption is negative (-0.01). Could you help check my submission, ID: UpUpUp, submission rank 23?
2. I would like to know the input data type of the model, e.g., the data range ([0, 1] or [0, 255]) and the data type (float or int8)?
1. There might be some measurement error in the automatic evaluation pipeline. We will manually measure all submissions in the final phase.
2. There are no restrictions on the data type and data range of your model. However, the input data range for inference is [0, 255], so you need to add the preprocessing part into your model.
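One common way to add the [0, 255] preprocessing into the model graph itself is to put a Rescaling layer in front of the network. The placeholder core network, layer sizes, and the [0, 1] target range below are assumptions for illustration:

```python
import numpy as np
import tensorflow as tf

# Placeholder core network that expects inputs in [0, 1].
core = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(30, 3, padding="same"),
])

# Wrap the core so the exported model accepts raw [0, 255] inputs;
# the scaling then happens inside the graph, not in host-side code.
inputs = tf.keras.Input(shape=(32, 32, 30))
x = tf.keras.layers.Rescaling(1.0 / 255.0)(inputs)  # [0, 255] -> [0, 1]
outputs = core(x)
wrapped = tf.keras.Model(inputs, outputs)

# Feed raw-range data directly, as the evaluation server would.
frames = np.random.uniform(0, 255, size=(1, 32, 32, 30)).astype(np.float32)
result = wrapped(frames)
```

The wrapped model is what gets converted to TFLite, so the server's raw [0, 255] frames are handled correctly without any external normalization step.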
 

ManYu

New member
According to the rule "To be eligible for prizes, the participants' score must improve the baseline performance provided by the challenge organizers." in the "Terms and Conditions", does that mean the PSNR must be larger than 27.65 and the energy consumption less than 0.49? Is my understanding correct?
Yes, that is correct.
 

eisblume

New member
Yes, that is correct.

Thanks for your clarification, but I have two questions regarding the rule:

1. Does it mean that the scoring formula in "Learn the Details" -> "Evaluation" is not correct? Because the examples there would all be graded as invalid according to the above statement.


[Screenshot of the scoring examples attached]

2. How can I get a precise energy consumption figure in the development phase (as you said there might be some measurement error in the automatic evaluation pipeline)? And if we cannot get a precise energy score, how will it be handled in the final ranking if my energy on the validation leaderboard is under the threshold but slightly over the borderline in your manual test?

Thanks, looking forward to your reply!
 