Real-Time Video Super-Resolution Challenge

Jieson

New member
The input tensor of the TFLite model should accept 10 subsequent video frames and have a size of [1 x 180 x 320 x 30].
How do I calculate the runtime of the model? Is dividing the AI Benchmark 'Avg latency' output by 10 the runtime per frame?
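For illustration, a rough latency check with the TFLite Python interpreter could look like the sketch below (the model path and run count are placeholders, and whether dividing the average latency by 10 gives the official per-frame runtime is exactly the open question here):

Code:
import time
import numpy as np
import tensorflow as tf

# Hypothetical model path; the [1, 180, 320, 30] input packs 10 RGB frames along channels.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
dummy = np.random.rand(1, 180, 320, 30).astype(np.float32)

times = []
for _ in range(50):  # placeholder run count
    interpreter.set_tensor(inp["index"], dummy)
    start = time.perf_counter()
    interpreter.invoke()
    times.append((time.perf_counter() - start) * 1000.0)

avg_ms = sum(times) / len(times)
print("Avg latency: %.2f ms (%.2f ms per frame if divided by 10)" % (avg_ms, avg_ms / 10))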
 

Jieson

New member
Hi, can I submit validation dataset results and get a score now? I see there is only one submitted result on CodaLab.
Are there any examples of the ZIP archive submission format?
 

Finn

New member
Hello, I'm trying to submit my results, but it failed with this error:
Traceback (most recent call last):
File "/tmp/codalab/tmp2Ey5ZM/run/program/evaluate.py", line 69, in
print("Energy: {}".format(energy))
NameError: name 'energy' is not defined

There are no directories in the ZIP archive, only files.
The contents of the zip (302 files in total, including 300 PNGs):
xxx.tflite, readme.txt, 000_00000009.png, ...., 029_00000099.png

The content of readme.txt:
Runtime per frame [ms] : 50.00
Mobile device: Nubia
Acceleration : TFLite GPU Delegate
Model size [Mb] : 0.083
Extra Data [1] / No Extra Data [0] : 0
Other description : No

Can you help me? I don't know what's wrong with my ZIP archive.
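For what it's worth, a flat archive like the one described (files at the root, no directories) can be built with Python's zipfile; the paths below are placeholders:

Code:
import glob
import os
import zipfile

# Write every file at the archive root: arcname strips any directory part.
with zipfile.ZipFile("submission.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    for path in ["xxx.tflite", "readme.txt"] + sorted(glob.glob("results/*.png")):
        zf.write(path, arcname=os.path.basename(path))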
 

Finn

New member
Hello, I'm trying to submit my results, but it failed with this error:
Traceback (most recent call last):
File "/tmp/codalab/tmp2Ey5ZM/run/program/evaluate.py", line 69, in
print("Energy: {}".format(energy))
NameError: name 'energy' is not defined

There are no directories in the ZIP archive, only files.
The contents of the zip (302 files in total, including 300 PNGs):
xxx.tflite, readme.txt, 000_00000009.png, ...., 029_00000099.png

The content of readme.txt:
Runtime per frame [ms] : 50.00
Mobile device: Nubia
Acceleration : TFLite GPU Delegate
Model size [Mb] : 0.083
Extra Data [1] / No Extra Data [0] : 0
Other description : No

Can you help me? I don't know what's wrong with my ZIP archive.
Please give some information. Three days have passed.
 

Andrey Ignatov

Administrator
Staff member
Please give some information. Three days have passed.

Sorry for the delayed response. Unfortunately, CodaLab is down right now, so we cannot access any scripts or server logs. We will look into this issue as soon as it is running again.
 

afromero

New member
Please give some information. Three days have passed.
Hello Finn,
I just had access again to Codalab submissions and I checked yours.
It turns out your readme has the wrong format. The readme expected by the evaluation code should be as follows:

Latency per frame [ms] (-1 for unmeasured models) :
Energy consumption per inference [J] :
GPU memory usage per inference [MiB] :
Model size [MiB] :
Extra Data [1] / No Extra Data [0] :
Other description :

I am aware your readme follows the format described on CodaLab; we will fix that. You can submit again with this minor change :).
 

Finn

New member
Hello Finn,
I just had access again to Codalab submissions and I checked yours.
It turns out your readme has the wrong format. The readme expected by the evaluation code should be as follows:

Latency per frame [ms] (-1 for unmeasured models) :
Energy consumption per inference [J] :
GPU memory usage per inference [MiB] :
Model size [MiB] :
Extra Data [1] / No Extra Data [0] :
Other description :

I am aware your readme follows the format described on CodaLab; we will fix that. You can submit again with this minor change :).
Thank you for the reply.
What does "energy consumption per inference" mean? Can you explain it in more detail?
I get the latency and GPU memory from the AI Benchmark app, but there is no energy item.
How can we get this value?
 

Finn

New member
Hello Finn,
I just had access again to Codalab submissions and I checked yours.
It turns out your readme has the wrong format. The readme expected by the evaluation code should be as follows:

Latency per frame [ms] (-1 for unmeasured models) :
Energy consumption per inference [J] :
GPU memory usage per inference [MiB] :
Model size [MiB] :
Extra Data [1] / No Extra Data [0] :
Other description :

I am aware your readme follows the format described on CodaLab; we will fix that. You can submit again with this minor change :).
Hi, I noticed the readme format described on CodaLab hasn't been fixed yet. Will it be updated later?
 

diggers

New member
Q1: Is the test set only 300 frames? Can the model use the remaining 2700 frames? (The REDS test dataset has 3000 frames.)
Q2: "During the final test phase, the participants will be asked to submit all the restored frames by email": is that 300 or 3000?
 

diggers

New member
The TFLite model is used 1) to check the runtime, and 2) to check whether the submitted results were obtained with it.
That is to say, does the TFLite model need to handle 100-frame inputs later? (The result inferred on 100 frames differs from the result inferred on 10 frames, because VSR models use temporal context across frames.)
 

diggers

New member
And about the scores: the scoring implies that a 1 dB higher PSNR is equivalent to half the runtime. Obviously, if the model is shrunk to approximately nothing, the output PSNR will equal the input PSNR, yet the score would be very high!
So, will you change the rule for a fair comparison?
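To make the described trade-off concrete: a scoring function consistent with "1 dB higher PSNR is worth half the runtime" would grow as 2^PSNR and shrink linearly with runtime. The sketch below is only an illustration of this concern, not the challenge's official formula:

Code:
# Illustrative only -- NOT the official challenge formula.
# 2 ** psnr doubles for each extra dB, and the score halves when runtime doubles,
# so +1 dB and 2x runtime cancel out, matching the trade-off described above.
def score(psnr_db: float, runtime_ms: float) -> float:
    return 2.0 ** psnr_db / runtime_ms

# A near-empty pass-through model keeps the input PSNR but has near-zero runtime,
# so its score diverges -- which is exactly the loophole being pointed out.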
 

Finn

New member
Hi,
Q1: Is the value of the input data limited to 0-1 or 0-255, or does the user specify it in their readme?
Q2: What is the data format of the input frames?
input: [1, 180, 320, 30]
1) RGB or BGR?
2) If the format is RGB, are the channels grouped first (rr...r gg...g bb...b) or interleaved last (rgb rgb ... rgb)?
 

diggers

New member
My understanding of the final evaluation is:
1. the PSNR results of our original model (e.g., PyTorch code) on 3000 frames;
2. the speed of the provided TFLite model on 10 frames, where we must ensure the TFLite and PyTorch outputs are exactly the same when given 10 frames.
Is this understanding correct? If there is any inconsistency, please point it out. Thank you!
 

Andrey Ignatov

Administrator
Staff member
@diggers, yes, that's correct. We might additionally ask you to submit a TFLite model processing 100 subsequent frames (for measuring PSNR scores only).
 

afromero

New member
Hi,
Q1: Is the value of the input data limited to 0-1 or 0-255, or does the user specify it in their readme?
Q2: What is the data format of the input frames?
input: [1, 180, 320, 30]
1) RGB or BGR?
2) If the format is RGB, are the channels grouped first (rr...r gg...g bb...b) or interleaved last (rgb rgb ... rgb)?
Hello: 0-255 RGB images, and the user specifies the channel format: (rgb rgb ... rgb) or (rrrr... gggg... bbbb...). For simplicity, the user can also specify 0-1 or 0-255.
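As a sketch of the two channel layouts mentioned (here frames is a placeholder list of 10 HxWx3 RGB arrays):

Code:
import numpy as np

frames = [np.zeros((180, 320, 3), dtype=np.float32) for _ in range(10)]  # placeholder frames

# Interleaved layout (rgb rgb ... rgb): channel axis is [r0 g0 b0 r1 g1 b1 ...]
interleaved = np.concatenate(frames, axis=-1)[None]  # shape [1, 180, 320, 30]

# Grouped layout (rrr... ggg... bbb...): all reds, then all greens, then all blues
planar = np.concatenate(
    [np.stack([f[..., c] for f in frames], axis=-1) for c in range(3)], axis=-1
)[None]  # shape [1, 180, 320, 30]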
 

Finn

New member
According to the CodaLab description, the model will be run using AI Benchmark (FP16 mode + TFLite GPU delegate).
It does not say whether the model must be float32, float16, or int8.
Sometimes an int8 quantized model is faster than the others.
Must the model be float16?
 

Andrey Ignatov

Administrator
Staff member
Must the model be float16?

You should submit the original FP32 model without doing any quantization (including FP16 post-training quantization). The FP32 model is converted to FP16 automatically by the TFLite GPU delegate.

Sometimes an int8 quantized model is faster than the others.

No, that's not the case for the TFLite GPU delegate; please check my response here.
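In other words, converting the trained Keras model without any quantization settings should already produce the required FP32 .tflite. A minimal sketch, where model is a placeholder for your trained network:

Code:
import tensorflow as tf

# 'model' is assumed to be your trained tf.keras model.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Deliberately no converter.optimizations and no target_spec.supported_types:
# the TFLite GPU delegate casts the FP32 graph to FP16 by itself at runtime.
with open("model.tflite", "wb") as f:
    f.write(converter.convert())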
 

Finn

New member
Hello, are you sure training cannot use the validation dataset?
Will you evaluate the submissions on your private test data?


 

yubinnzeng

New member
Hello Finn,
I just had access again to Codalab submissions and I checked yours.
It turns out your readme has the wrong format. The readme expected by the evaluation code should be as follows:

Latency per frame [ms] (-1 for unmeasured models) :
Energy consumption per inference [J] :
GPU memory usage per inference [MiB] :
Model size [MiB] :
Extra Data [1] / No Extra Data [0] :
Other description :

I am aware your readme has the format described in Codalab, we will fix that. You can submit again with this minor change :).
I tried to submit with the given readme format, but CodaLab keeps raising the error below:

Traceback (most recent call last):
File "/tmp/codalab/tmpGW2Itw/run/program/evaluate.py", line 68, in
print("Latency: {}".format(latency))
NameError: name 'latency' is not defined

How can I fix it?
Thx.
 

afromero

New member
I tried to submit with the given readme format, but CodaLab keeps raising the error below:

Traceback (most recent call last):
File "/tmp/codalab/tmpGW2Itw/run/program/evaluate.py", line 68, in
print("Latency: {}".format(latency))
NameError: name 'latency' is not defined

How can I fix it?
Thx.
Hello, I just checked your last submission and it has the form:

Code:
Runtime per frame [ms] : 
Mobile device: 
Acceleration : 
Model size [Mb] : 
Extra Data [1] / No Extra Data [0] : 0
Other description :

But according to the competition, it should have a different one:

Code:
Latency per frame [ms] (-1 for unmeasured models) : 10.43
Energy consumption per inference [J] : -1
GPU memory usage per inference [MiB] : -1
Model size [MiB] : 12
Extra Data [1] / No Extra Data [0] : 1
Other description : Solution based on ....
 

yubinnzeng

New member
Hello, I just checked your last submission and it has the form:

Code:
Runtime per frame [ms] :
Mobile device:
Acceleration :
Model size [Mb] :
Extra Data [1] / No Extra Data [0] : 0
Other description :

But according to the competition, it should have a different one:

Code:
Latency per frame [ms] (-1 for unmeasured models) : 10.43
Energy consumption per inference [J] : -1
GPU memory usage per inference [MiB] : -1
Model size [MiB] : 12
Extra Data [1] / No Extra Data [0] : 1
Other description : Solution based on ....
I have re-submitted with the officially provided readme.txt, and it raises exactly the same error.
 

Finn

New member
Hi, the use of extra data to train the model may affect the final PSNR.
Does it affect the rankings? Will submissions with and without external data be ranked separately?
 

Finn

New member
Final Submission Instructions
"model_none.tflite" should be:
- a floating-point FP32 model (no FP16 / INT16 / INT8 quantization).
- with an input tensor of size [1, None, None, 30] taking 10 RGB images as an input.
- with an output tensor of size [1, None, None, 30] producing the 10 final super-resolved image results.

The provided TFLite models will be applied to test images having the same type as the validation data, their outputs and runtime will be used to compute the final results of your submission. If your model performs any image pre-processing (rescaling, normalization, etc.) - it should be integrated directly into it, no additional scripts are accepted.

Q1: How can I generate model_none.tflite?
The model can support any resolution, but when I convert TF to TFLite, a fixed resolution is required.

Q2: The rules don't limit the input to 0-1 or 0-255. If the input of my model is [0,1], there is a normalization step for the images.
So the normalization should be integrated inside the model. Is that right?
 

afromero

New member
Final Submission Instructions
"model_none.tflite" should be:
- a floating-point FP32 model (no FP16 / INT16 / INT8 quantization).
- with an input tensor of size [1, None, None, 30] taking 10 RGB images as an input.
- with an output tensor of size [1, None, None, 30] producing the 10 final super-resolved image results.

The provided TFLite models will be applied to test images having the same type as the validation data, their outputs and runtime will be used to compute the final results of your submission. If your model performs any image pre-processing (rescaling, normalization, etc.) - it should be integrated directly into it, no additional scripts are accepted.

Q1: How can I generate model_none.tflite?
The model can support any resolution, but when I convert TF to TFLite, a fixed resolution is required.

Q2: The rules don't limit the input to 0-1 or 0-255. If the input of my model is [0,1], there is a normalization step for the images.
So the normalization should be integrated inside the model. Is that right?
Andrey can assist you with q1.

Regarding q2, we are assuming the input/output to be RGB [0,1] concatenated [rgb rgb ... rgb] as we mentioned here. You can also provide that information in the readme.
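A sketch of both answers combined: a tf.keras model with a [1, None, None, 30] input and the normalization (plus the matching rescale back) baked into the graph. It assumes raw 0-255 pixels are fed while the network works in [0,1]; adapt the two scaling lines to whichever range you declare in the readme. The Conv2D layer is a placeholder for the real network:

Code:
import tensorflow as tf

inp = tf.keras.Input(shape=(None, None, 30), batch_size=1)  # dynamic H and W
x = inp / 255.0                                             # normalization inside the model
x = tf.keras.layers.Conv2D(30, 3, padding="same")(x)        # placeholder for the real network
out = tf.clip_by_value(x, 0.0, 1.0) * 255.0                 # back to the pixel range
model = tf.keras.Model(inp, out)

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.experimental_new_converter = True                 # required for None dimensions
with open("model_none.tflite", "wb") as f:
    f.write(converter.convert())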
 

Jieson

New member
Set the experimental_new_converter option to True when converting a model with None dimensions.
Hi, I have the experimental_new_converter option set to True (it is the default), but I still cannot convert the TF model to model_none.tflite. Are there any other parameters that need to be set?
My TF version is 2.4.
 

Andrey Ignatov

Administrator
Staff member
Hi, I have the experimental_new_converter option set to True (it is the default), but I still cannot convert the TF model to model_none.tflite. Are there any other parameters that need to be set?

That's very strange, can you also try to do the model conversion with TF-nightly? If it fails, please also attach the logs.
 

Jieson

New member
That's very strange, can you also try to do the model conversion with TF-nightly? If it fails, please also attach the logs.
1. We cannot convert our model to 'model_none.tflite', while 'model.tflite' converts successfully.
The error messages are shown in the file 'log.txt' below.
The network is implemented with tf.keras; it seems tf.transpose ops with None dims are not supported when converting to TFLite.

2. The model conversion code is as follows:

Code:
converter = tf.lite.TFLiteConverter.from_keras_model(model_tf.build(input_shape=(1,None,None,30)))
converter.target_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
converter.experimental_new_converter = True
tflite_model = converter.convert()


log.txt:
Traceback (most recent call last):
File "/opt/conda/lib/python3.7/site-packages/tensorflow/python/framework/tensor_util.py", line 549, in make_tensor_proto
str_values = [compat.as_bytes(x) for x in proto_values]
File "/opt/conda/lib/python3.7/site-packages/tensorflow/python/framework/tensor_util.py", line 549, in <listcomp>
str_values = [compat.as_bytes(x) for x in proto_values]
File "/opt/conda/lib/python3.7/site-packages/tensorflow/python/util/compat.py", line 87, in as_bytes
(bytes_or_text,))
TypeError: Expected binary or unicode string, got None

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 419, in build
self.call(x, **kwargs)
File "arch_shuffle_tf.py", line 83, in call
x_input = tf.transpose(tf.reshape(x_input,(x_input.shape[1],x_input.shape[2],T,C)),(2,0,1,3)) # T H W C
File "/opt/conda/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py", line 201, in wrapper
return target(*args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/tensorflow/python/ops/array_ops.py", line 195, in reshape
result = gen_array_ops.reshape(tensor, shape, name)
File "/opt/conda/lib/python3.7/site-packages/tensorflow/python/ops/gen_array_ops.py", line 8378, in reshape
"Reshape", tensor=tensor, shape=shape, name=name)
File "/opt/conda/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py", line 525, in _apply_op_helper
raise err
File "/opt/conda/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py", line 515, in _apply_op_helper
preferred_dtype=default_dtype)
File "/opt/conda/lib/python3.7/site-packages/tensorflow/python/profiler/trace.py", line 163, in wrapped
return func(*args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 1540, in convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "/opt/conda/lib/python3.7/site-packages/tensorflow/python/framework/constant_op.py", line 339, in _constant_tensor_conversion_function
return constant(v, dtype=dtype, name=name)
File "/opt/conda/lib/python3.7/site-packages/tensorflow/python/framework/constant_op.py", line 265, in constant
allow_broadcast=True)
File "/opt/conda/lib/python3.7/site-packages/tensorflow/python/framework/constant_op.py", line 283, in _constant_impl
allow_broadcast=allow_broadcast))
File "/opt/conda/lib/python3.7/site-packages/tensorflow/python/framework/tensor_util.py", line 553, in make_tensor_proto
"supported type." % (type(values), values))
TypeError: Failed to convert object of type <class 'tuple'> to Tensor. Contents: (None, None, 10, 3). Consider casting elements to a supported type.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "arch_shuffle_tf.py", line 224, in <module>
converter = tf.lite.TFLiteConverter.from_keras_model(model_tf.build(input_shape=(1,None,None,30)))
File "/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 421, in build
raise ValueError('You cannot build your model by calling `build` '
 

diggers

New member
Will the final results be checked using our provided TFLite models (for PSNR)? Due to the nature of VSR models, only 10 frames cannot reproduce our 100-frame results.
 

Andrey Ignatov

Administrator
Staff member
The model conversion code is as follows

The problem is that you are using the tf.lite.OpsSet.SELECT_TF_OPS option, which allows the interpreter to use standard TF (not TFLite) ops. Were you actually able to run the model with a static input size using AI Benchmark (the standard build, not the nightly one for developers)?
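That is, dropping the SELECT_TF_OPS entry and keeping only TFLite builtins makes the converter fail fast on any unsupported op instead of falling back to TF kernels that a standard AI Benchmark build presumably cannot execute. On the converter from the post above (note that in TF2 the documented attribute is target_spec.supported_ops, not converter.target_ops):

Code:
# Restrict conversion to pure TFLite builtin ops (no TF kernel fallback).
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]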
 