Bokeh Effect Rendering Challenge

sdruix

New member
Hi,

We have a quick question regarding the AIM challenge. Is it allowed to submit multiple models as one team?

We are asking this for the Bokeh challenge that will be evaluated with MOS/runtime. In this case, the validation metrics can be completely misleading, and we would like to submit two configurations (two different architectures, full/lite). Will this be allowed?

Thank you in advance
 

qmy_mi

New member
Hi,
We are submitting our results on Codalab, but we get the reply “operands could not be broadcast together with shapes (1022,1584,3) (1022,1604,3)”. These shapes are different from the provided test images, so are there detailed submission rules?
 

Andrey Ignatov

Administrator
Staff member
We have a quick question regarding the AIM challenge. Is it allowed to submit multiple models as one team?
For the final submission, you can upload two models - each one for the corresponding competition track. During the development phase, you can submit multiple solutions to get their scores.

We are submitting our results on Codalab, but we get the reply “operands could not be broadcast together with shapes (1022,1584,3) (1022,1604,3)”. These shapes are different from the provided test images, so are there detailed submission rules?
The resolution of the processed images should be exactly the same as the resolution of the input images.
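A quick local sanity check before zipping the results can catch this early; a minimal sketch (the file paths are placeholders, not the challenge layout):

import numpy as np
from PIL import Image

# The processed image must keep exactly the input height and width
inp = np.asarray(Image.open("input/0001.png"))
out = np.asarray(Image.open("results/0001.png"))
assert inp.shape[:2] == out.shape[:2], f"output {out.shape} vs input {inp.shape}"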
 

qmy_mi

New member
For the final submission, you can upload two models - each one for the corresponding competition track. During the development phase, you can submit multiple solutions to get their scores.


The resolution of the processed images should be exactly the same as the resolution of the input images.
So, is there a fixed resolution of the input image for the TFLite model?
 

Andrey Ignatov

Administrator
Staff member
So, is there a fixed resolution of the input image for the TFLite model?

You will need to submit two models: one with fixed input size, and one with dynamic input dimensions. The detailed instructions will be sent at the beginning of the final challenge phase.
 

Haotian Qian

New member
Hello, I would like to know how to correctly store the readme and TFLite files in the ZIP. I put 199 images, readme.txt, and the TFLite file directly in the ZIP as required, but there is an error when submitting the ZIP on Codalab.

The error information is:
“WARNING: Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap.
Traceback (most recent call last):
File "/tmp/codalab/tmpIaPls_/run/program/evaluation.py", line 95, in
raise Exception('Expected %d .png images'%len(ref_pngs))
Exception: Expected 200 .png images”


Why is that? Looking forward to your reply. Thank you very much
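For anyone hitting the same error, a quick check of the archive contents before uploading may help; a minimal sketch (submission.zip is a placeholder name, and the expected count of 200 is taken from the error message above):

import zipfile

EXPECTED = 200  # number of .png files the evaluation script expects

with zipfile.ZipFile("submission.zip") as zf:
    pngs = [n for n in zf.namelist() if n.lower().endswith(".png")]

print(f"{len(pngs)} .png files found (expected {EXPECTED})")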
 

Msss

New member
You will need to submit two models: one with fixed input size, and one with dynamic input dimensions. The detailed instructions will be sent at the beginning of the final challenge phase.
1. There are no instructions regarding the fixed input size even though the final phase has started, and no instructions for the TFLite models either.

2. It looks like it is not necessary to submit the result images, right?

Thank you
 

Haotian Qian

New member
1. There are no instructions regarding the fixed input size even though the final phase has started, and no instructions for the TFLite models either.

2. It looks like it is not necessary to submit the result images, right?

Thank you
Hello, I would like to know how to correctly store the readme and TFLite files in the ZIP. I put 199 images, readme.txt, and the TFLite file directly in the ZIP as required, but there is an error when submitting the ZIP on Codalab.

The error information is:
“WARNING: Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap.
Traceback (most recent call last):
File "/tmp/codalab/tmpIaPls_/run/program/evaluation.py", line 95, in
raise Exception('Expected %d .png images'%len(ref_pngs))
Exception: Expected 200 .png images”


Why is that? Looking forward to your reply. Thank you very much
 

sdruix

New member
Hi,

According to the submission guidelines, it is necessary to submit all the code. Is it possible to participate just by sharing the models and test code (including the model)? What will you use the training code for?
 

Andrey Ignatov

Administrator
Staff member
Thank you! Besides, I would like to know whether the maximum number of submissions in the test phase is 5 in total or 5 per day?

5 in total. Note that test submissions are used just for uploading your final results, and only your last submission will be considered.

Is it possible to participate just by sharing the models and test code (including the model)? What will you use the training code for?

Each participant should upload their training code, which might be used to check the submission for reproducibility. All submissions without training code will be disqualified.
 

Haotian Qian

New member
When submitting the results on Codalab, I accidentally submitted the wrong file on my fifth attempt. Could I send you an email with the final file, or open a new account to submit the correct file? Thank you very much!
 

Haotian Qian

New member
Does TFLite support dynamic input? My initial TensorFlow model has an input with shape [1,None,None,3]. When I convert my model to TFLite, I get an error (see the attached screenshot).
What should I do? Looking forward to your reply, thank you very much.
 

Andrey Ignatov

Administrator
Staff member
Does TFLite support dynamic input?
What should I do?

TFLite supports dynamic input in general, but it seems that you have a blur_image tensor that does not support this. In this case, you can add two additional image resize tensors before and after this layer: the first one would resize the image to the size expected by the blur_image tensor, while the second one would be resizing its output to the original size.
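A rough illustration of this resize-wrapping idea; a minimal sketch assuming TF 2.x with Keras, where the 512x768 working resolution and the core_model placeholder are assumptions, not part of any official code:

import tensorflow as tf
from tensorflow import keras

FIXED_H, FIXED_W = 512, 768  # placeholder resolution expected by the fixed-size part

def wrap_with_resize(core_model):
    # core_model: a Keras model that only accepts FIXED_H x FIXED_W inputs
    inputs = keras.Input(shape=(None, None, 3), name="blur_image")
    orig_hw = tf.shape(inputs)[1:3]                  # dynamic original (H, W)
    x = tf.image.resize(inputs, (FIXED_H, FIXED_W))  # resize to the size the core expects
    x = core_model(x)
    outputs = tf.image.resize(x, orig_hw)            # resize back to the original size
    return keras.Model(inputs, outputs)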
 

xiaokaoji

New member
Hi, are we allowed to do padding outside the model?
Our model needs the input shape to be divisible by a factor. For model_none.tflite with an arbitrary input shape (required in the submission), we cannot know the input shape and therefore cannot determine the pad size.
 

Haotian Qian

New member
TFLite supports dynamic input in general, but it seems that you have a blur_image tensor that does not support this. In this case, you can add two additional image resize tensors before and after this layer: the first one would resize the image to the size expected by the blur_image tensor, while the second one would be resizing its output to the original size.
inputs = keras.layers.Input(name='blur_image', shape=(None, None, 3))
inputs1 = model1(inputs, 16, 9)
Thank you! blur_image is the input of our model, not an op or a layer. The shape of the input is [1,None,None,3] here, but it seems that TFLite does not support an input shape of [1,None,None,3]. What should I do?
 

Haotian Qian

New member
Hi, are we allowed to do padding outside the model?
Our model needs the input shape to be divisible by a factor. For model_none.tflite with an arbitrary input shape (required in the submission), we cannot know the input shape and therefore cannot determine the pad size.
Hi! Can I ask how you generate model_none.tflite from an input shape of [None,None,3]? We have run into a big problem: it seems that TFLite does not support an input with a shape of [None,None,3]. Thank you very much if you can share.
 

xiaokaoji

New member
Hi! Can I ask how you generate model_none.tflite from an input shape of [None,None,3]? We have run into a big problem: it seems that TFLite does not support an input with a shape of [None,None,3]. Thank you very much if you can share.
We build our model with an input shape of [None, None, 3] and set the shape to [None, None, 3] using a concrete function when converting; it converts successfully. We convert our model from a saved_model. Hope this helps!
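For reference, a minimal sketch of this SavedModel/concrete-function route (the directory name, the serving_default signature, and the assumption that the image tensor is inputs[0] are placeholders rather than the poster's actual code):

import tensorflow as tf

saved = tf.saved_model.load("saved_model_dir")
concrete_func = saved.signatures["serving_default"]

# Relax the spatial dimensions so the exported model accepts any H x W
concrete_func.inputs[0].set_shape([1, None, None, 3])

converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
tflite_model = converter.convert()

with open("model_none.tflite", "wb") as f:
    f.write(tflite_model)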
 

Andrey Ignatov

Administrator
Staff member
Our model needs the input shape to be divisible by a factor.

You mean your model requires the input dimensions to be a multiple of 2^n? OK, add a readme.txt file with these requirements to the Model/ folder.
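For reference, a minimal sketch of padding outside the model (the factor of 16 and the reflect-padding mode are assumptions; the actual divisibility requirement still belongs in readme.txt as noted above):

import numpy as np

FACTOR = 16  # placeholder: whatever 2^n the architecture requires

def pad_to_multiple(img, factor=FACTOR):
    # img: H x W x 3 array; pad bottom/right so H and W are divisible by factor
    h, w = img.shape[:2]
    pad_h = (factor - h % factor) % factor
    pad_w = (factor - w % factor) % factor
    padded = np.pad(img, ((0, pad_h), (0, pad_w), (0, 0)), mode="reflect")
    return padded, (h, w)

def crop_back(img, hw):
    # Crop the network output back to the original resolution
    h, w = hw
    return img[:h, :w]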

We have run into a big problem: it seems that TFLite does not support an input with a shape of [None,None,3].

You can refer to Keras and TensorFlow examples - the model should be successfully converted to TFLite if you set the input tensor dimensions to [None, None, 3].
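And a minimal sketch of the Keras route (the two convolutional layers are only a stand-in for a real bokeh network; any fully convolutional architecture can be built with a [None, None, 3] input):

import tensorflow as tf
from tensorflow import keras

inputs = keras.Input(shape=(None, None, 3), name="blur_image")
x = keras.layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
outputs = keras.layers.Conv2D(3, 3, padding="same")(x)
model = keras.Model(inputs, outputs)

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open("model_none.tflite", "wb") as f:
    f.write(tflite_model)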
 

Haotian Qian

New member
We build our model with an input shape of [None, None, 3] and set the shape to [None, None, 3] using a concrete function when converting; it converts successfully. We convert our model from a saved_model. Hope this helps!
Would you please share a few lines of the code showing how you convert using a concrete function? I have tried many times but failed. Thank you very much.
 

Haotian Qian

New member
Did you enable the experimental TFLite converter (converter.experimental_new_converter = True)?

Besides that, what TF version are you using? Try to install the latest TF nightly build.
I can convert it successfully now. But at the inference stage, I need to resize the input tensor; otherwise, the input shape of model_none.tflite is [1,1,1,3]. I don't know whether this satisfies your requirements. I've searched many websites (such as GitHub and Stack Overflow), and the only solution for dynamic input is resizing. Is that appropriate?
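Resizing the input tensor at inference time is indeed the standard way to run a dynamic-shape TFLite model; a minimal sketch (the model file name and the random test image are placeholders):

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model_none.tflite")
inp = interpreter.get_input_details()[0]

img = np.random.rand(1, 1022, 1584, 3).astype(np.float32)  # stand-in for a real test image

# Dynamic-shape models are stored with a degenerate default shape such as [1, 1, 1, 3];
# resize the input tensor to the actual image size before allocating.
interpreter.resize_tensor_input(inp["index"], list(img.shape))
interpreter.allocate_tensors()

interpreter.set_tensor(inp["index"], img)
interpreter.invoke()
out = interpreter.get_tensor(interpreter.get_output_details()[0]["index"])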
 

Haotian Qian

New member
Hello, I don't know why, but Codalab is unable to upload my results now. No matter what I upload, it always displays "Sorry, failed to upload file." May I send the final results to your email? Thank you very much.
 

Msss

New member
I have some questions about the final scores.

On the Codalab page, the score function is given as follows:

[attached screenshot of the score formula from the Codalab page]

However, the final scores don't seem to come from this function.

e.g.
2^(2 * 2.6) / 28.1 = 1.308 (not 74)
2^(2 * 3.5) / 89.3 = 1.433 (not 28)

[attached screenshot of the final leaderboard scores]
 

Andrey Ignatov

Administrator
Staff member
However, the final scores don't seem to come from this function.
The scores reported in this table were computed using PSNR instead of MOS. In any case, in this challenge we have two official winners - teams Antins_cs and ENERZAi - so this does not change anything.
 