All questions related to the Real-Time Image Super-Resolution Challenge can be asked in this thread.
NN API returned error ANEURALNETWORKS_BAD_DATA
clip_by_value
Hi Lishen,
When I was testing I ran into the following problem:
NN API returned error ANEURALNETWORKS_BAD_DATA at line 1968 while setting new operand value for tensor 'model/lambda_2/clip_by_value/Minimum/y;StatefulPartitionedCall/model/lambda_2/clip_by_value/Minimum/y'.
TensorFlow version: 2.4.1
Some questions:
1. How can this happen?
2. Does the clip operation need to be inside the model? I see it is not within the demo https://github.com/aiff22/MAI-2021-Workshop/blob/main/fsrcnn_quantization/fsrcnn.py
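Since NNAPI rejects the clip_by_value op (see the answer later in this thread), one workaround is to rebuild the model without its final clip layer before conversion. A minimal sketch, assuming the clip sits in a trailing Lambda layer; the tiny model here is a toy stand-in, not the actual challenge network:

```python
import tensorflow as tf

# Toy stand-in model whose last layer is a clip Lambda (assumption:
# the real model ends the same way).
inp = tf.keras.Input(shape=(64, 64, 3))
x = tf.keras.layers.Conv2D(3, 3, padding='same')(inp)
out = tf.keras.layers.Lambda(lambda t: tf.clip_by_value(t, 0.0, 255.0))(x)
model = tf.keras.Model(inp, out)

# Re-wire the model to the pre-clip tensor, dropping the Lambda,
# then convert the clip-free model to TFLite.
model_no_clip = tf.keras.Model(model.input, model.layers[-2].output)
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model_no_clip).convert()
```

The removed clip can then be applied on the host after inference (e.g. with np.clip).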
Hi, mrblue3325

Hi Lishen,
May I know whether you used the demo https://github.com/aiff22/MAI-2021-Workshop/blob/main/fsrcnn_quantization/fsrcnn.py for development?
I notice that in the quantization part of that demo the input size is fixed (TFLITE_MODEL_INPUT_SHAPE = [1, 360, 640, 1]).
Do you know how to change it to TFLITE_MODEL_INPUT_SHAPE = [1, None, None, 1], so as to fulfill the submission requirement?
I have heard that TFLite does not handle dynamic input shapes very well.
Thanks.
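One possible way to handle dynamic spatial dimensions is to convert with None height/width and then resize the input tensor at inference time. A sketch, assuming a small fully-convolutional stand-in model rather than the demo's FSRCNN:

```python
import numpy as np
import tensorflow as tf

# Stand-in fully-convolutional model with dynamic spatial dims,
# i.e. input shape [1, None, None, 1].
inp = tf.keras.Input(shape=(None, None, 1))
out = tf.keras.layers.Conv2D(1, 3, padding='same')(inp)
model = tf.keras.Model(inp, out)

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.experimental_new_converter = True  # MLIR converter handles dynamic shapes
tflite_model = converter.convert()

# At inference time, resize the input tensor to the actual image size.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
in_idx = interpreter.get_input_details()[0]['index']
interpreter.resize_tensor_input(in_idx, [1, 360, 640, 1])
interpreter.allocate_tensors()
interpreter.set_tensor(in_idx, np.zeros([1, 360, 640, 1], np.float32))
interpreter.invoke()
out_idx = interpreter.get_output_details()[0]['index']
result = interpreter.get_tensor(out_idx)
print(result.shape)  # (1, 360, 640, 1)
```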
Well, the website (https://codalab.lisn.upsaclay.fr/competitions/1755#participate) shows that the input channel size should be 3. And try setting experimental_new_converter to True.
Understood, thanks!
Hello,
I have a question: what is the submission format in this (validation) phase?
I had thought I should turn in (with readme.txt) 100 .png images, which are the inference results for the validation set.
But that is more than 200MB, which is bigger than the 30MB restriction, so I couldn't submit it.
Thank you.
Seongmin

Do you mean the 100 inference results are SRed images?
Right.
I have the same problem.
> NNAPI does not support the clip_by_value op, thus you should remove it prior to model conversion.

Dear Andrey Ignatov, may I know if there are any updates on the benchmarking functionality?

Hello,
Any updates about the possibility of measuring latency using the target platform?
Maciej
Hello,
If the clip op cannot be inside the model, I have two questions:
1. When we submit the model, how will the official code verify the PSNR?
2. Do we keep the same model when verifying PSNR and runtime? What if an operation is not supported by the SoC, but the model can still pass the PSNR verification?
Hello,
May I know how I can upload the SR images? The whole zip is almost 300MB.
Also, should the images be named 801x3.png or 801.png?
Thanks.

Neither; a name like "0801.png" is OK. And remember to add "readme.txt" and model.tflite to your zip file.
Thanks.
Quantization | CPU (ms) | GPU (ms) | NNAPI (ms)
Float32      | 381      | 116      | 110
Int8         | 158      | 56       | 4794 (too slow)
import tensorflow as tf

input_shape = [1, 360, 640, 3]
model_path = '.ckpt/model'

# Load the SavedModel and fix the input shape of its serving signature
model = tf.saved_model.load(model_path)
concrete_func = model.signatures[tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY]
concrete_func.inputs[0].set_shape(input_shape)

# Full-integer (INT8) quantization with uint8 input/output
converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
converter.experimental_new_converter = True
converter.experimental_new_quantizer = True
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset_gen  # calibration data generator
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

tflite_model = converter.convert()
with open("tflite/{}.tflite".format(name), "wb") as f:  # 'name' is defined elsewhere
    f.write(tflite_model)
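For reference, the representative_dataset_gen referenced above must be a generator yielding calibration inputs. A minimal sketch, with random data as a placeholder; in practice it should yield real low-resolution training images:

```python
import numpy as np

def representative_dataset_gen():
    # Yield a list with one input tensor per call, matching [1, 360, 640, 3].
    # Random data is only a placeholder for real calibration images.
    for _ in range(10):
        yield [np.random.rand(1, 360, 640, 3).astype(np.float32)]
```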
> The good news is the timing report is published, but does anyone know how to find out our submission id?

Hello, I want to know where to get the timing report. Thanks for your reply!
You need to submit your results, and the report is here: https://github.com/mdenna-synaptics/codalab2022/blob/main/report.json
Thanks so much!

However, I have a few doubts: it seems that the latency of the same layers varies a lot between submissions. Moreover, for a majority of submissions the Transpose layer latency is larger than the full-model latency...
Hello, for the submission I have a problem.
I notice that the online system has a 30MB upload limitation.
How can you submit the full-size SR images, which take almost 300MB?

I have the same question.
Can we add other datasets to train the model, such as Flickr2K?
The good news is the timing report is published, but does anyone know how to find out our submission id?
The max file size limit was increased to 500MB.
Yes, you can use any other datasets for training your models.
The published runtime results now include your usernames and submission dates.

Thanks for the reply.
Did you upload the 100 SR PNGs and run the evaluation on the platform successfully?
Hi @Andrey Ignatov, I find that 'model_runtime.tflite' hasn't been mentioned above. Does 'model_runtime.tflite' here mean 'model.tflite' with the input size [1, 360, 640, 3]?
(View attachment 75)

Yes, I do think so, since 360x640 is for testing speed.
Yes, I just put the 100 SR images, the tflite model, and readme.txt in the root path and zipped them; the file size was OK.
Congrats, I saw you achieved a nice score.
May I know how you submitted the zip file?
Did you zip the 100 SR images named 0801.png etc., so the file is over 300MB?
I always get an error during the submission and am frustrated about that:
File "/tmp/codalab/tmpNCzOv2/run/program/evaluation.py", line 95, in
raise Exception('Expected %d .png images'%len(ref_pngs))
Exception: Expected 100 .png images
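To avoid the "Expected 100 .png images" failure, it may help to sanity-check the archive locally before uploading. A sketch using only the standard library; the file names and layout are assumptions based on this thread, with placeholder bytes instead of real files:

```python
import io
import zipfile

# Build the submission zip entirely in memory, with everything at the root
# (no subfolder), then verify its contents before uploading.
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w', zipfile.ZIP_DEFLATED) as z:
    z.writestr('readme.txt', 'team / model description')
    z.writestr('model.tflite', b'\x00')            # placeholder bytes
    for i in range(801, 901):                      # 0801.png ... 0900.png
        z.writestr('%04d.png' % i, b'\x89PNG')     # placeholder bytes

with zipfile.ZipFile(buf) as z:
    names = z.namelist()
    pngs = [n for n in names if n.endswith('.png') and '/' not in n]
    print(len(pngs))  # 100
```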
Hello,
I have a question: my timing results show an ActivationLayer after ReorgLayer2. However, the TFLite model I submitted should not have an ActivationLayer. Which operation does this ActivationLayer come from?
Thank you.
(View attachment 78)

Do you use any operation after the reorg layers?
I have the same questions.
Hello, do you have this problem when you remove the clip layer? https://github.com/mdenna-synaptics/codalab2022/issues/8 and https://github.com/mdenna-synaptics/codalab2022/issues/9
Does that refer to the clip layer (which acts as Minimum and ReLU layers after quantization)?
That's why I asked whether the final official inference code contains an np.clip on the output or not.
If yes, we can remove the clip layer and save time.
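If the evaluation code did clip on the host, the operation itself would just be np.clip on the dequantized output. A minimal sketch; 'raw_output' here is a made-up example array:

```python
import numpy as np

# Clip out-of-range values to the valid uint8 pixel range on the host,
# instead of keeping a clip layer inside the TFLite model.
raw_output = np.array([[-12.5, 80.0, 300.2]], dtype=np.float32)
clipped = np.clip(raw_output, 0, 255).astype(np.uint8)
print(clipped.tolist())  # [[0, 80, 255]]
```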
> which is not included in the final code

Do you mean that the organizer will not help us perform the clip operation? I don't understand this.
Last year's best entry (ABPN) has a clip operator in its code: https://github.com/NJU-Jet/SR_Mobil...419f1cb026ec041b/solvers/networks/base7.py#L9
(Sorry, I'm not proficient in English. I hope you can understand and we can discuss it together.)

What I mean is that you need to include this clip operation in the model (similarly to the code you shared).

Thank you for your reply. I agree with you.
> Does 'model_runtime.tflite' here mean 'model.tflite' with the input size [1, 360, 640, 3]?

Yes.
> In the competition, how would it be counted? Would it be the sum of the per-layer latencies, or do we need to care about total latency?

Only the total latency is taken into account.
> If not, we need the clip layer in our model, and this costs us 8ms more in total.

Yes.
> Is it still possible to measure the speed during the test phase?

Yes.