Any technical questions, including questions related to model conversion to the TFLite format, can be asked in this thread.
> Also, would it be acceptable to use a static-input TFLite model for quality evaluation instead? Thanks in advance!

We will accept this for models converted from PyTorch. However, be prepared that we might ask you to provide the same model with additional input sizes if automatic tensor resizing won't work for your model.
> running in FP16+GPU Delegate mode

Does this error also occur during CPU-based inference?

> GPU Delegate does not support BATCH MATMUL

One possible issue might be the dimensionality of your tensor: GPU delegate might be expecting a 4D tensor, but it received a 3D one.

> I'm not sure what to do about it.

If you are still unable to solve this problem, you can send us your model by email.
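One way to act on the 3D-vs-4D hint, assuming the failing op is a torch.matmul or torch.bmm over 3D tensors (the function below is an illustrative sketch, not part of the organizers' instructions):

Code:
import torch

def matmul_as_4d(a, b):
    # Lift both 3D operands, (batch, M, K) and (batch, K, N), to 4D before
    # the matmul, so the converter emits a 4D batched matmul, which the
    # GPU delegate is more likely to accept than the 3D variant.
    return torch.matmul(a.unsqueeze(1), b.unsqueeze(1)).squeeze(1)

a = torch.rand(8, 64, 32)
b = torch.rand(8, 32, 16)
assert torch.allclose(matmul_as_4d(a, b), torch.bmm(a, b), atol=1e-6)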
> Does this error also occur during CPU-based inference?
> One possible issue might be the dimensionality of your tensor: GPU delegate might be expecting a 4D tensor, but it received a 3D one.
> If you are still unable to solve this problem, you can send us your model by email.

Thank you for your reply. The input size of the TFLite model I got from my conversion is fixed at [1, 3, 1024, 1024] for the sRGB Enhancement Challenge. I tried inference using CPU+FP16 and it worked, although the inference time is slower. However, I noticed that the Learn the Details / Evaluation page of the sRGB Enhancement Challenge has a relevant note about the choice of mode: the "Testing Your Model on the Target Adreno / Mali Mobile GPUs" section requires choosing FP16 + TFLite GPU Delegate when using AI Benchmark. So this seems to indicate that this mode doesn't support enough arithmetic operations. If the target platform must use this mode, how do I address this technical issue to get higher inference speed? Or, if the target platform allows FP16+CPU mode, will all the teams' models be compared uniformly in this mode?
> Or, if the target platform allows FP16+CPU mode, will all the teams' models be compared uniformly in this mode?

By default, all solutions will be evaluated using FP16 + TFLite GPU Delegate mode. However, the CPU backend will be used for solutions not supporting this option.

> So this seems to indicate that this mode doesn't support enough arithmetic operations.

TFLite GPU delegate is the easiest mode in terms of model adaptation, as it supports nearly all common ops. However, yes, there might be some ops or layers requiring small adaptations. You've already been provided with a hint as to why BATCH MATMUL might be failing in your case.

And, in principle, the general adaptation scenario looks as follows:
1. Identifying the op that is failing.
2. Removing this op from the model, converting it again, and checking that the issue is gone.
3. Trying one of the following options:
- Changing layer parameters; some non-default options are sometimes not implemented by the delegates
- Using or making an alternative PyTorch / TF layer implementation comprised of supported ops (see the sketch below)
- Replacing the layer with some other similar one
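As an illustration of the second option, a layer can be rebuilt from elementary ops that delegates generally support. A hedged sketch (this ManualLayerNorm is an example of the technique, not a layer from this thread):

Code:
import torch
import torch.nn as nn

class ManualLayerNorm(nn.Module):
    # LayerNorm rebuilt from mean / sub / mul / rsqrt, for cases where a
    # fused normalization op is rejected by the delegate.
    def __init__(self, dim, eps=1e-5):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(dim))
        self.bias = nn.Parameter(torch.zeros(dim))
        self.eps = eps

    def forward(self, x):
        mu = x.mean(dim=-1, keepdim=True)
        var = (x - mu).pow(2).mean(dim=-1, keepdim=True)
        return (x - mu) * torch.rsqrt(var + self.eps) * self.weight + self.bias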
Thank you very much for your reply, I will consider your approach!
Dear organizers, the format did not automatically switch from NCHW to NHWC. While checking the model, we suspect it might be related to the input channels.

> the format did not automatically switch from NCHW to NHWC

Are you using the ai_edge_torch plugin for model conversion, like in this tutorial?
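For reference, a minimal ai_edge_torch conversion sketch (the torchvision model and input size are placeholders, not the poster's actual setup); the converted model keeps PyTorch's NCHW input signature, and the internal layout handling is left to the converter:

Code:
import ai_edge_torch
import torch
import torchvision

model = torchvision.models.resnet18().eval()
sample_input = (torch.rand(1, 3, 224, 224),)

# Convert and serialize to a .tflite file; inputs stay NCHW at the
# Python interface, matching the original PyTorch model.
edge_model = ai_edge_torch.convert(model, sample_input)
edge_model.export("model.tflite")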
> While checking the model, we suspect it might be related to the input channels.

It's not an issue with the input channels; it's a problem with a bilinear resize layer. The issue is self-explanatory: the half_pixel_centers and align_corners options cannot be used at the same time. Try changing the parameters of the resize layer in your model.

Thank you very much for your guidance!
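On the PyTorch side, the conflicting flags usually trace back to how the resize layer is parameterized. A hedged sketch, assuming the resize comes from F.interpolate (tensor shapes here are illustrative):

Code:
import torch
import torch.nn.functional as F

x = torch.rand(1, 3, 128, 128)

# If the exported ResizeBilinear op ends up with both align_corners and
# half_pixel_centers set, the runtime rejects it. Exporting with
# align_corners=False (PyTorch's default) typically yields a resize with
# half_pixel_centers only, avoiding the conflict.
y = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)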
Dear organizers,

> Yes, there is indeed an issue with dynamic input size for PyTorch to TFLite conversion. We will accept this for models converted from PyTorch. However, be prepared that we might ask you to provide the same model with additional input sizes if automatic tensor resizing won't work for your model.

Could you please confirm whether my understanding is correct? Because even after adding that line, my exported model still has a fixed input size.

> Because even after adding that line, my exported model still has a fixed input size.

The correct one is:
Code:
import torch
import ai_edge_torch

dim_2 = 8 * torch.export.Dim("height", min=8, max=256)
dim_3 = 8 * torch.export.Dim("width", min=8, max=256)
edge_model = ai_edge_torch.convert(model.eval(), sample_input, dynamic_shapes=({2: dim_2, 3: dim_3},))
However, this option is broken in the latest PyTorch releases. Therefore, we've allowed the submission of TFLite models with a static size when they are converted from PyTorch.

Thank you very much for your guidance!
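Given the static-size workaround, a quick way to check what an exported file actually supports; a hedged sketch using the standard TFLite Python interpreter (the file name and sizes are placeholders):

Code:
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
# Ask the runtime to resize the first input; for a genuinely static graph
# this typically fails during allocate_tensors(), while a model with
# resizable dimensions re-propagates shapes and runs.
interpreter.resize_tensor_input(0, [1, 3, 256, 256])
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.random.rand(1, 3, 256, 256).astype(np.float32))
interpreter.invoke()
print(interpreter.get_tensor(out["index"]).shape)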