I have solved my problem. If you need some sort of linear mapping, include it at the end of the model as a linear operator, implemented as a floating-point op and assuming a 0-1 range, then use the TFLite quantization. It takes care of everything: it calculates the quantization parameters, approximates the floating-point operation as a fixed-point operation, and outputs in the 0-255 range.
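To make the "fixed-point approximation" part concrete, here is a minimal NumPy sketch of the affine (scale/zero-point) quantization scheme that TFLite applies, assuming the appended linear op's outputs are known to lie in [0, 1]. The function names are illustrative, not TFLite API:

```python
import numpy as np

def quant_params(fmin, fmax, qmin=0, qmax=255):
    # Affine quantization: real = scale * (q - zero_point).
    # With a known float range, scale and zero-point are fixed constants.
    scale = (fmax - fmin) / (qmax - qmin)
    zero_point = int(round(qmin - fmin / scale))
    return scale, zero_point

def quantize(x, scale, zero_point, qmin=0, qmax=255):
    # Round to the nearest integer grid point and clamp to the uint8 range.
    q = np.round(x / scale + zero_point)
    return np.clip(q, qmin, qmax).astype(np.uint8)

# Float outputs in [0, 1] map onto the full 0-255 grid.
scale, zp = quant_params(0.0, 1.0)
q = quantize(np.array([0.0, 0.5, 1.0]), scale, zp)  # -> [0, 128, 255]
```

Because the final layer is linear with a fixed, known output range, its quantization error is just this uniform rounding step, which is why the converter can handle it cleanly.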

and that's what the challenge requires from a submission (IMHO)

As guidance for anyone willing to keep working in this area: the problem in this challenge can be approached from many different angles, including

Quantization Robust SR Model Design (Smart Design)

Non-disturbing Quantization Method Design (Smart Quantization)

Pretrained Heavy Model to Lite Model Design with Quantization in Mind (Smart Pruning/Sparsification)

Pretrained Heavy model guiding Lite Model Optimization (Smart Optimization)

...

Regarding quantization, my insight into the problem from the point of view of optimization, and why not every model is suitable for uint8 conversion in the SR problem: I am planning to write these up in a challenge paper if (BIG IF) I am invited to submit one, so I hope to see you in the workshop proceedings.