Recent content by xindongzhang

  1. Real-Time Image Super-Resolution Challenge

    Thanks for sharing. I solved this problem too in the last 10 hours with a very simple revision of the network, just one line of code. I really appreciate it in this four-day journey of the competition.
  2. Real-Time Image Super-Resolution Challenge

    Thanks for your reply; the hints you provided may be very insightful and make for good research problems.
  3. Real-Time Image Super-Resolution Challenge

    Have you tried de-quantizing the uint8 output node first?
  4. Real-Time Image Super-Resolution Challenge

    If the mean and std of the output node are not close to (0.0, 1.0), simply restoring results from the uint8 output will make the SR image look darker or brighter than it should.
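    As a minimal numeric sketch of that brightness shift (the scale and zero point below are hypothetical; the real ones come from the output node of the converted model):

        import numpy as np

        # Hypothetical quantization parameters of the uint8 output node.
        output_scale, output_zero_point = 0.0042, 3

        q_out = np.array([64, 128, 200], dtype=np.uint8)  # raw uint8 SR output values

        # Treating the raw uint8 values directly as pixel intensities in [0, 1]:
        restored_raw = q_out.astype(np.float32) / 255.0
        # De-quantizing first, which recovers the true fp32 range of the output:
        restored_dq = (q_out.astype(np.float32) - output_zero_point) * output_scale

        print(restored_raw)  # ~[0.251, 0.502, 0.784]
        print(restored_dq)   # ~[0.256, 0.525, 0.827] -- visibly brighter pixels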
  5. Real-Time Image Super-Resolution Challenge

    Thanks for your reply. I have checked this before; that sample code is about classification. There we don't need to decode the result, since a linear transformation does not affect the argmax of the output or the predicted class. However, the SR task is quite different from the classification problem, if...
  6. Real-Time Image Super-Resolution Challenge

    Thanks for your reply. I have checked the "output scale and zero point" of the output node based on the scripts you provided. Both the input and output were of uint8 type, and the result changes a lot if you do not de-quantize the output node. To put it simply, a fully-quantized network can make sure...
  7. Real-Time Image Super-Resolution Challenge

    Could you please check https://www.tensorflow.org/lite/performance/post_training_integer_quant. For uint8 inference, the input still needs to be quantized before being fed into the uint8 network (if the mean and std of the input happen to be (0.0, 1.0), you could feed it directly to the...
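    A minimal sketch of that input-quantization step with the standard tf.lite.Interpreter API (the model path and input shape below are hypothetical placeholders):

        import numpy as np
        import tensorflow as tf

        # Hypothetical path to the fully-quantized model produced by the
        # post-training integer quantization workflow linked above.
        interpreter = tf.lite.Interpreter(model_path="model_uint8.tflite")
        interpreter.allocate_tensors()

        input_details = interpreter.get_input_details()[0]
        input_scale, input_zero_point = input_details["quantization"]

        # fp32 low-resolution input in [0, 1]; shape chosen only for illustration.
        lr = np.random.rand(1, 180, 320, 3).astype(np.float32)

        # Quantize the fp32 input into the uint8 space the network expects.
        lr_q = np.clip(np.round(lr / input_scale + input_zero_point), 0, 255).astype(np.uint8)

        interpreter.set_tensor(input_details["index"], lr_q)
        interpreter.invoke()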
  8. Real-Time Image Super-Resolution Challenge

    I still have no idea and am just waiting for the official reply. In my experiments with the official scripts, a large quantization error relative to the fp32 model was introduced if we did not de-quantize the output node.
  9. Real-Time Image Super-Resolution Challenge

    From my experiments, one main reason is that the output of the uint8 model should be further de-quantized by the "zero point & scale" of the output node. If you do that, the loss of PSNR should be acceptable.
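    A small sketch of that de-quantization step (the helper name is my own; the parameters are read from the output node of the converted model):

        import numpy as np

        def dequantize_output(q_out: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
            """Map the raw uint8 network output back to the fp32 range of the SR image."""
            return (q_out.astype(np.float32) - zero_point) * scale

        # With a TFLite interpreter the parameters come from the output details:
        #   out = interpreter.get_output_details()[0]
        #   scale, zero_point = out["quantization"]
        #   sr = dequantize_output(interpreter.get_tensor(out["index"]), scale, zero_point)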
  10. Real-Time Image Super-Resolution Challenge

    Furthermore, I have tested the quantization script you provided. If we do not de-quantize the output of the uint8 model, a large numerical error between the outputs of the fp32 model and the uint8 model is introduced. However, when scaling the output of the uint8 model by zero point & scale, the results of the uint8 model are...
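    A hedged sketch of that comparison (both model paths and the input shape are hypothetical; the point is only to contrast the error with and without de-quantization):

        import numpy as np
        import tensorflow as tf

        def run_tflite(model_path, x):
            """Run a single-input, single-output TFLite model and return its raw output."""
            interpreter = tf.lite.Interpreter(model_path=model_path)
            interpreter.allocate_tensors()
            inp = interpreter.get_input_details()[0]
            out = interpreter.get_output_details()[0]
            if inp["dtype"] == np.uint8:  # quantize the fp32 input for the uint8 model
                scale, zero_point = inp["quantization"]
                x = np.clip(np.round(x / scale + zero_point), 0, 255).astype(np.uint8)
            interpreter.set_tensor(inp["index"], x)
            interpreter.invoke()
            return interpreter.get_tensor(out["index"]), out["quantization"]

        lr = np.random.rand(1, 180, 320, 3).astype(np.float32)

        sr_fp32, _ = run_tflite("model_fp32.tflite", lr)
        sr_raw, (out_scale, out_zero_point) = run_tflite("model_uint8.tflite", lr)

        sr_no_dq = sr_raw.astype(np.float32) / 255.0                      # skip de-quantization
        sr_dq = (sr_raw.astype(np.float32) - out_zero_point) * out_scale  # zero point & scale

        print("mean abs error without de-quantization:", np.abs(sr_fp32 - sr_no_dq).mean())
        print("mean abs error with de-quantization:   ", np.abs(sr_fp32 - sr_dq).mean())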
  11. Real-Time Image Super-Resolution Challenge

    Thanks, I understand what you mean. But uint8 is only the range of the quantized op; it is not the actual space of the fp32 output. Only after we de-quantize the output of the network do we get the correct numerical range and result.
  12. Real-Time Image Super-Resolution Challenge

    I am also confused by it; have you solved this problem? Thanks.
  13. Real-Time Image Super-Resolution Challenge

    Thanks for your testing code. However, for uint8 inference, I wonder if the output should be rescaled by "output_scale & output_zero_point" to match the correct range? Thanks, and I hope for your reply.