Search results

  1. TF Lite runtime

    According to my current knowledge, a float32 or float16 TF Lite model should run faster than a uint8 model on a GPU, so there is no need to perform int8 quantization for an image denoising task running on an Exynos Mali GPU. Is this correct? How about the image super-resolution task...
  2. TF Lite quantization

    I used this example to quantize my model. However, after I run print(input_details[0]['dtype']), the input type is still float32. So how do I get a uint8 input type? Thanks.
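
On the first result: the float16 option the asker alludes to is exposed directly on the TFLiteConverter. A minimal sketch of post-training float16 quantization, assuming a SavedModel at a placeholder path:

    import tensorflow as tf

    # Placeholder: point this at your own SavedModel directory.
    converter = tf.lite.TFLiteConverter.from_saved_model("path/to/saved_model")
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    # Store weights in float16; compute stays floating point, which GPU
    # delegates (such as the one used on Mali GPUs) handle natively.
    converter.target_spec.supported_types = [tf.float16]
    tflite_model = converter.convert()

    with open("model_fp16.tflite", "wb") as f:
        f.write(tflite_model)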
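
On the second result: post-training dynamic-range quantization by itself quantizes only the weights, so the model's interface stays float32. Getting a uint8 input requires full-integer quantization, which needs a representative dataset plus the inference_input_type setting. A sketch under those assumptions (the model path, the 1x224x224x3 input shape, and the random calibration data are all placeholders):

    import numpy as np
    import tensorflow as tf

    # Placeholder: point this at your own SavedModel directory.
    converter = tf.lite.TFLiteConverter.from_saved_model("path/to/saved_model")

    # Assumption: random tensors stand in for a real calibration set; in
    # practice, yield ~100 representative samples with the true input shape.
    def representative_dataset():
        for _ in range(100):
            yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_dataset
    # Require integer-only kernels...
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    # ...and flip the interface tensors from float32 to uint8.
    converter.inference_input_type = tf.uint8
    converter.inference_output_type = tf.uint8
    tflite_model = converter.convert()

    # Verify the input dtype actually changed.
    interpreter = tf.lite.Interpreter(model_content=tflite_model)
    print(interpreter.get_input_details()[0]["dtype"])  # expect numpy.uint8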