Hi,
Can you provide Python code for inference with model_none.tflite? I want to check whether the converted model is correct.
# load the TFLite model
interpreter = tf.lite.Interpreter(model_path=model_path)
# read the image; raw_img is an int8 numpy array
raw_img = np.array(Image.open(imgName).convert('RGB'))...
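The snippet above is cut off; a complete inference pass for a full-integer model might look like the sketch below. This is a hedged sketch, not the official answer: it assumes a single-input, single-output model and that `raw_img` is an HWC RGB array, and the function name is mine.

```python
import numpy as np
import tensorflow as tf

def run_int8_inference(model_path, raw_img):
    """Sketch of int8 TFLite inference; layout assumptions noted above."""
    interpreter = tf.lite.Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    x = np.expand_dims(raw_img, axis=0)  # add the batch dimension

    # quantize the input using the model's own scale and zero point
    if input_details[0]['dtype'] == np.int8:
        scale, zero_point = input_details[0]['quantization']
        x = np.round(x / scale + zero_point).astype(np.int8)
    else:
        x = x.astype(input_details[0]['dtype'])

    interpreter.set_tensor(input_details[0]['index'], x)
    interpreter.invoke()
    y = interpreter.get_tensor(output_details[0]['index'])

    # dequantize the output back to float
    if output_details[0]['dtype'] == np.int8:
        scale, zero_point = output_details[0]['quantization']
        y = (y.astype(np.float32) - zero_point) * scale
    return y
```

The returned array can then be compared against the TF model's output to judge whether the conversion is right.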
OK~ I understand what you mean for the inference part, but I am confused: both the zero point and scale are generated when converting the TF model to a quantized TFLite model (is that right?), and how can these ops be added to the TFLite model?
Thank you~
Thank you for your reply~
Can the quantization part be integrated into the int8 model? The scale and zero point are floats and are only produced after converting the TF model to a TFLite model.
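On the question above: in a full-integer TFLite model the scale and zero point are stored as metadata on the input and output tensors rather than as extra ops inside the graph, so the caller applies the affine mapping itself. A minimal numpy sketch (the scale and zero-point values here are made up for illustration; the real ones come from `input_details[0]['quantization']` and `output_details[0]['quantization']`):

```python
import numpy as np

def quantize(x, scale, zero_point):
    # float -> int8 via the affine mapping q = round(x / scale) + zero_point
    q = np.round(x / scale) + zero_point
    return np.clip(q, -128, 127).astype(np.int8)

def dequantize(q, scale, zero_point):
    # int8 -> float: the inverse mapping
    return (q.astype(np.float32) - zero_point) * scale

# illustrative parameters only
scale, zero_point = 1.0 / 255.0, -128
x = np.array([0.0, 0.25, 1.0], dtype=np.float32)
q = quantize(x, scale, zero_point)        # [-128, -64, 127]
x_hat = dequantize(q, scale, zero_point)  # close to x, within one step
```

Because the round trip loses at most about one quantization step per value, the dequantized output stays within `scale` of the original float.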
Hi,
The email said:
I run inference with my int8 TFLite model using the following code
# load the int8 model and get the quantization info (zero point & scale for input and output)
interpreter = tf.lite.Interpreter(model_path)
input_details = interpreter.get_input_details()
output_details =...
Hi,
I want to know
(1) whether the model submitted this week will be tested for running time, with the results shown here:
https://docs.google.com/spreadsheets/d/e/2PACX-1vSrVUUgwcjGzEzDr5p7It7DRpCP45f2jY9nUiOB78uQzziq8Ejy6w2_7qhP7Fmy5K5X4jfa9iZC5xpb/pubhtml
(2) when will the test set be available for...
Hi, thank you for your advice. I can use quantization-aware training now and have trained a small model with it.
(1) Inference with the TF model: PSNR is 29.33892.
(2) Inference with the int8 TFLite model: PSNR is 29.24394.
I want to know whether this is reasonable. Thank you again~
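A drop of roughly 0.1 dB between the float and int8 models is generally considered an acceptable quantization loss. For double-checking such numbers, PSNR can be computed with plain numpy as below (this assumes 8-bit images with a peak value of 255; adjust `peak` for other data ranges):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    # peak signal-to-noise ratio in dB between two same-shape images
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((4, 4), 100.0)
test = ref + 1.0                 # uniform error of 1 -> MSE = 1
value = psnr(ref, test)          # 10 * log10(255**2) ~ 48.13 dB
```

Comparing the TF and TFLite outputs on the same validation images with the same PSNR routine is what makes the two numbers directly comparable.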
Thank you for your sharing~
I tried to use a Lambda layer and got another error (the same as with tf.concat).
Did you manage to train successfully by passing a `tfmot.quantization.keras.QuantizeConfig` instance to `quantize_annotate_layer`?
Hi,
I tried quantization-aware training and it failed with the error "TypeError: tf__call() got an unexpected keyword argument 'block_size'".
When I print the layers of my model, I find that the depth_to_space layer has a parameter 'block_size'.
It seems that the "tf.nn.depth_to_space" layer cannot be...
Hello
I got "Inference fails" in https://docs.google.com/spreadsheets/d/e/2PACX-1vSrVUUgwcjGzEzDr5p7It7DRpCP45f2jY9nUiOB78uQzziq8Ejy6w2_7qhP7Fmy5K5X4jfa9iZC5xpb/pubhtml, but it runs successfully in AI Benchmark with the Int8 and CPU options. The shape of the input tensor is [1,360,640,3], and the shape of the output...
Hello, I am confused about the input dimensions for AI Benchmark, since the input shape is dynamic.
I successfully used TFLite with model.resize_tensor_input on the Linux platform, but it crashed in AI Benchmark.
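For reference, the desktop-side flow for a dynamic input would look roughly like the sketch below. The calls follow the standard `tf.lite.Interpreter` API; the function name is mine, and whether AI Benchmark performs an equivalent on-device resize is exactly the open question here.

```python
import numpy as np
import tensorflow as tf

def run_with_resized_input(model_path, x):
    """Sketch: fix a dynamic input shape before allocating buffers."""
    interpreter = tf.lite.Interpreter(model_path=model_path)
    input_details = interpreter.get_input_details()
    # pin the dynamic dimensions to the actual input shape, then allocate
    interpreter.resize_tensor_input(input_details[0]['index'], x.shape)
    interpreter.allocate_tensors()
    interpreter.set_tensor(input_details[0]['index'], x)
    interpreter.invoke()
    output_details = interpreter.get_output_details()
    return interpreter.get_tensor(output_details[0]['index'])
```

If the on-device runtime does not resize the tensor first, a model exported with a dynamic (e.g. [1, None, None, 3]) input can fail even though the same file works on Linux, which would be consistent with the crash described above.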