TF Lite quantization error

kia350

New member
import tensorflow as tf

def convert_to_int8_tflite(saved_model_path, input_shape, representative_dataset_gen):
    # Load the trained SavedModel
    model = tf.saved_model.load(saved_model_path)

    # Set a fixed input shape on the serving signature
    concrete_func = model.signatures[tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY]
    concrete_func.inputs[0].set_shape(input_shape)

    # Get a tf.lite converter instance (passing the model keeps it alive during conversion)
    converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func], model)

    # Use full-integer operations in the quantized model
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]

    # Set input and output dtypes to UINT8 (uncomment the following two lines
    # to generate an integer-only model)
    # converter.inference_input_type = tf.uint8
    # converter.inference_output_type = tf.uint8

    # Provide a representative dataset for quantization calibration
    converter.representative_dataset = representative_dataset_gen

    # Convert to an 8-bit TensorFlow Lite model
    return converter.convert()
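
For reference, a minimal sketch of what the representative_dataset_gen used above could look like. The 1x224x224x3 shape is a placeholder assumption that must match the input_shape fixed above, and the random data only illustrates the interface; real calibration should iterate over actual training samples:

import numpy as np

def representative_dataset_gen():
    # Yield a few hundred calibration samples; the converter runs the model
    # on each one to estimate activation ranges for quantization.
    for _ in range(100):
        # Hypothetical input shape; replace with the model's real input shape.
        data = np.random.rand(1, 224, 224, 3).astype(np.float32)
        yield [data]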

Hi all. I ran this code but got the following error:
'tf.Transpose' op is neither a custom op nor a flex op

TF version: 2.9
Has anyone else run into this error?
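
(A hedged note, not confirmed for this model: with supported_ops restricted to TFLITE_BUILTINS_INT8, any op without an int8 TFLite kernel fails conversion with exactly this message. One common workaround is to let the converter fall back to TensorFlow "flex" ops for the offending op, at the cost of needing the Flex delegate linked into the runtime:

converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS_INT8,  # quantized built-in TFLite ops
    tf.lite.OpsSet.SELECT_TF_OPS,         # fall back to TF flex ops, e.g. tf.Transpose
]

This keeps the rest of the model quantized while routing the unsupported op through the TensorFlow kernel.)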
 