Description
Hello,
I am trying to learn about quantization, so I was experimenting with a GitHub repo and trying to quantize its model into int8 format. I used the following code to quantize the model:
```python
modelClass = DTLN_model()
modelClass.build_DTLN_model(norm_stft=False)
modelClass.model.load_weights(model_path)

converter = tf.lite.TFLiteConverter.from_keras_model(modelClass.model)
converter.experimental_new_converter = True
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]
converter._experimental_lower_tensor_list_ops = False
converter.target_spec.supported_types = [tf.int8]
converter.representative_dataset = lambda: generate_representative_data(num_samples)

tflite_model = converter.convert()
with open('saved_model.tflite', 'wb') as f:
    f.write(tflite_model)
```
For the representative data, I converted the audio data to NumPy arrays, saved them as .npy files, and then loaded those files in `generate_representative_data` to feed the converter.
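The generator itself is not shown above, so here is a minimal sketch of what a `generate_representative_data` function along these lines might look like. The `rep_data/` directory name and the per-file array shape are assumptions for illustration, not details from the original post:

```python
import glob
import numpy as np

def generate_representative_data(num_samples, data_dir="rep_data"):
    """Yield calibration samples for the TFLite converter.

    Loads up to `num_samples` pre-saved .npy files (hypothetical
    `rep_data/` directory) and yields each as a list containing one
    float32 input tensor, which is the format the converter's
    representative_dataset callback expects.
    """
    files = sorted(glob.glob(f"{data_dir}/*.npy"))[:num_samples]
    for path in files:
        sample = np.load(path).astype(np.float32)
        # Each yielded item is a list of input tensors, one per model
        # input, with an explicit batch dimension prepended.
        yield [np.expand_dims(sample, axis=0)]
```

The converter then consumes it via `converter.representative_dataset = lambda: generate_representative_data(num_samples)`, as in the conversion code above.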
But when I run this code, the conversion fails with:

```
error: 'tf.TensorListSetItem' op is neither a custom op nor a flex op
```
I have tried to follow the docs and some GitHub issues such as tensorflow/tensorflow#34350 (comment), and I also went through a similar question, "Issue with tf.ParseExampleV2 when converting to Tensorflow Lite: 'op is neither a custom op nor a flex op'", but none of those helped in my case.
Can anyone help me figure out what I am doing wrong? Thanks in advance.
I am adding the full error in my first comment.