Hello everyone, I need help quantising a model through Post-Training Quantisation (PTQ). My model is essentially the one from the Sionna tutorial on [End-to-end Learning with Autoencoders](https://nvlabs.github.io/sionna/phy/tutorials/Autoencoder.html).
These are my code blocks in Google Colab.
**Neural Demapper**
```python
class NeuralDemapper(Layer):

    def __init__(self):
        super().__init__()
        self._dense_1 = Dense(128, 'relu')
        self._dense_2 = Dense(128, 'relu')
        self._dense_3 = Dense(num_bits_per_symbol, None) # The features correspond to the LLRs for every bit carried by a symbol

    def call(self, y, no):

        # Using log10 scale helps with the performance
        no_db = log10(no)

        # Stacking the real and imaginary components of the complex received samples
        # and the noise variance
        no_db = tf.tile(no_db, [1, num_symbols_per_codeword]) # [batch size, num_symbols_per_codeword]
        z = tf.stack([tf.math.real(y),
                      tf.math.imag(y),
                      no_db], axis=2) # [batch size, num_symbols_per_codeword, 3]
        llr = self._dense_1(z)
        llr = self._dense_2(llr)
        llr = self._dense_3(llr) # [batch size, num_symbols_per_codeword, num_bits_per_symbol]

        return llr
```
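As a quick sanity check, the demapper maps a batch of received symbols and a noise variance to per-bit LLRs. This is my own hypothetical smoke test, assuming the tutorial's imports and globals (`num_symbols_per_codeword`, `num_bits_per_symbol`) are in scope:

```python
import tensorflow as tf

demapper = NeuralDemapper()
# Fake received symbols and noise variance, just to check shapes
y = tf.complex(tf.random.normal([8, num_symbols_per_codeword]),
               tf.random.normal([8, num_symbols_per_codeword]))
no = tf.fill([8, 1], 0.1)  # noise variance, shape [batch size, 1]
llr = demapper(y, no)      # [8, num_symbols_per_codeword, num_bits_per_symbol]
```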
**Trainable End-to-end System: Conventional Training**
```python
import tensorflow_model_optimization as tfmot

class E2ESystemConventionalTraining(Model):

    def __init__(self, training):
        super().__init__()

        self._training = training

        ################
        ## Transmitter
        ################
        self._binary_source = BinarySource()
        # To reduce the computational complexity of training, the outer code is not used when training,
        # as it is not required
        if not self._training:
            # num_bits_per_symbol is required for the interleaver
            self._encoder = LDPC5GEncoder(k, n, num_bits_per_symbol)
        # Trainable constellation
        # We initialize a custom constellation with QAM points
        qam_points = Constellation("qam", num_bits_per_symbol).points
        self.constellation = Constellation("custom",
                                           num_bits_per_symbol,
                                           points=qam_points,
                                           normalize=True,
                                           center=True)
        # To make the constellation trainable, we need to create separate
        # variables for the real and imaginary parts
        self.points_r = self.add_weight(shape=qam_points.shape,
                                        initializer="zeros")
        self.points_i = self.add_weight(shape=qam_points.shape,
                                        initializer="zeros")
        self.points_r.assign(tf.math.real(qam_points))
        self.points_i.assign(tf.math.imag(qam_points))
        self._mapper = Mapper(constellation=self.constellation)

        ################
        ## Channel
        ################
        self._channel = AWGN()

        ################
        ## Receiver
        ################
        # We use the previously defined neural network for demapping
        self._demapper = NeuralDemapper()
        # To reduce the computational complexity of training, the outer code is not used when training,
        # as it is not required
        if not self._training:
            self._decoder = LDPC5GDecoder(self._encoder, hard_out=True)

        ################
        ## Loss function
        ################
        if self._training:
            self._bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

    def call(self, batch_size, ebno_db):

        # Set the constellation points equal to a complex tensor constructed
        # from two real-valued variables
        points = tf.complex(self.points_r, self.points_i)
        self.constellation.points = points

        # If `ebno_db` is a scalar, a tensor with shape [batch size] is created as it is what is expected by some layers
        if len(ebno_db.shape) == 0:
            ebno_db = tf.fill([batch_size], ebno_db)
        no = ebnodb2no(ebno_db, num_bits_per_symbol, coderate)
        no = expand_to_rank(no, 2)

        ################
        ## Transmitter
        ################
        # Outer coding is only performed if not training
        if self._training:
            c = self._binary_source([batch_size, n])
        else:
            b = self._binary_source([batch_size, k])
            c = self._encoder(b)
        # Modulation
        x = self._mapper(c) # x [batch size, num_symbols_per_codeword]

        ################
        ## Channel
        ################
        y = self._channel(x, no) # [batch size, num_symbols_per_codeword]

        ################
        ## Receiver
        ################
        llr = self._demapper(y, no)
        llr = tf.reshape(llr, [batch_size, n])

        # If training, outer decoding is not performed and the BCE is returned
        if self._training:
            loss = self._bce(c, llr)
            return loss
        else:
            # Outer decoding
            b_hat = self._decoder(llr)
            return b, b_hat # Ground truth and reconstructed information bits returned for BER/BLER computation
```
```python
# Instantiate and train the end-to-end system
model = E2ESystemConventionalTraining(training=True)
conventional_training(model)

# Save weights
save_weights(model, model_weights_path_conventional_training)
```
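For reference, `conventional_training` and `save_weights` are the helper functions defined earlier in the Sionna tutorial. The training loop looks roughly like this (a simplified sketch, assuming the tutorial's globals `training_batch_size`, `num_training_iterations_conventional`, `ebno_db_min`, and `ebno_db_max`):

```python
def conventional_training(model):
    # Optimizer used to apply the gradients
    optimizer = tf.keras.optimizers.Adam()
    for i in range(num_training_iterations_conventional):
        # Sample a batch of SNRs
        ebno_db = tf.random.uniform(shape=[training_batch_size],
                                    minval=ebno_db_min, maxval=ebno_db_max)
        # Forward pass; in training mode the model returns the BCE loss directly
        with tf.GradientTape() as tape:
            loss = model(training_batch_size, ebno_db)
        # Backward pass and weight update
        grads = tape.gradient(loss, model.trainable_weights)
        optimizer.apply_gradients(zip(grads, model.trainable_weights))
```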
**Quantisation attempt of the model**
This is where my question comes in:
```python
import pickle

# Function to apply PTQ
def apply_ptq(model):
    quantized_model = tfmot.quantization.keras.quantize_model(model)
    return quantized_model

def save_quantized_weights(model, model_weights_path):
    weights = model.get_weights()
    with open(model_weights_path, 'wb') as f:
        pickle.dump(weights, f)

# Apply PTQ to the model after training
quantized_model = apply_ptq(model)

# Save weights for the quantized model
save_quantized_weights(quantized_model, model_weights_path_quantized)
```
Error:

```
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-119-c1cbbb9e0680> in <cell line: 0>()
      8         pickle.dump(weights, f)
      9 # Apply PTQ to the model after training
---> 10 quantized_model = apply_ptq(model)
     11
     12 # Save weights for the quantized model

1 frames
/usr/local/lib/python3.11/dist-packages/tensorflow_model_optimization/python/core/quantization/keras/quantize.py in quantize_model(to_quantize, quantized_layer_name_prefix)
    133         and to_quantize._is_graph_network
    134     ):  # pylint: disable=protected-access
--> 135       raise ValueError(
    136           '`to_quantize` can only either be a keras Sequential or '
    137           'Functional model.'

ValueError: `to_quantize` can only either be a keras Sequential or Functional model.
```
This is the error that gets thrown, and I'm really stuck on it. I would really appreciate your help. Thanks a lot in advance!
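From the traceback, my understanding is that `tfmot.quantization.keras.quantize_model` only accepts Sequential or Functional Keras models, while `E2ESystemConventionalTraining` (and the `NeuralDemapper` inside it) are subclassed models. One workaround I have been considering is to quantise only the demapper, rebuilt as a Functional model with the trained weights copied over. This is a minimal sketch under my own assumptions; the helper `build_functional_demapper` and the weight-copying approach are not from the tutorial:

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

def build_functional_demapper(trained_demapper):
    # Hypothetical helper: rebuild the subclassed NeuralDemapper as a
    # Functional model. The input shape [num_symbols_per_codeword, 3]
    # matches the stacked (real, imag, no_db) features built in call()
    inputs = tf.keras.Input(shape=(num_symbols_per_codeword, 3))
    x = tf.keras.layers.Dense(128, 'relu')(inputs)
    x = tf.keras.layers.Dense(128, 'relu')(x)
    outputs = tf.keras.layers.Dense(num_bits_per_symbol, None)(x)
    functional_demapper = tf.keras.Model(inputs, outputs)
    # Copy the trained weights; the Dense layer order matches the subclassed version
    functional_demapper.set_weights(trained_demapper.get_weights())
    return functional_demapper

functional_demapper = build_functional_demapper(model._demapper)
# quantize_model should now accept the input, since it is a Functional model
quantized_demapper = tfmot.quantization.keras.quantize_model(functional_demapper)
```

Note that, as far as I understand, `quantize_model` inserts fake-quantisation nodes intended for quantisation-aware training; for a pure post-training flow, converting the Functional demapper with the TFLite converter (`tf.lite.TFLiteConverter.from_keras_model`) seems to be the more common route. Would this be a reasonable approach?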