Accuracy failure #604
ntsakoulis started this conversation in General
Replies: 1 comment · 1 reply
-
@ntsakoulis What is the accuracy of the QKeras model compared to the hls4ml model?
1 reply
-
I am comparing the Python results from model.predict(test_set) with the files csim_results.log and rtl_cosim_results.log. The two log files agree with each other, but both differ significantly from the Python results. I use MAE as the accuracy metric.
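For reference, a minimal sketch of that file comparison, assuming the saved Python predictions and the csim log each hold one output value per line; the prediction filename is taken from the yml below, while the tb_data location for the log is an assumption that may differ between hls4ml versions:

import numpy as np

# Keras/QKeras predictions saved from model.predict(test_set); filename from the yml config
y_py = np.loadtxt('predicted_scaled_Q_A8.dat')
# C-simulation output written by the hls4ml testbench; this path is an assumption
y_csim = np.loadtxt('my-hls-q_ann8/tb_data/csim_results.log')

# mean absolute error between the Python predictions and the csim output
print('MAE python vs csim:', np.mean(np.abs(y_py.ravel() - y_csim.ravel())))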
from tensorflow import keras
from tensorflow.keras import layers
from qkeras import QDense, QActivation, quantized_bits

inputs = keras.Input(shape=(n_features,))  # n_features is a placeholder; the input shape is not given in the post
x = layers.Flatten()(inputs)
x = QDense(64, kernel_quantizer=quantized_bits(6, 2, 1), bias_quantizer=quantized_bits(6, 2, 1))(x)
x = QActivation("quantized_relu(6,0)")(x)
x = layers.Dropout(0.3)(x)
x = QDense(64, kernel_quantizer=quantized_bits(6, 2, 1), bias_quantizer=quantized_bits(6, 2, 1))(x)
x = QActivation("quantized_tanh(6,2)")(x)
x = QDense(64, kernel_quantizer=quantized_bits(6, 2, 1), bias_quantizer=quantized_bits(6, 2, 1))(x)
x = QActivation("quantized_tanh(6,2)")(x)
x = layers.Dropout(0.3)(x)
x = QDense(64, kernel_quantizer=quantized_bits(6, 2, 1), bias_quantizer=quantized_bits(6, 2, 1))(x)
x = QActivation("quantized_tanh(6,2)")(x)
x = layers.Dropout(0.3)(x)
x = QDense(1, kernel_quantizer=quantized_bits(6, 2, 1), bias_quantizer=quantized_bits(6, 2, 1))(x)
outputs = QActivation("quantized_tanh(6,2)")(x)
model = keras.Model(inputs, outputs)
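For reference, a minimal sketch of how the QKeras-vs-hls4ml accuracy asked about above could be measured directly with the hls4ml Python API, mirroring the settings from the yml configuration below; test_set is a placeholder for the poster's data and granularity='model' is an assumption:

import numpy as np
import hls4ml

# start from an auto-generated config and mirror the yml settings below
config = hls4ml.utils.config_from_keras_model(model, granularity='model')
config['Model']['Precision'] = 'ap_fixed<14,6>'
config['Model']['ReuseFactor'] = 4
config['Model']['Strategy'] = 'Resource'

hls_model = hls4ml.converters.convert_from_keras_model(
    model, hls_config=config, output_dir='my-hls-q_ann8', io_type='io_stream')
hls_model.compile()  # C++ emulation only; no Vivado synthesis needed for this check

y_qkeras = model.predict(test_set)
y_hls = hls_model.predict(np.ascontiguousarray(test_set))
print('MAE QKeras vs hls4ml:', np.mean(np.abs(y_qkeras.ravel() - y_hls.ravel())))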
KerasJson: model_Q_ANN_8.json
KerasH5: model_Q_ANN_8_weights.h5
OutputDir: my-hls-q_ann8
ProjectName: project_Q_ANN8
InputData: data_standarized.dat
OutputPredictions: predicted_scaled_Q_A8.dat
XilinxPart: xczu7ev-ffvc1156-2-e
ClockPeriod: 5ns
IOType: io_stream # options: io_stream/io_parallel
HLSConfig:
  Model:
    Precision: ap_fixed<14,6>
    ReuseFactor: 4
    Strategy: Resource # options: Latency/Resource
The first code block above is my model definition and the YAML is my configuration .yml file.
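Continuing from the Python-API sketch above, a hedged sketch of the build step that produces the two log files being compared; csim, synth and cosim are hls4ml's standard build flags, and the log location in the comment is an assumption that may vary between versions:

# run C simulation, HLS synthesis and RTL co-simulation for the converted model
hls_model.build(csim=True, synth=True, cosim=True)
# csim_results.log and rtl_cosim_results.log are then expected under
# my-hls-q_ann8/tb_data/ (the OutputDir from the yml above)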
thanks,