Error when Converting Quantized ONNX Model to HLS with hls4ml: Unsupported Operation Type Quant #970
dianamahdi asked this question in Q&A (unanswered)
Replies: 1 comment
Try #832
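Expanding on the pointer to #832: the `Quant` op is a QONNX custom operator (typical of a Brevitas export), which the plain ONNX frontend does not recognize. The usual route is to clean the model with the qonnx package first and then feed the wrapped model to a QONNX-aware hls4ml release. A hedged sketch, assuming qonnx and a recent hls4ml are installed; the names `cleanup_model` and `convert_from_onnx_model` are taken from the qonnx/hls4ml docs and should be checked against your installed versions:

```python
# Sketch of the QONNX preprocessing route (API names assumed from the
# qonnx and hls4ml docs; verify against your installed versions).
try:
    from qonnx.core.modelwrapper import ModelWrapper
    from qonnx.util.cleanup import cleanup_model
    import hls4ml
    HAVE_DEPS = True
except ImportError:
    HAVE_DEPS = False  # qonnx/hls4ml not installed; the sketch below stays inert


def convert_qonnx(onnx_path, output_dir='qdnn-dsd_accelerator/'):
    """Clean a QONNX model and hand it to hls4ml's ONNX frontend."""
    model = ModelWrapper(onnx_path)      # wrap the exported .onnx file
    model = cleanup_model(model)         # fold constants, tidy Quant nodes
    config = hls4ml.utils.config_from_onnx_model(model, granularity='name')
    return hls4ml.converters.convert_from_onnx_model(
        model,
        hls_config=config,
        output_dir=output_dir,
        backend='Vivado',
    )

# Usage (with the model from the question):
#   hls_model = convert_qonnx('cybsec-mlp-ready.onnx')
#   hls_model.compile()
```

If your installed hls4ml predates QONNX support, upgrading is the first step; the preprocessing alone will not make an older frontend accept `Quant` nodes.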
Hello,
I'm working on converting a quantized ONNX model to HLS for deployment on an FPGA using hls4ml. However, I encounter an error related to unsupported operation types during the conversion process. Below is the relevant code snippet and the error message:
import hls4ml
model_onnx = "cybsec-mlp-ready.onnx"
hls_config = hls4ml.utils.config_from_onnx_model(model_onnx, granularity='name')
cfg = hls4ml.converters.create_config(backend='Vivado')
cfg['ClockPeriod'] = 10
cfg['HLSConfig'] = hls_config
cfg['OnnxModel'] = model_onnx
cfg['Backend'] = 'VivadoAccelerator'
cfg['OutputDir'] = 'qdnn-dsd_accelerator/'
cfg['Board'] = 'pynq-z2'
model_hls = hls4ml.converters.onnx_to_hls(cfg)
model_hls.compile()
Interpreting Model ...
Output layers: ['/8/Softmax']
Input shape: [None, 16]
Topology:
Exception Traceback (most recent call last)
Cell In[3], line 13
10 cfg['OutputDir'] = 'qdnn-dsd_accelerator/'
11 cfg['Board'] = 'pynq-z2'
---> 13 model_hls = hls4ml.converters.onnx_to_hls(cfg)
15 model_hls.compile()
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\hls4ml\converters\onnx_to_hls.py:281, in onnx_to_hls(config)
279 for node in graph.node:
280 if node.op_type not in supported_layers:
--> 281 raise Exception(f'ERROR: Unsupported operation type: {node.op_type}')
283 # If not the first layer then input shape is taken from last layer's output
284 if layer_counter != 0:
Exception: ERROR: Unsupported operation type: Quant
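For reference, the failing check in `onnx_to_hls` (visible in the traceback above) boils down to a membership test over node op types: any `op_type` outside the converter's supported set raises. A minimal pure-Python sketch of that check; the `supported_layers` subset here is illustrative, not hls4ml's actual list:

```python
# Mirror of the loop in hls4ml's onnx_to_hls: walk graph nodes and flag
# any op_type outside the supported set. 'Quant' is a QONNX/Brevitas
# custom op, so the plain ONNX frontend rejects it.
supported_layers = {'MatMul', 'Gemm', 'Relu', 'Softmax', 'Add'}  # illustrative subset


def check_ops(op_types):
    """Return the unsupported ops instead of raising, for inspection."""
    return [op for op in op_types if op not in supported_layers]


# A Brevitas-exported MLP interleaves Quant nodes with the math ops:
print(check_ops(['Quant', 'MatMul', 'Relu', 'Quant', 'MatMul', 'Softmax']))
# -> ['Quant', 'Quant']
```

Listing the offending ops this way (e.g. with `onnx.load` and `model.graph.node`) confirms whether `Quant` is the only blocker before reaching for the QONNX tooling.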