Add functionality to use granularity option also for pytorch models #1051
Changes from 12 commits: 463efc7, c9dca1b, bb91de3, 73110ea, 8f89188, f741e75, 3cd27f1, b7faed3, f7c69e4, 2fc57ae, fd35b89, e76f381, c1d8c5e, 546348c
@@ -6,3 +6,4 @@ sphinx_github_changelog
 sphinx_rtd_theme
 tensorflow<=2.15
 toposort>=1.5.0
+torch
@@ -269,6 +269,7 @@ def make_layer_config(layer):
 def config_from_pytorch_model(
     model,
+    input_shape,
     granularity='model',
     backend=None,
     default_precision='ap_fixed<16,6>',
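For orientation, here is a minimal sketch of how the extended signature would be called; the model, shapes, and precision are illustrative, not taken from this PR:

```python
import torch.nn as nn

import hls4ml

model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 5))

# input_shape is the new required argument; per the review discussion
# below, the batch dimension is not included.
config = hls4ml.utils.config_from_pytorch_model(
    model,
    input_shape=(16,),
    granularity='name',
    default_precision='ap_fixed<16,6>',
)
```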
@@ -284,6 +285,7 @@ def config_from_pytorch_model(
     Args:
         model: PyTorch model
+        input_shape (list): The shape of the input tensor. First element is the batch size, needs to be None
         granularity (str, optional): Granularity of the created config. Defaults to 'model'.
             Can be set to 'model', 'type' and 'name'.

Review comment: We have a nice opportunity here to get rid of the first None.

Reply: Good idea, that was always very ugly. Done.

Review comment: In general I think we need to be careful about the batch dimensions. For ONNX it's usually just 1 (but can really be any value, e.g. 10), not None. I think we try to get rid of the batch dimension pretty quickly.
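To make the three granularity levels concrete, here is a sketch of the resulting dictionary layouts; the top-level keys follow this diff, while 'fc1' and the nested values are hypothetical placeholders:

```python
# Shape of the generated config at each granularity (values illustrative):
model_level = {'Model': {'Precision': 'ap_fixed<16,6>', 'Strategy': 'Latency'}}
type_level = {'LayerType': {'Dense': {'Precision': {...}}, 'Softmax': {...}}}
name_level = {'LayerName': {'fc1': {'Precision': {...}}, 'softmax': {...}}}
```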
@@ -321,6 +323,77 @@ def config_from_pytorch_model(
     model_config['Strategy'] = 'Latency'

     config['Model'] = model_config
+    config['PytorchModel'] = model
+    config['InputShape'] = input_shape

Review comment: I would add a check if the passed input shape makes sense. Later on, if it doesn't, it's not easy to figure out why.

Reply: I have been thinking about that. In PyTorch it seems like the exact input shape is not determined by the model architecture, so it's not possible to completely infer it during parsing. We can still check some general features, like the number of dimensions of the input, based on the type of the first layer. I'll implement something along those lines.

Reply: I actually meant much simpler, just check if the input shape is a list/iterable and not None.

Reply: Ah, got it. I changed it to enforce that the input shape is a tuple for a single input or a list of tuples for multiple inputs.
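A sketch of the shape check described in the last reply — a tuple for a single input, a list of tuples for multiple inputs. The helper name and message are assumptions, not code from this PR:

```python
def _validate_input_shape(input_shape):
    """Normalize the user-supplied shape(s) to a list of tuples."""
    if isinstance(input_shape, tuple):
        return [input_shape]  # single input, e.g. (16,) or (3, 32, 32)
    if isinstance(input_shape, list) and all(isinstance(s, tuple) for s in input_shape):
        return input_shape  # multiple inputs, one tuple each
    raise Exception('input_shape must be a tuple, or a list of tuples for multiple inputs')
```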
+    if granularity.lower() not in ['model', 'type', 'name']:
+        raise Exception(
+            f'Invalid configuration granularity specified, expected "model", "type" or "name" got "{granularity}"'
+        )
+
+    if backend is not None:
+        backend = hls4ml.backends.get_backend(backend)
+
+    (
+        layer_list,
+        _,
+    ) = hls4ml.converters.parse_pytorch_model(config)
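make_layer_config below consumes the entries of layer_list; judging from the accesses in this diff, each entry is a plain dictionary along these lines (the values are invented for illustration):

```python
layer = {
    'class_name': 'Dense',               # resolved via hls4ml.model.layers.layer_map
    'name': 'fc1',                       # becomes the key under config['LayerName']
    'config': {'activation': 'softmax'}, # optional; 'softmax' is special-cased in make_layer_config
}
```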
+    def make_layer_config(layer):
+        cls_name = layer['class_name']
+        if 'config' in layer.keys():
+            if 'activation' in layer['config'].keys():
+                if layer['config']['activation'] == 'softmax':
+                    cls_name = 'Softmax'
+
+        layer_cls = hls4ml.model.layers.layer_map[cls_name]
+        if backend is not None:
+            layer_cls = backend.create_layer_class(layer_cls)
+
+        layer_config = {}
+
+        config_attrs = [a for a in layer_cls.expected_attributes if a.configurable]
+        for attr in config_attrs:
+            if isinstance(attr, hls4ml.model.attributes.TypeAttribute):
+                precision_cfg = layer_config.setdefault('Precision', {})
+                name = attr.name
+                if name.endswith('_t'):
+                    name = name[:-2]
+                if attr.default is None:
+                    precision_cfg[name] = default_precision
+                else:
+                    precision_cfg[name] = str(attr.default)
+            else:
+                if attr.default is not None:
+                    layer_config[attr.config_name] = attr.default
+
+        if layer['class_name'] == 'Input':
+            dtype = layer['config']['dtype']
+            if dtype.startswith('int') or dtype.startswith('uint'):
+                typename = dtype[: dtype.index('int') + 3]
+                width = int(dtype[dtype.index('int') + 3 :])
+                layer_config['Precision']['result'] = f'ap_{typename}<{width}>'
+            # elif bool, q[u]int, ...
+
+        return layer_config
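Tracing the Input branch above with a concrete dtype shows how an integer input is mapped to an arbitrary-precision type; the dtype value here is illustrative:

```python
dtype = 'uint8'                               # from layer['config']['dtype']
typename = dtype[: dtype.index('int') + 3]    # 'uint'
width = int(dtype[dtype.index('int') + 3 :])  # 8
print(f'ap_{typename}<{width}>')              # -> ap_uint<8>
```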
+    if granularity.lower() == 'type':
+        type_config = {}
+        for layer in layer_list:
+            if layer['class_name'] in type_config:
+                continue
+            layer_config = make_layer_config(layer)
+            type_config[layer['class_name']] = layer_config
+
+        config['LayerType'] = type_config
+
+    elif granularity.lower() == 'name':
+        name_config = {}
+        for layer in layer_list:
+            layer_config = make_layer_config(layer)
+            name_config[layer['name']] = layer_config
+
+        config['LayerName'] = name_config
+
     return config
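Continuing the earlier usage sketch, the per-type and per-name dictionaries built here are what users would then tune before conversion; 'Dense' and the values below are illustrative:

```python
config = hls4ml.utils.config_from_pytorch_model(model, input_shape=(16,), granularity='type')

# All layers of the same class share one entry, per the dispatch above.
config['LayerType']['Dense']['ReuseFactor'] = 4
config['LayerType']['Dense']['Precision']['weight'] = 'ap_fixed<8,3>'
```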
@@ -32,6 +32,7 @@ install_requires =
     tabulate
     tensorflow
     tensorflow-model-optimization<=0.7.5
+    torch
 python_requires = >=3.10
 include_package_data = True
 scripts = scripts/hls4ml

Review comment: Do we need this? This will force all installations of hls4ml to have both tf and torch, and since these have huge dependencies themselves it may bork users' environments with the partial updates. Ideally we would have hls4ml with optional extras installed to support the various frontends (e.g., …). Also, if we go down this route, we should remove the checks for the existence of pytorch on the system in …

Reply: I think we can get away with not making torch a requirement by moving the …
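The pattern the last reply alludes to — deferring the torch import so that hls4ml itself does not have to depend on it — might look roughly like this; the function body and message are illustrative, not the PR's code:

```python
def parse_pytorch_model(config):
    # Import torch lazily: only users of the PyTorch frontend need it,
    # so a missing installation fails here with a clear message.
    try:
        import torch  # noqa: F401
    except ImportError:
        raise Exception('You need to install PyTorch to use the PyTorch converter.')
    ...
```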
Review comment: I think we shouldn't have a default of None for the input shape, because this will propagate further and then lead to errors that won't make it clear what the original cause is.

Reply: Good point, we're now raising an exception if that parameter is not found. No point in continuing.
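The fix described in the reply, failing fast instead of defaulting to None, can be a simple guard at the top of config_from_pytorch_model; the message text is an assumption:

```python
if input_shape is None:
    raise Exception('input_shape must be provided; it cannot be inferred from a PyTorch model alone.')
```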