Keras v3 Support #1116


Status: Open · wants to merge 30 commits into main

Conversation

calad0i
Contributor

@calad0i calad0i commented Nov 8, 2024

Description

Add a Keras v3-specific object-based parser and some layer handlers (no h5 or json loading supported). The current Keras parser doesn't work with v3 functional models in general.

Type of change

  • New feature (non-breaking change which adds functionality)

Tests

test/pytest/test_keras_v3_api.py

Test Configuration:

Requires keras>=3.0. The whole module is skipped if this requirement is not met.

Checklist

  • I have read the guidelines for contributing.
  • I have commented my code, particularly in hard-to-understand areas.
  • I have made corresponding changes to the documentation.
  • My changes generate no new warnings.
  • I have installed and run pre-commit on the files I edited or added.
  • I have added tests that prove my fix is effective or that my feature works.

@jmitrevs jmitrevs added this to the v1.1.0 milestone Nov 8, 2024
@calad0i
Contributor Author

calad0i commented Nov 8, 2024

This PR will be rebased after its prerequisites are merged to resolve the conflicts.

@calad0i calad0i added the please test Trigger testing by creating local PR branch label Mar 11, 2025
@vloncar vloncar self-requested a review March 11, 2025 21:40
@vloncar vloncar self-assigned this Mar 11, 2025
@bo3z bo3z modified the milestones: v1.1.0, v1.2.0 Apr 8, 2025
if keras.__version__ > '3.0':
layer_list, *_ = hls4ml.converters.parse_keras_v3_model(model)
else:
model_arch = json.loads(model.to_json())
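A side note on the version check above: comparing version strings lexicographically works for current releases but is fragile in general. A minimal, dependency-free sketch of a more robust check (the helper name is hypothetical, for illustration only):

```python
def is_keras_v3(version: str) -> bool:
    """Return True for Keras >= 3, comparing the parsed major version.

    A plain string comparison such as keras.__version__ > '3.0' misorders
    versions lexicographically (e.g. '10.0' < '3.0' as strings), so
    comparing the parsed major component is safer.
    """
    return int(version.split('.')[0]) >= 3

print(is_keras_v3('3.5.0'))   # True
print(is_keras_v3('2.14.1'))  # False
```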
Contributor

I don't know if the case was used in practice, but the old logic allows model to be a dict; only if it's not a dict does it fall back to model_arch = json.loads(model.to_json()). Does the logic here work if model is a dict?
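A toy sketch of the old v2 dispatch the comment describes: accept either an already-parsed dict or an object exposing to_json(). The helper and the FakeModel class are hypothetical stand-ins, not the actual hls4ml code.

```python
import json

def get_model_arch(model):
    # Old-style dispatch: pass a dict through unchanged, otherwise
    # serialize the model object and parse the resulting JSON.
    if isinstance(model, dict):
        return model
    return json.loads(model.to_json())

class FakeModel:
    # Minimal stand-in for a Keras model exposing to_json().
    def to_json(self):
        return json.dumps({'class_name': 'Functional'})

print(get_model_arch({'class_name': 'Sequential'}))  # dict passes through
print(get_model_arch(FakeModel()))                   # object is serialized
```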

Contributor

A dict is a representation of a JSON document. We work with a dict that was the JSON form of a model, whether produced by us or, in the old days, by the user.

Contributor Author

model.get_config has slightly different behavior: objects may be passed through as-is (not serialized). The v2 converter assumes a dict describing a fully serialized model config, so a serialize+deserialize round-trip is used here.
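A toy illustration of why the round-trip normalizes a config: get_config-style dicts may embed live objects, while the v2 converter expects plain JSON-compatible values throughout. The DTypePolicy stand-in and the serialize helper are hypothetical, for illustration only.

```python
import json

class DTypePolicy:
    # Hypothetical stand-in for an object that get_config passes as-is.
    name = 'float32'

raw_config = {'units': 16, 'dtype': DTypePolicy()}  # object embedded as-is

def serialize(cfg):
    # Hypothetical serializer: replace known objects with plain values.
    return {k: (v.name if isinstance(v, DTypePolicy) else v)
            for k, v in cfg.items()}

# The serialize + json round-trip yields a fully JSON-compatible dict,
# which is what a v2-style converter expects to consume.
normalized = json.loads(json.dumps(serialize(raw_config)))
print(normalized)  # {'units': 16, 'dtype': 'float32'}
```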

@calad0i calad0i added please test Trigger testing by creating local PR branch and removed please test Trigger testing by creating local PR branch labels Apr 16, 2025
@calad0i calad0i added please test Trigger testing by creating local PR branch and removed please test Trigger testing by creating local PR branch labels Apr 18, 2025
@calad0i calad0i mentioned this pull request May 5, 2025
Contributor

@vloncar vloncar left a comment

Cosmetics mostly. Run ruff on the changed files before merging.


The ``data_format='channels_first'`` parameter of Keras layers is supported, but not extensively tested. All HLS implementations in ``hls4ml`` are based on ``channels_last`` data format and need to be converted to that format before the HLS code can be emitted. We encourage users of ``channels_first`` to report their experiences to developers on GitHub.
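The conversion described above can be sketched on shape tuples alone. This is a toy illustration of the channels_first to channels_last reordering, not the actual hls4ml implementation (batch dimension excluded):

```python
def to_channels_last(shape_chw):
    # Move the leading channel dimension to the end:
    # (C, H, W) -> (H, W, C), and likewise for 1D or 3D spatial shapes.
    channels, *spatial = shape_chw
    return (*spatial, channels)

print(to_channels_last((3, 32, 32)))  # (32, 32, 3)
```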


* `QKeras <https://github.com/fastmachinelearning/qkeras>`_
The equivalent QKeras API and its quantizers are also supported by ``hls4ml``. QKeras is not compatible with Keras v3.
Contributor

Mention the alternatives (from Marius, HGQ2, via quantizers lib etc)

* `TensorFlow <https://pypi.org/project/tensorflow/>`_ (version 2.8 to 2.14) and `QKeras <https://pypi.org/project/qkeras/>`_ are required by the Keras converter. One may want to install newer versions of QKeras from GitHub. Newer versions of TensorFlow can be used, but QKeras and hls4ml do not currently support Keras v3.
The ``hls4ml`` library requires Python 3.10 or later, and depends on a number of Python packages and external tools for synthesis and simulation. Python dependencies are automatically managed by ``pip`` or ``conda``.

The following Python packages are all optional and are only required if you intend to use the corresponding converter. Only install the packages you need.
Contributor

the advice "only install packages you need" was already given above

@@ -9,6 +9,7 @@
from hls4ml.converters.keras_to_hls import get_supported_keras_layers # noqa: F401
from hls4ml.converters.keras_to_hls import parse_keras_model # noqa: F401
from hls4ml.converters.keras_to_hls import keras_to_hls, register_keras_layer_handler
from hls4ml.converters.keras_v3_to_hls import parse_keras_v3_model # noqa: F401
Contributor

rename keras_to_hls to keras_v2_to_hls

# EinsumDense config
params = default_params.copy()
params['strategy'] = strategy
params['n_free0'] = node.attributes.attributes['n_free0']
Contributor

use either node.attributes[] or node.get_attr()
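The review's point is that the two access styles should be used consistently. A toy stand-in for the node attribute container (not the actual hls4ml classes) showing why they are interchangeable:

```python
class Node:
    """Toy stand-in: attributes is a plain dict, and get_attr() is a
    thin wrapper over it with an optional default."""

    def __init__(self, attrs):
        self.attributes = dict(attrs)

    def get_attr(self, key, default=None):
        return self.attributes.get(key, default)

node = Node({'n_free0': 4})
# Both spellings reach the same value; mixing them in one block is
# what the review flags as inconsistent style.
print(node.attributes['n_free0'])  # 4
print(node.get_attr('n_free0'))    # 4
```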

static const unsigned strategy;
static const unsigned reuse_factor;
static const unsigned multiplier_limit;
static const bool store_weights_in_bram = false; // NOT USED
Contributor

remove it then

@@ -0,0 +1,516 @@
import math
Contributor

keep for now for internal testing (as it is not triggered by the CI), but will be abandoned in the future in favor of spread out tests

def get_io_tensors(layer: 'keras.Layer', node_whitelist: set[int] | None = None):
'''Given a keras layer, return a list of tuples of input and output
tensors. If the layer is called only once (i.e., no shared layers),
the list will contain only one tuple.
Contributor

add an example where this will happen
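A minimal sketch of the shared-layer case the review asks about, using a pure-Python stand-in rather than a real Keras layer: the same layer object is called twice, so it accumulates two (input, output) records and the documented function would return two tuples.

```python
class SharedLayer:
    """Toy stand-in for a Keras layer that records one call node per
    invocation, mimicking the shared-layer situation."""

    def __init__(self):
        self._call_nodes = []

    def __call__(self, x):
        y = f'out({x})'
        self._call_nodes.append((x, y))
        return y

def get_io_tensors(layer):
    # Mirrors the documented behavior: one (input, output) tuple per call.
    return list(layer._call_nodes)

dense = SharedLayer()
a = dense('t0')
b = dense('t1')  # shared: the same layer object called a second time
print(get_io_tensors(dense))  # two tuples, one per call
```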

return None
return handler(layer, inp_tensors, out_tensors)

def v2_call(
Contributor

maybe add a comment on the rough list of layers that use this feature as this is to be deprecated

Contributor Author

Can't really think of useful layers not yet covered. Haven't seen the fallback warning for a while.

@calad0i calad0i requested a review from vloncar May 27, 2025 14:44
@calad0i calad0i added please test Trigger testing by creating local PR branch and removed please test Trigger testing by creating local PR branch labels May 27, 2025
Labels: please test (Trigger testing by creating local PR branch)
4 participants