Keras v3 Support #1116

Merged
32 commits, merged May 30, 2025

Commits:
47f3e3c
add keras v3 object parser
calad0i Mar 7, 2025
5debe71
add keras v3 layer handlers
calad0i Mar 7, 2025
755be89
expose kv3 parser to config interface
calad0i Mar 7, 2025
a1c2227
add kv3 converter test
calad0i Mar 8, 2025
067ef9e
einsumdense and einsum
calad0i Mar 7, 2025
d8bb729
add einsum templates
calad0i Mar 8, 2025
303db72
einsumdense test
calad0i Mar 8, 2025
56c0731
support kv3 parsed batchnorm
calad0i Mar 7, 2025
fe3fcd0
fix einsum/einsum dense regression issue
calad0i Mar 8, 2025
54a297e
preemptive distributed_arithmetic flag for einsum ops
calad0i Mar 11, 2025
3509666
update doc for kv3
calad0i Mar 11, 2025
c81028e
more documentation
calad0i Mar 11, 2025
eed6330
backport validate einsum function
calad0i Apr 16, 2025
cda903e
docstring style
calad0i Apr 16, 2025
eccde4e
quote format
calad0i Apr 16, 2025
6dfeb99
restore example-models version
calad0i Apr 18, 2025
8284757
pre-commit update
calad0i Apr 18, 2025
35e94d0
Merge branch 'main' into keras-v3
calad0i Apr 18, 2025
e5ad92c
kv3 handler update
calad0i May 27, 2025
6aec7f6
force keras>=3.10
calad0i May 27, 2025
64261aa
isolate merge handlers
calad0i May 27, 2025
b8ed033
rm abomination
calad0i May 27, 2025
009ae8e
mv xpose config gen to utils
calad0i May 27, 2025
3ea3490
attributes.attributes -> attributes
calad0i May 27, 2025
5d4bdfe
isolate keras v2 and v3 to hls
calad0i May 27, 2025
150a3f6
update tests for api changes
calad0i May 27, 2025
ec914d1
update docs
calad0i May 27, 2025
312328e
mv einops to vivado backend, rm unused args
calad0i May 27, 2025
a15a353
Merge branch 'main' into keras-v3
calad0i May 27, 2025
9c585aa
post merge fix
calad0i May 27, 2025
8153522
quality-of-life changes
calad0i May 30, 2025
c4733b2
fix some qol changes
calad0i May 30, 2025
18 changes: 13 additions & 5 deletions docs/frontend/keras.rst
@@ -1,11 +1,19 @@
================
Keras and QKeras
================
================================
Keras and its quantized variants
================================

Keras and the quantization library QKeras are well supported in ``hls4ml``. Currently, Keras v2 (``tf.keras``) is the preferred version, and future versions of ``hls4ml`` will expand support for Keras v3. The frontend is based on parsing the serialized JSON representation of the model.
Keras and the quantization library QKeras are well supported in ``hls4ml``. Both Keras v2 (``tf.keras``) and the new Keras v3 are supported. While the Keras v2 support is based on parsing the serialized JSON representation of the model, the Keras v3 support uses direct model inspection.
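As an illustration (an editor's sketch, not part of this diff), conversion goes through the usual ``hls4ml`` entry points regardless of the Keras version; the layer sizes, output directory and ``backend='Vitis'`` below are only placeholders:

.. code-block:: python

    import keras  # Keras v2 (tf.keras) or Keras v3, whichever frontend is installed
    import hls4ml

    model = keras.Sequential(
        [
            keras.layers.Input(shape=(16,)),
            keras.layers.Dense(8, activation='relu'),
            keras.layers.Dense(2, activation='softmax'),
        ]
    )

    # Generate a per-layer hls4ml configuration and convert the model
    config = hls4ml.utils.config_from_keras_model(model, granularity='name')
    hls_model = hls4ml.converters.convert_from_keras_model(
        model, hls_config=config, output_dir='my_hls_prj', backend='Vitis'
    )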

Currently, ``hls4ml`` can parse most Keras layers, including core layers, convolutional layers, pooling layers, recurrent layers, merging/reshaping layers and activation layers, implemented either via sequential or functional API. Notably missing are the attention and normalization layers. The equivalent QKeras API and quantizers are also supported. The ``Lambda`` layers don't save their state in the serialized format and are thus impossible to parse. In this case, the ``Lambda`` layers can be implemented as custom layers and parsed via the :ref:`Extension API`.
Currently, ``hls4ml`` can parse most Keras layers, including core layers, convolutional layers, pooling layers, recurrent layers, merging/reshaping layers and activation layers, implemented either via sequential or functional API. Notably missing are the attention and normalization layers. The ``Lambda`` layers don't save their state in the serialized format and are thus impossible to parse. In this case, the ``Lambda`` layers can be implemented as custom layers and parsed via the :ref:`Extension API`.

The ``data_format='channels_first'`` parameter of Keras layers is supported, but not extensively tested. All HLS implementations in ``hls4ml`` are based on ``channels_last`` data format and need to be converted to that format before the HLS code can be emitted. We encourage users of ``channels_first`` to report their experiences to developers on GitHub.


* `QKeras <https://github.com/fastmachinelearning/qkeras>`_
The equivalent QKeras API and its quantizers are also supported by ``hls4ml``. QKeras is not compatible with Keras v3. Currently, only HGQ2 is compatible with Keras v3 (see below).
* `HGQ <https://github.com/calad0i/HGQ>`_
The equivalent HGQ API is also supported. HGQ is not compatible with Keras v3. See `advanced/HGQ <../advanced/hgq.html>`__ for more information.
* `HGQ2 <https://github.com/calad0i/HGQ2>`_
HGQ2 is based on Keras v3. Its support in ``hls4ml`` is currently under development.

The development team of ``hls4ml`` is currently exploring options for a QKeras alternative and will provide a drop-in replacement API compatible with Keras v3.
20 changes: 16 additions & 4 deletions docs/intro/setup.rst
@@ -37,14 +37,26 @@ version can be installed directly from ``git``:
Dependencies
============

The ``hls4ml`` library requires python 3.10 or later, and depends on a number of Python packages and external tools for synthesis and simulation. Python dependencies are automatically managed
by ``pip`` or ``conda``.
.. note::
As of version 1.1.0, all frontend-specific conversion packages are optional. Only install the packages you need.

* `TensorFlow <https://pypi.org/project/tensorflow/>`_ (version 2.8 to 2.14) and `QKeras <https://pypi.org/project/qkeras/>`_ are required by the Keras converter. One may want to install newer versions of QKeras from GitHub. Newer versions of TensorFlow can be used, but QKeras and hls4ml do not currently support Keras v3.
The ``hls4ml`` library requires Python 3.10 or later, and depends on a number of Python packages and external tools for synthesis and simulation. Python dependencies are automatically managed by ``pip`` or ``conda``.

The following Python packages are all optional and are only required if you intend to use the corresponding converter.

* `Keras <https://pypi.org/project/keras/>`_ is required by the Keras converter.
* `TensorFlow <https://pypi.org/project/tensorflow/>`_ (version 2.8 to 2.14) is required by the Keras v2 converter (Keras v2 is included in TensorFlow).
* `Keras <https://pypi.org/project/keras/>`_ 3.0 or above is required by the Keras v3 converter. Keras v3 supports multiple backends for training and inference, and the conversion is not tied to any specific backend. Note that Keras v3 may **not** coexist with Keras v2 in the same Python environment.

* `ONNX <https://pypi.org/project/onnx/>`_ (version 1.4.0 and newer) is required by the ONNX converter.

* `PyTorch <https://pytorch.org/get-started>`_ package is optional. If not installed, the PyTorch converter will not be available.
* `PyTorch <https://pytorch.org/get-started>`_ is required by the PyTorch converter.

* Quantization support
* `QKeras <https://github.com/fastmachinelearning/qkeras>`_: Based on Keras v2. See `frontend/keras <../frontend/keras.html>`_ for more details.
* `HGQ <https://github.com/calad0i/HGQ>`_: Based on Keras v2. See `advanced/HGQ <../advanced/hgq.html>`_ for more details.
* `Brevitas <https://xilinx.github.io/brevitas/>`_: Based on PyTorch. See `frontend/pytorch <../frontend/pytorch.html>`_ for more details.
* `QONNX <https://github.com/fastmachinelearning/qonnx>`_: Based on ONNX. See `frontend/onnx <../frontend/onnx.html>`_ for more details.
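
Since all of these conversion dependencies are optional, one can quickly check which converters the current environment supports by probing for the corresponding import names. This is an editor's sketch using only the standard library; the mapping below simply mirrors the list above:

.. code-block:: python

    import importlib.util

    # Import names of the optional frontend dependencies listed above
    frontends = {
        'Keras v2 converter': 'tensorflow',
        'Keras v3 converter': 'keras',
        'PyTorch converter': 'torch',
        'ONNX converter': 'onnx',
    }
    for converter, module in frontends.items():
        available = importlib.util.find_spec(module) is not None
        print(f'{converter}: {"installed" if available else "not installed"}')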

Running C simulation from Python requires a C++11-compatible compiler. On Linux, a GCC C++ compiler ``g++`` is required. Any version shipped with a recent
Linux distribution should work. On macOS, the *clang*-based ``g++`` is enough. For the oneAPI backend, one must have oneAPI installed, along with the FPGA compiler,
27 changes: 0 additions & 27 deletions hls4ml/backends/fpga/fpga_backend.py
@@ -917,33 +917,6 @@ def generate_conv2d_line_buffer_fn(

return generated_code

@staticmethod
def permute_config_gen(name: str, shape: tuple[int, ...], perm: tuple[int, ...]):
"""
Generate new shape and perm_strides for a permute operation. Operates by mapping the output index
to the input index by:
- unravel the output index
- map each dimension to the corresponding stride in the input tensor, sum
The operation can be expressed as:

new_shape = tuple(shape[i] for i in perm)
strides = np.cumprod((shape[1:] + (1,))[::-1])[::-1]
perm_strides = [strides[i] for i in perm]
out[index] = inp[np.dot(np.unravel_index(index, new_shape), perm_strides)]

Args:
name (str): The name of the configuration.
shape (tuple[int, ...]): The shape of the input tensor.
perm (tuple[int, ...]): The permutation of the dimensions.

Returns:
(new_shape, perm_strides) (tuple, tuple): the output shape and permutation strides.
"""
new_shape = tuple(shape[i] for i in perm)
strides = np.cumprod((shape[1:] + (1,))[::-1])[::-1]
perm_strides = tuple(int(strides[i]) for i in perm)
return (new_shape, perm_strides)
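
For reference, the index-mapping formula in the removed docstring can be checked directly with NumPy. The snippet below is an editor's sketch (not part of the diff) verifying that unravelling the flat output index and dotting it with the permuted input strides reproduces np.transpose:

import numpy as np

shape, perm = (2, 3, 4), (2, 0, 1)
new_shape = tuple(shape[i] for i in perm)
strides = np.cumprod((shape[1:] + (1,))[::-1])[::-1]  # row-major strides of the input
perm_strides = tuple(int(strides[i]) for i in perm)

inp = np.arange(np.prod(shape)).reshape(shape)
flat_inp = inp.ravel()
out = np.empty(np.prod(new_shape), dtype=inp.dtype)
for index in range(out.size):
    # unravel the flat output index in the transposed shape, then map it back
    # to a flat input index through the permuted strides
    out[index] = flat_inp[np.dot(np.unravel_index(index, new_shape), perm_strides)]

assert np.array_equal(out.reshape(new_shape), np.transpose(inp, perm))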

@model_optimizer()
def write_hls(self, model):
self.writer.write_hls(model)
13 changes: 3 additions & 10 deletions hls4ml/backends/oneapi/passes/reshaping_templates.py
Expand Up @@ -3,6 +3,7 @@
from hls4ml.backends.oneapi.oneapi_template import StreamFunctionCallTemplate, TaskSequenceTemplate
from hls4ml.backends.template import FunctionCallTemplate, LayerConfigTemplate
from hls4ml.model.layers import Reshape, Resize, Transpose, ZeroPadding1D, ZeroPadding2D
from hls4ml.utils.transpose_utils import transpose_config_gen

# ZeroPadding templates

@@ -185,16 +186,8 @@ def format(self, node):
perm = tuple(node.get_attr('perm'))
name = f'config{node.index}'

new_shape, perm_strides = node.model.config.backend.permute_config_gen(name, shape, perm)
return transpose_config_template.format(
dims=len(shape),
N=int(np.prod(shape)),
from_shape=', '.join(str(x) for x in shape),
perm=', '.join(str(x) for x in perm),
perm_strides=', '.join(str(x) for x in perm_strides),
to_shape=', '.join(str(x) for x in new_shape),
config_name=name,
)
conf = transpose_config_gen(name, shape, perm)
return transpose_config_template.format(**conf)
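
The relocated transpose_config_gen (now in hls4ml.utils.transpose_utils) evidently returns a dict whose keys match the placeholders of transpose_config_template, which is why the fields formerly assembled inline can now be expanded with **conf. A rough, hypothetical illustration (editor's sketch; exact value formatting may differ):

# e.g. for name='config5', shape=(2, 3, 4), perm=(2, 0, 1):
# conf == {
#     'config_name': 'config5',
#     'dims': 3,
#     'N': 24,                     # total number of elements
#     'from_shape': '2, 3, 4',
#     'to_shape': '4, 2, 3',
#     'perm': '2, 0, 1',
#     'perm_strides': '1, 12, 4',  # permuted row-major strides of the input
# }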


class TransposeFunctionTemplate(FunctionCallTemplate):
109 changes: 109 additions & 0 deletions hls4ml/backends/vivado/passes/einsum.py
@@ -0,0 +1,109 @@
from math import ceil

from hls4ml.backends.backend import get_backend
from hls4ml.backends.template import FunctionCallTemplate, LayerConfigTemplate
from hls4ml.model.layers import Einsum
from hls4ml.utils.transpose_utils import transpose_config_gen

from .reshaping_templates import transpose_config_template

# Shared Dense template
# Einsum template

einsum_config_template = '''
struct config{index} {{
typedef config{index}_tpose_inp0 tpose_inp0_config;
typedef config{index}_tpose_inp1 tpose_inp1_config;
typedef config{index}_tpose_out tpose_out_conf;

typedef {accum_t.name} accum_t;

// Layer Sizes
static const unsigned n_free0 = {n_free0};
static const unsigned n_free1 = {n_free1};
static const unsigned n_contract = {n_contract};
static const unsigned n_inplace = {n_inplace};

// Resource reuse info
static const unsigned io_type = nnet::{iotype};
static const unsigned strategy = nnet::{strategy};
static const unsigned reuse_factor = {reuse_factor};
static const unsigned multiplier_limit = {multiplier_limit};
static const bool store_weights_in_bram = false; // NOT USED

template <class x_T, class y_T>
using product = nnet::product::{product_type}<x_T, y_T>;
}};
'''

einsum_function_template = 'nnet::einsum<{input0_t}, {input1_t}, {output_t}, {config}>({input0}, {input1}, {output});'

einsum_include_list = ['nnet_utils/nnet_einsum.h']


class EinsumConfigTemplate(LayerConfigTemplate):
def __init__(self):
super().__init__(Einsum)
self.template = einsum_config_template

def format(self, node: Einsum):
default_params = self._default_config_params(node)

strategy = node.attributes['strategy']
io_type = node.model.config.get_config_value('IOType')

assert io_type == 'io_parallel', 'Einsum layer only supports io_parallel for now'
assert strategy.lower() == 'latency', 'Einsum layer only supports Latency strategy for now'

# EinsumDense config
params = default_params.copy()
params['strategy'] = strategy
params['n_free0'] = node.attributes['n_free0']
params['n_free1'] = node.attributes['n_free1']
params['n_contract'] = node.attributes['n_contract']
params['n_inplace'] = node.attributes['n_inplace']
inp0_t = node.get_input_variable(node.inputs[0]).type.precision
inp1_t = node.get_input_variable(node.inputs[1]).type.precision
params['product_type'] = get_backend('vivado').product_type(inp0_t, inp1_t)

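# Total number of scalar multiplications in the einsum; spreading them over
# reuse_factor cycles gives the multiplier (DSP) budget below.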
total_mults = params['n_free0'] * params['n_free1'] * params['n_contract'] * params['n_inplace']
params['multiplier_limit'] = ceil(total_mults / params['reuse_factor'])

einsum_conf = self.template.format(**params)

# inp/out transpose config
inp0_shape = node.attributes['inp0_shape']
inp1_shape = node.attributes['inp1_shape']
out_interpert_shape = node.attributes['out_interpert_shape']
inp0_tpose_idxs = node.attributes['inp0_tpose_idxs']
inp1_tpose_idxs = node.attributes['inp1_tpose_idxs']
out_tpose_idxs = node.attributes['out_tpose_idxs']
tpose_inp0_config_name = f'config{node.index}_tpose_inp0'
tpose_inp1_config_name = f'config{node.index}_tpose_inp1'
tpose_out_conf_name = f'config{node.index}_tpose_out'

conf = transpose_config_gen(tpose_inp0_config_name, inp0_shape, inp0_tpose_idxs)
inp0_tpose_conf = transpose_config_template.format(**conf)
conf = transpose_config_gen(tpose_inp1_config_name, inp1_shape, inp1_tpose_idxs)
inp1_tpose_conf = transpose_config_template.format(**conf)
conf = transpose_config_gen(tpose_out_conf_name, out_interpert_shape, out_tpose_idxs)
out_tpose_conf = transpose_config_template.format(**conf)

return '\n\n'.join((inp0_tpose_conf, inp1_tpose_conf, out_tpose_conf, einsum_conf))


class EinsumFunctionTemplate(FunctionCallTemplate):
def __init__(self):
super().__init__(Einsum, include_header=einsum_include_list)
self.template = einsum_function_template

def format(self, node: Einsum):
params = {}
params['config'] = f'config{node.index}'
params['input0_t'] = node.get_input_variable(node.inputs[0]).type.name
params['input1_t'] = node.get_input_variable(node.inputs[1]).type.name
params['output_t'] = node.get_output_variable().type.name
params['input0'] = node.get_input_variable(node.inputs[0]).name
params['input1'] = node.get_input_variable(node.inputs[1]).name
params['output'] = node.get_output_variable().name
return self.template.format(**params)
147 changes: 147 additions & 0 deletions hls4ml/backends/vivado/passes/einsum_dense.py
@@ -0,0 +1,147 @@
from hls4ml.backends.backend import get_backend
from hls4ml.backends.template import FunctionCallTemplate, LayerConfigTemplate
from hls4ml.model.layers import EinsumDense
from hls4ml.utils.transpose_utils import transpose_config_gen

from .reshaping_templates import transpose_config_template

# Shared Dense template

dense_config_template = '''struct config{index}_dense : nnet::dense_config {{
static const unsigned n_in = {n_in};
static const unsigned n_out = {n_out};
static const unsigned reuse_factor = {reuse};
static const unsigned strategy = nnet::{strategy};
static const unsigned n_zeros = {nzeros};
static const unsigned multiplier_limit = DIV_ROUNDUP(n_in * n_out, reuse_factor) - n_zeros / reuse_factor;
typedef {accum_t.name} accum_t;
typedef {bias_t.name} bias_t;
typedef {weight_t.name} weight_t;
template<class data_T, class res_T, class CONFIG_T>
using kernel = nnet::{dense_function}<data_T, res_T, CONFIG_T>;
template<class x_T, class y_T>
using product = nnet::product::{product_type}<x_T, y_T>;
}};\n'''

# EinsumDense template

einsum_dense_config_template = '''
struct config{index} {{
typedef config{index}_tpose_inp tpose_inp_conf;
typedef config{index}_tpose_out tpose_out_conf;
{kernel_config};

typedef {accum_t.name} accum_t;
typedef {bias_t.name} bias_t;

// Layer Sizes
static const unsigned n_free_data = {n_free_data};
static const unsigned n_free_kernel = {n_free_kernel};
static const unsigned n_contract = {n_contract};
static const unsigned n_inplace = {n_inplace};

// Resource reuse info
static const unsigned io_type = nnet::{iotype};
static const unsigned strategy = nnet::{strategy};
static const unsigned reuse_factor = {reuse_factor};
static const unsigned parallelization_factor = {parallelization_factor}; // Only useful when n_inplace > 1
}};
'''

einsum_dense_function_template = 'nnet::einsum_dense<{input_t}, {output_t}, {config}>({input}, {output}, {w}, {b});'
einsum_dense_da_function_template = 'nnet::einsum_dense<{input_t}, {output_t}, {config}>({input}, {output}, {b});'

einsum_dense_include_list = ['nnet_utils/nnet_einsum_dense.h', 'nnet_utils/nnet_dense.h']


class EinsumDenseConfigTemplate(LayerConfigTemplate):
def __init__(self):
super().__init__(EinsumDense)
self.template = einsum_dense_config_template
self.dense_template = dense_config_template

def dense_config(self, node: EinsumDense):
dense_params = self._default_config_params(node)
strategy = node.attributes['strategy']
dense_params['strategy'] = strategy
dense_params['n_in'] = node.attributes['n_contract']
dense_params['n_out'] = node.attributes['n_free_kernel']
if node.attributes['n_inplace'] == 1:
dense_params['nzeros'] = node.get_weights('weight').nzeros # type: ignore
else:
dense_params['nzeros'] = '-1; // Not making sense when kernels are switching'
dense_params['product_type'] = get_backend('vivado').product_type(
node.get_input_variable().type.precision, node.get_weights('weight').type.precision # type: ignore
)

dense_params['dense_function'] = 'DenseLatency' # Latency only for now

dense_config = self.dense_template.format(**dense_params)
return dense_config

def format(self, node: EinsumDense):
default_params = self._default_config_params(node)

strategy = node.attributes['strategy']
io_type = node.model.config.get_config_value('IOType')

assert io_type == 'io_parallel', 'EinsumDense layer only supports io_parallel and distributed_arithmetic'

# EinsumDense config
params = default_params.copy()
params['strategy'] = strategy
params['n_free_data'] = node.attributes['n_free_data']
params['n_free_kernel'] = node.attributes['n_free_kernel']
params['n_contract'] = node.attributes['n_contract']
params['n_inplace'] = node.attributes['n_inplace']
if strategy.lower() == 'latency':
params['kernel_config'] = f'typedef config{node.index}_dense dense_conf'
else:
assert strategy.lower() == 'distributed_arithmetic', 'EinsumDense layer only supports Latency strategy for now'
Review comment (Contributor): leftover comment

inp_t = node.get_input_variable().type.name
result_t = node.get_output_variable().type.name
index = node.index
conf = f'constexpr static auto da_kernel = nnet::einsum_dense{index}_da_kernel<{inp_t}, {result_t}>'
params['kernel_config'] = conf
pf = node.attributes['parallelization_factor']
if pf < 0:
pf = params['n_inplace']
params['parallelization_factor'] = pf

einsum_conf = self.template.format(**params)

# inp/out transpose config
inp_shape = node.attributes['inp_shape']
out_interpert_shape = node.attributes['out_interpert_shape']
inp_tpose_idxs = node.attributes['inp_tpose_idxs']
out_tpose_idxs = node.attributes['out_tpose_idxs']
tpose_inp_conf_name = f'config{node.index}_tpose_inp'
tpose_out_conf_name = f'config{node.index}_tpose_out'

conf = transpose_config_gen(tpose_inp_conf_name, inp_shape, inp_tpose_idxs)
inp_tpose_conf = transpose_config_template.format(**conf)
conf = transpose_config_gen(tpose_out_conf_name, out_interpert_shape, out_tpose_idxs)
out_tpose_conf = transpose_config_template.format(**conf)

if strategy.lower() == 'distributed_arithmetic':
return '\n\n'.join((inp_tpose_conf, out_tpose_conf, einsum_conf))

dense_config = self.dense_config(node)
return '\n\n'.join((inp_tpose_conf, out_tpose_conf, dense_config, einsum_conf))


class EinsumDenseFunctionTemplate(FunctionCallTemplate):
def __init__(self):
super().__init__(EinsumDense, include_header=einsum_dense_include_list)
self.template = einsum_dense_function_template

def format(self, node):
params = self._default_function_params(node)
params['b'] = node.get_weights('bias').name

strategy = node.attributes['strategy']
if strategy == 'distributed_arithmetic':
return einsum_dense_da_function_template.format(**params)

params['w'] = node.get_weights('weight').name
return einsum_dense_function_template.format(**params)