ONNX FHE Runtime is a set of FHE-based ONNX components for building privacy-preserving AI models.
- Install CMake 3.22 (or above) and gcc or clang
- Install OpenMP (optional, but highly recommended)
Using OpenFHE installed on your system.
Clone the repository with submodules, then configure CMake and run the build:
$ mkdir .build
$ cd .build && cmake .. -DCMAKE_BUILD_TYPE=Release -DUSE_SYSTEM_OPENFHE=On -DCMAKE_INSTALL_PREFIX=./install
$ make && make install
The installation folder will contain the ONNX FHE runtime library together with the OpenFHE libraries.
Using OpenFHE submodule.
Clone the repository with submodules, then configure CMake and run the build:
$ mkdir .build
$ cd .build && cmake .. -DCMAKE_BUILD_TYPE=Release -DUSE_SYSTEM_OPENFHE=Off -DCMAKE_INSTALL_PREFIX=./install
$ make && make install
The installation folder will contain the ONNX FHE runtime library only.
An inference example can be found in the example folder. There are two models, the original one and the FHE-based one; inference.py runs both models and prints their results.
To run the script you need the openfhe-python bindings and OpenFHE installed on your system. The onnx and onnxruntime packages should also be installed.
To install OpenFHE and the openfhe-python bindings, please refer to their upstream GitHub repositories.
To install onnx and onnxruntime you may use the following command:
$ pip install onnx onnxruntime
Once everything is installed, you can run the inference script:
$ python inference.py