Replies: 2 comments 4 replies
-
Can you post the full code snippet here? I'll take a closer look.
-
Sure, here you go:

```python
import json
import os

import cv2
import ezkl
import keras
import tensorflow as tf
import tf2onnx

model_keras = keras.saving.load_model('../../models/one-class.keras')
model_keras.summary(expand_nested=True)

# ONNX conversion
model_path = './one-class.onnx'
onnx_model, _ = tf2onnx.convert.from_keras(
    model_keras,
    input_signature=(
        tf.TensorSpec(
            [
                1,
                model_keras.inputs[0].shape[1],
                model_keras.inputs[0].shape[2],
                model_keras.inputs[0].shape[3],
            ],
            dtype=model_keras.inputs[0].dtype,
            name=model_keras.inputs[0].name,
        ),
    ),
    inputs_as_nchw=['input_layer_1'],
    output_path=model_path,
    opset=12,  # the ONNX opset version to export the model to
)

# Create circuit
output_root = './zk'
settings_path = os.path.join(output_root, 'settings.json')
data_path = os.path.join(output_root, 'input.json')
cal_path = os.path.join(output_root, 'cal_data.json')
compiled_model_path = os.path.join(output_root, 'network.compiled')
pk_path = os.path.join(output_root, 'test.pk')
vk_path = os.path.join(output_root, 'test.vk')

py_run_args = ezkl.PyRunArgs()
py_run_args.input_visibility = "public"
py_run_args.output_visibility = "public"
py_run_args.param_visibility = "fixed"  # private by default

res = ezkl.gen_settings(model_path, settings_path, py_run_args=py_run_args)
assert res == True

# Create calibration data: capture a set of data points
num_data_points = 4
dataset_path = '../../models/one_class'
data_points = []
for i, file in enumerate(os.listdir(os.path.join(dataset_path, 'target'))):
    if i >= num_data_points:
        break
    if '.png' in file:
        image = cv2.imread(os.path.join(dataset_path, 'cells', file), cv2.IMREAD_GRAYSCALE)
        image = cv2.resize(image, (128, 128))
        data_points.append(image)
for i, file in enumerate(os.listdir(os.path.join(dataset_path, 'not_target'))):
    if i >= num_data_points:
        break
    if '.png' in file:
        image = cv2.imread(os.path.join(dataset_path, 'not', file), cv2.IMREAD_GRAYSCALE)
        image = cv2.resize(image, (128, 128))
        data_points.append(image)

# Stack the data points to create a batch
train_data_batch = tf.cast(tf.stack(data_points), tf.float32)
x = train_data_batch.numpy().reshape([-1]).tolist()
data = dict(input_data=[x])

# Serialize data into file
json.dump(data, open(cal_path, 'w'))

# Calibrate (CLI equivalent:
# ezkl calibrate-settings --data ./zk/cal_data.json --model ./one-class.onnx --settings-path ./zk/settings.json)
res = await ezkl.calibrate_settings(cal_path, model_path, settings_path, "resources")

# Compile
res = ezkl.compile_circuit(model_path, compiled_model_path, settings_path)
assert res == True

# Setup
res = ezkl.get_srs(settings_path)
res = ezkl.setup(
    compiled_model_path,
    vk_path,
    pk_path,
)
assert res == True
assert os.path.isfile(vk_path)
assert os.path.isfile(pk_path)
assert os.path.isfile(settings_path)
```

The result of this (if I skip the calibration step) is the following error:
(It was executed in a Jupyter notebook.)
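As a side note, the calibration file the script writes is just a flattened batch under an `input_data` key, matching the `dict(input_data=[x])` line in the snippet. A minimal self-contained sketch of that serialization step, with random data standing in for the cv2-loaded images:

```python
import json
import numpy as np

# 8 fake 128x128 grayscale "images" standing in for the resized cv2 batch
batch = np.random.rand(8, 128, 128).astype(np.float32)

# Flatten the whole batch into one list, the shape ezkl's input JSON expects
x = batch.reshape(-1).tolist()
data = {"input_data": [x]}

# Serialize, then round-trip to verify the layout
with open("cal_data.json", "w") as f:
    json.dump(data, f)

loaded = json.load(open("cal_data.json"))
print(len(loaded["input_data"][0]))  # 8 * 128 * 128 = 131072
```

This makes it easy to sanity-check that the element count matches the model's expected input size before calibrating.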
-
Hi, first of all, awesome project! Really easy to use!
Now to my problem: I have a small Keras conv net:
Model Summary
Which I converted to ONNX, and created a settings file following the basic procedure:
Conversion procedure
ONNX file creation
Generated Settings
Calibrated (could not finish, it takes too long)
Compile and get SRS
Now, I have some problems and questions.
Question 1
First, and most important, the setup function is not working; it returns:
```
thread '<unnamed>' panicked at src/graph/vars.rs:444:21: dynamic lookup or shuffle should only have one block
```
I don't know why it happened or how to proceed. The network itself is really simple, no exotic layers.
Question 2
The `ezkl.calibrate_settings` call is very slow, taking ~4 hours per step on an i9 CPU (I never finished the calibration process, but I read it is optional).
Question 3
Can calibration be avoided if the model is created using quantization-aware training?
If so, where can I find more information on the recommended quantization types?
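For context on what quantization-aware training approximates, here is a minimal NumPy sketch of symmetric int8 quantization (illustrative only; ezkl's own quantization maps values to fixed-point field elements via the scale parameters in `settings.json`, which is what calibration tunes):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric int8 quantization: map floats to [-127, 127] with one scale."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.array([-1.0, -0.5, 0.0, 0.25, 1.0], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print(q.tolist())                     # int8 codes
print(float(np.max(np.abs(w - w_hat))))  # worst-case quantization error
```

The analogy: if training already keeps weights and activations on a coarse grid like this, the lossy rounding ezkl applies at a matching scale should cost little accuracy, which is presumably why calibration matters less for QAT models.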