
Export Models to ONNX (Feature Request) #9

@Christopher-Thornton

Description


As mentioned in #5, the repository currently doesn't have a class for running inference on new data, and it requires tweaking the model definitions to make this possible. Out-of-the-box inference support would greatly benefit users.

I suggest adding a function that exports a checkpoint model to ONNX, as is standard in pytorch-lightning for production inference (from the docs), along with any compatibility changes the model definitions require.

torch.onnx.export(model,                 # model restored from a checkpoint
                  args=example_input,    # example batch used to trace the graph
                  f=outname,             # output path for the .onnx file
                  input_names=['input_ids', 'attention_mask', 'seq_len'],
                  output_names=['label'],
                  export_params=True)    # store the trained weights in the file

Ideally, a separate module (e.g. inference.py) could be written with optimized imports and a class that preprocesses (tokenizes) a collection of dialog strings and runs inference on them.
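Such a module could look roughly like the sketch below. All names here are illustrative, not from the repo: the tokenizer and the ONNX session are injected as plain collaborators so the module's imports stay light, and in practice the session would be `onnxruntime.InferenceSession("model.onnx")`.

```python
import numpy as np

class DialogPredictor:
    """Hypothetical sketch of an inference.py class: pads/tokenizes dialog
    strings and feeds them through an ONNX session.

    session   -- object exposing .run(output_names, feed_dict), e.g. an
                 onnxruntime.InferenceSession (assumed, not from the repo)
    tokenizer -- callable mapping a string to a list of token ids
    """

    def __init__(self, session, tokenizer, max_len=128):
        self.session = session
        self.tokenizer = tokenizer
        self.max_len = max_len

    def _preprocess(self, texts):
        # Build fixed-size id/mask arrays, zero-padded to max_len
        ids = np.zeros((len(texts), self.max_len), dtype=np.int64)
        mask = np.zeros((len(texts), self.max_len), dtype=np.int64)
        for i, text in enumerate(texts):
            tokens = self.tokenizer(text)[: self.max_len]
            ids[i, : len(tokens)] = tokens
            mask[i, : len(tokens)] = 1
        return {"input_ids": ids, "attention_mask": mask}

    def predict(self, texts):
        feed = self._preprocess(texts)
        (logits,) = self.session.run(["label"], feed)
        return logits.argmax(axis=-1)
```

Usage would then be a two-liner: construct the predictor once, then call `predict(["some dialog line", ...])` to get label indices for a whole batch.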
