The HF documentation says that you can now export seq2seq models to ONNX with the OnnxSeq2SeqConfigWithPast class.
https://huggingface.co/docs/transformers/v4.23.1/en/main_classes/onnx#onnx-configurations
This was added in March with this PR: huggingface/transformers#14700
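For reference, here's a minimal sketch of what that export looks like with the transformers 4.23-era `transformers.onnx` API (the model id and output path are just placeholders, and the feature name can be swapped for "seq2seq-lm-with-past" to also export the decoder KV cache):

```python
from pathlib import Path

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from transformers.onnx import FeaturesManager, export

# Placeholder checkpoint; any seq2seq model whose ONNX config derives from
# OnnxSeq2SeqConfigWithPast (T5, BART, mBART, ...) should follow the same path.
model_id = "t5-small"
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Resolve the model-specific ONNX config for the seq2seq-lm feature
model_kind, config_ctor = FeaturesManager.check_supported_model_or_raise(
    model, feature="seq2seq-lm"
)
onnx_config = config_ctor(model.config)

# Trace and write the ONNX graph to disk
onnx_inputs, onnx_outputs = export(
    tokenizer, model, onnx_config, onnx_config.default_onnx_opset, Path("model.onnx")
)
```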
Perhaps this is now mature enough to be incorporated into txtai? It would be great to be able to use ONNX versions of the various HF models for their increased performance.
Additionally, it now appears to support ViT models, along with other enhancements made since then. Here's the commit history for that class: https://github.com/huggingface/transformers/commits/main/src/transformers/onnx/config.py