
onnx-runtime

Here are 59 public repositories matching this topic...

A curated list of awesome inference and deployment frameworks for artificial intelligence (AI) models: OpenVINO, TensorRT, MediaPipe, TensorFlow Lite, TensorFlow Serving, ONNX Runtime, LibTorch, NCNN, TNN, MNN, TVM, MACE, Paddle Lite, MegEngine Lite, OpenPPL, Bolt, ExecuTorch.

  • Updated May 3, 2024
  • Python

Babylon.cpp is a C and C++ library for grapheme-to-phoneme conversion and text-to-speech synthesis. For phonemization, an ONNX Runtime port of the DeepPhonemizer model is used; for speech synthesis, VITS models are used. Piper models are compatible after running a conversion script.

  • Updated Aug 28, 2024
  • Python

This project includes implementations of YOLOv8, RT-DETR-V2 (RT-DETR), MobileSAM, and NanoSAM on TensorRT, ONNX Runtime, and RKNN, along with support for asynchronous inference workflows. It provides a user-friendly deep learning deployment tool for seamless algorithm migration across different inference frameworks.

  • Updated Jun 11, 2025
  • C++
