Triton-OpenAI-Speech

An OpenAI-compatible API frontend for NVIDIA Triton Inference Server ASR/TTS services.

Quick Start

Before starting, launch one of the supported ASR/TTS services using Docker Compose.

| Model Repo | Supported |
|------------|-----------|
| Spark-TTS  | Yes       |
| F5-TTS     | Yes       |
| Cosyvoice2 | Yes       |

Then, launch the OpenAI-compatible API bridge server.

docker compose up

Simple Test

bash tests/test.sh
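
Beyond the bundled test script, the bridge is designed to be OpenAI-compatible, so an OpenAI client pointed at it should work. Below is a minimal Python sketch using the official openai package; the base URL, model name, and voice name are assumptions for illustration only and should be replaced with the values that match your tts_server.py settings and the reference audios under --ref_audios_dir:

```python
from pathlib import Path
from openai import OpenAI

# Base URL, model, and voice below are placeholders, not values defined by this repo.
client = OpenAI(
    base_url="http://localhost:10086/v1",  # host/port used when launching tts_server.py
    api_key="not-used",                    # dummy key, assuming the bridge does not validate it
)

with client.audio.speech.with_streaming_response.create(
    model="spark_tts",    # hypothetical model name served by the bridge
    voice="default",      # hypothetical reference-voice name
    input="Hello from the Triton OpenAI-compatible TTS bridge.",
) as response:
    response.stream_to_file(Path("output.wav"))
```

Only the base URL differs from the hosted OpenAI API, so other OpenAI SDKs or a plain curl request against the same endpoint should work as well.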

Usage

tts_server.py [-h] [--host HOST] [--port PORT] [--url URL]
                     [--ref_audios_dir REF_AUDIOS_DIR]
                     [--default_sample_rate DEFAULT_SAMPLE_RATE]

options:
  -h, --help            show this help message and exit
  --host HOST           Host to bind the server to
  --port PORT           Port to bind the server to
  --url URL             Triton server URL
  --ref_audios_dir REF_AUDIOS_DIR
                        Path to reference audio files
  --default_sample_rate DEFAULT_SAMPLE_RATE
                        Default sample rate
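
For example, to run the bridge directly (outside Docker Compose) against a local Triton server, an invocation might look like the following; the host, port, Triton URL, reference-audio directory, and sample rate are placeholders to adapt to your deployment:

python3 tts_server.py --host 0.0.0.0 --port 10086 --url localhost:8001 --ref_audios_dir ./ref_audios --default_sample_rate 16000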
