# OpenAI-Compatible Frontend for Triton Inference ASR/TTS Server
Before starting, launch one of the supported ASR/TTS services using Docker Compose.
| Model Repo | Supported |
|---|---|
| Spark-TTS | Yes |
| F5-TTS | Yes |
| CosyVoice2 | Yes |
Then, launch the OpenAI-compatible API bridge server.
```bash
docker compose up
```

Once the server is up, run the included test script to verify the endpoints:

```bash
bash tests/test.sh
```
The bridge server (`tts_server.py`) accepts the following options:

```
usage: tts_server.py [-h] [--host HOST] [--port PORT] [--url URL]
                     [--ref_audios_dir REF_AUDIOS_DIR]
                     [--default_sample_rate DEFAULT_SAMPLE_RATE]

options:
  -h, --help            show this help message and exit
  --host HOST           Host to bind the server to
  --port PORT           Port to bind the server to
  --url URL             Triton server URL
  --ref_audios_dir REF_AUDIOS_DIR
                        Path to reference audio files
  --default_sample_rate DEFAULT_SAMPLE_RATE
                        Default sample rate