
Commit 15419cf

update docs (#62)
1 parent d9d7f11 commit 15419cf

File tree: 4 files changed (+68, −5 lines)

- README.MD
- README_EN.MD
- docs/en/server/server.md
- docs/zh/server/server.md

README.MD

Lines changed: 2 additions & 1 deletion
@@ -1,7 +1,7 @@
 <div align="center">
 <img src="docs/zh/_img/icon.png" width="450" alt="FlashTTS Logo"/>
 
-[📘 Documentation](docs/zh/README.MD)
+[📘 Documentation](docs/zh/README.MD) | [📚 Deepwiki](https://deepwiki.com/HuiResearch/FlashTTS)
 
 [中文](README.MD) | [English](README_EN.MD)
 

@@ -212,6 +212,7 @@ flashtts infer \
 --host 0.0.0.0 \
 --port 8000
 ```
+For detailed deployment instructions, see: [server.md](docs/zh/server/server.md)
 
 ## ⚡ Inference Speed
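
Once the service from the deployment command above is running on 0.0.0.0:8000, a plain HTTP probe is enough to confirm it is listening. This is only a connectivity check and assumes nothing about FlashTTS's actual API routes:

```bash
# Connectivity check only: confirms something is answering on port 8000.
# No specific FlashTTS endpoint is assumed here.
curl -sS -o /dev/null -w "HTTP %{http_code}\n" http://127.0.0.1:8000/
```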

README_EN.MD

Lines changed: 3 additions & 1 deletion
@@ -1,7 +1,7 @@
 <div align="center">
 <img src="docs/zh/_img/icon.png" width="450" alt="FlashTTS Logo"/>
 
-[📘 Documentation](docs/en/README.MD)
+[📘 Documentation](docs/zh/README.MD) | [📚 Deepwiki](https://deepwiki.com/HuiResearch/FlashTTS)
 
 [中文](README.MD) | [English](README_EN.MD)
 

@@ -164,6 +164,8 @@ Server deployment:
 --port 8000
 ```
 
+For detailed deployment, please refer to: [server.md](docs/en/server/server.md)
+
 ## ⚡ Inference Speed
 
 Test environment: `A800 GPU` · Model: `Spark-TTS-0.5B` · Test script: [speed_test.py](examples/speed_test.py)
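
The inference-speed figures referenced above come from the linked benchmark script. A minimal sketch of rerunning it locally, assuming it can be launched directly from the repository root; its actual arguments, if any, are defined in examples/speed_test.py and are not shown in this diff:

```bash
# Assumption: the benchmark script runs standalone from the repo root;
# check examples/speed_test.py for the options it actually accepts.
python examples/speed_test.py
```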

docs/en/server/server.md

Lines changed: 30 additions & 0 deletions
@@ -5,6 +5,7 @@
 1. Refer to the installation guide: [installation.md](../get_started/installation.md)
 2. Start the server:
 
+- spark tts
 ```bash
 flashtts serve \
 --model_path Spark-TTS-0.5B \ # Change to your model path if needed

@@ -20,6 +21,35 @@
 --host 0.0.0.0 \
 --port 8000
 ```
+- mega tts
+```bash
+flashtts serve \
+--model_path MegaTTS3 \ # Change to your model path if needed
+--backend vllm \ # Choose one of: vllm, sglang, torch, llama-cpp, mlx-lm
+--llm_device cuda \
+--tokenizer_device cuda \
+--llm_attn_implementation sdpa \ # Recommended for torch backend
+--torch_dtype "float16" \
+--max_length 8192 \
+--llm_gpu_memory_utilization 0.6 \
+--host 0.0.0.0 \
+--port 8000
+```
+- orpheus tts
+```bash
+flashtts serve \
+--model_path orpheus-3b-0.1-ft-bf16 \ # Change to your model path if needed
+--lang english \
+--backend vllm \ # Choose one of: vllm, sglang, torch, llama-cpp, mlx-lm
+--llm_device cuda \
+--detokenizer_device cuda \
+--llm_attn_implementation sdpa \ # Recommended for torch backend
+--torch_dtype "float16" \
+--max_length 8192 \
+--llm_gpu_memory_utilization 0.6 \
+--host 0.0.0.0 \
+--port 8000
+```
 
 3. Access the web interface:
 ```
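
With any of the serve commands above running, a client could request synthesis over HTTP. The sketch below is illustrative only: the /v1/audio/speech route and JSON fields follow the OpenAI-style speech API and are assumptions here, not something this diff confirms; docs/en/server/server.md documents the endpoints FlashTTS actually exposes.

```bash
# Hypothetical request: endpoint and payload are assumptions (OpenAI-style speech API),
# not confirmed by this diff; see server.md for the routes the server really provides.
curl -X POST http://127.0.0.1:8000/v1/audio/speech \
  -H "Content-Type: application/json" \
  -d '{"input": "Hello from FlashTTS", "voice": "default"}' \
  --output output.wav
```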

docs/zh/server/server.md

Lines changed: 33 additions & 3 deletions
@@ -4,23 +4,53 @@
 
 1. See the installation guide: [installation.md](../get_started/installation.md)
 2. Start the server:
+
+- spark tts
 ```bash
-
 flashtts serve \
 --model_path Spark-TTS-0.5B \ # Change to your model path if needed
 --backend vllm \ # Choose one of: vllm, sglang, torch, llama-cpp, mlx-lm
 --llm_device cuda \
 --tokenizer_device cuda \
 --detokenizer_device cuda \
 --wav2vec_attn_implementation sdpa \
---llm_attn_implementation sdpa \ # Best enabled for acceleration when using the torch engine
+--llm_attn_implementation sdpa \ # Best enabled for acceleration when the backend is torch
 --torch_dtype "bfloat16" \ # For the spark-tts model, devices without bfloat16 support must use float32.
 --max_length 32768 \
 --llm_gpu_memory_utilization 0.6 \
 --host 0.0.0.0 \
 --port 8000
-
 ```
+- mega tts
+```bash
+flashtts serve \
+--model_path MegaTTS3 \ # Change to your model path if needed
+--backend vllm \ # Choose one of: vllm, sglang, torch, llama-cpp, mlx-lm
+--llm_device cuda \
+--tokenizer_device cuda \
+--llm_attn_implementation sdpa \ # Best enabled for acceleration when the backend is torch
+--torch_dtype "float16" \
+--max_length 8192 \
+--llm_gpu_memory_utilization 0.6 \
+--host 0.0.0.0 \
+--port 8000
+```
+- orpheus tts
+```bash
+flashtts serve \
+--model_path orpheus-3b-0.1-ft-bf16 \ # Change to your model path if needed
+--lang english \
+--backend vllm \ # Choose one of: vllm, sglang, torch, llama-cpp, mlx-lm
+--llm_device cuda \
+--detokenizer_device cuda \
+--llm_attn_implementation sdpa \ # Best enabled for acceleration when the backend is torch
+--torch_dtype "float16" \
+--max_length 8192 \
+--llm_gpu_memory_utilization 0.6 \
+--host 0.0.0.0 \
+--port 8000
+```
+
 3. Open the page in a browser
 
 ```
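
Since --backend accepts vllm, sglang, torch, llama-cpp, or mlx-lm, a quick import check can catch a missing runtime before the server fails to start. A small sketch, assuming the backends are installed under their usual pip import names:

```bash
# Assumption: each backend is installed under its standard pip import name.
# The corresponding import must succeed before that --backend value can work.
python -c "import vllm"      && echo "vllm backend available"
python -c "import sglang"    && echo "sglang backend available"
python -c "import llama_cpp" && echo "llama-cpp backend available"
```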
