
Commit 0395ab3

[Doc] Add graph mode user doc (#1083)
Add graph mode user guide doc. Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
1 parent 9a4eb94 commit 0395ab3

File tree

2 files changed: +83 −0 lines


docs/source/index.md

Lines changed: 1 addition & 0 deletions
@@ -47,6 +47,7 @@ user_guide/suppoted_features
 user_guide/supported_models
 user_guide/env_vars
 user_guide/additional_config
+user_guide/graph_mode.md
 user_guide/release_notes
 :::

docs/source/user_guide/graph_mode.md

Lines changed: 82 additions & 0 deletions
@@ -0,0 +1,82 @@
# Graph Mode Guide

This feature is currently experimental. In future versions, there may be behavioral changes around configuration, coverage, and performance improvements.

This guide provides instructions for using Ascend Graph Mode with vLLM Ascend. Please note that graph mode is only available on the V1 Engine, and only the Qwen and DeepSeek series models are well tested in 0.9.0rc1. We'll make it stable and generalized in the next release.

## Getting Started

From v0.9.0rc1 with the V1 Engine, vLLM Ascend runs models in graph mode by default to keep the same behavior as vLLM. If you hit any issues, please feel free to open an issue on GitHub and fall back to eager mode temporarily by setting `enforce_eager=True` when initializing the model.

There are two kinds of graph mode supported by vLLM Ascend:

- **ACLGraph**: This is the default graph mode supported by vLLM Ascend. In v0.9.0rc1, only the Qwen series models are well tested.
- **TorchAirGraph**: This is the GE graph mode. In v0.9.0rc1, only the DeepSeek series models are supported.

## Using ACLGraph

ACLGraph is enabled by default. Taking the Qwen series models as an example, simply using the V1 Engine is enough.

offline example:

```python
import os

from vllm import LLM

os.environ["VLLM_USE_V1"] = "1"

model = LLM(model="Qwen/Qwen2-7B-Instruct")
outputs = model.generate("Hello, how are you?")
```

online example:

```shell
vllm serve Qwen/Qwen2-7B-Instruct
```
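
Once the server is up, you can query it through the OpenAI-compatible API that `vllm serve` exposes. A minimal sketch, assuming the default listen address (`localhost:8000`); the prompt and `max_tokens` value are illustrative:

```shell
# Send a completion request to the server started by `vllm serve` above.
# Assumes the default host and port (localhost:8000).
curl -s http://localhost:8000/v1/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "Qwen/Qwen2-7B-Instruct", "prompt": "Hello, how are you?", "max_tokens": 32}'
```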

## Using TorchAirGraph

If you want to run DeepSeek series models with graph mode, you should use [TorchAirGraph](https://www.hiascend.com/document/detail/zh/Pytorch/700/modthirdparty/torchairuseguide/torchair_0002.html). In this case, additional config is required.

offline example:

```python
import os

from vllm import LLM

os.environ["VLLM_USE_V1"] = "1"

model = LLM(model="deepseek-ai/DeepSeek-R1-0528", additional_config={"torchair_graph_config": {"enable": True}})
outputs = model.generate("Hello, how are you?")
```

online example:

```shell
vllm serve deepseek-ai/DeepSeek-R1-0528 --additional-config='{"torchair_graph_config": {"enable": true}}'
```
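
Note that the value passed to `--additional-config` must be valid JSON, so the flag uses lowercase `true` rather than Python's `True`. If you build the config in Python, one way to avoid quoting mistakes is to serialize it with the standard library (a small sketch using only `json`):

```python
import json

# Build the TorchAirGraph config as a Python dict, then serialize it
# into the JSON string expected by --additional-config.
cfg = {"torchair_graph_config": {"enable": True}}
flag = json.dumps(cfg)
print(flag)  # {"torchair_graph_config": {"enable": true}}
```

`json.dumps` converts the Python `True` into JSON `true` automatically, so the resulting string can be pasted directly after `--additional-config=`.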

You can find more details about additional config [here](./additional_config.md).

## Fallback to Eager Mode

If both `ACLGraph` and `TorchAirGraph` fail to run, you should fall back to eager mode.

offline example:

```python
import os

from vllm import LLM

os.environ["VLLM_USE_V1"] = "1"

model = LLM(model="someother_model_weight", enforce_eager=True)
outputs = model.generate("Hello, how are you?")
```

online example:

```shell
vllm serve Qwen/Qwen2-7B-Instruct --enforce-eager
```
