
Commit 3d1e6a5

[Doc] Update user doc index (#1581)

Add user doc index to make the user guide clearer.

- vLLM version: v0.9.1
- vLLM main: vllm-project/vllm@49e8c7e

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>

1 parent c744643 · commit 3d1e6a5


16 files changed: +42, -28 lines changed

docs/source/index.md

Lines changed: 3 additions & 9 deletions
@@ -43,16 +43,10 @@ faqs
 :::{toctree}
 :caption: User Guide
 :maxdepth: 1
-user_guide/suppoted_features
-user_guide/supported_models
-user_guide/env_vars
-user_guide/additional_config
-user_guide/sleep_mode
-user_guide/graph_mode.md
-user_guide/lora.md
-user_guide/quantization.md
+user_guide/support_matrix/index
+user_guide/configuration/index
+user_guide/feature_guide/index
 user_guide/release_notes
-user_guide/structured_output
 :::
 
 % How to contribute to the vLLM Ascend project

docs/source/tutorials/multi_node.md

Lines changed: 1 addition & 1 deletion
@@ -54,7 +54,7 @@ hccn_tool -i 0 -ping -g address 10.20.0.20
 ```
 
 ## Run with docker
-Assume you have two Altas 800 A2(64G*8) nodes, and want to deploy the `deepseek-v3-w8a8` quantitative model across multi-node.
+Assume you have two Atlas 800 A2(64G*8) nodes, and want to deploy the `deepseek-v3-w8a8` quantitative model across multi-node.
 
 ```shell
 # Define the image and container name

docs/source/user_guide/env_vars.md renamed to docs/source/user_guide/configuration/env_vars.md

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@
 
 vllm-ascend uses the following environment variables to configure the system:
 
-:::{literalinclude} ../../../vllm_ascend/envs.py
+:::{literalinclude} ../../../../vllm_ascend/envs.py
 :language: python
 :start-after: begin-env-vars-definition
 :end-before: end-env-vars-definition
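
As background for the `:start-after:` / `:end-before:` options in the directive above: Sphinx's `literalinclude` pulls in only the source between those two marker comments in `envs.py`. A minimal sketch of that marker convention, with purely illustrative variable names (not the real definitions in `vllm_ascend/envs.py`):

```python
import os

# begin-env-vars-definition
env_variables = {
    # Values are lazy callables so the environment is read at access time.
    # These names are placeholders, not actual vllm-ascend variables.
    "VLLM_ASCEND_EXAMPLE_FLAG": lambda: bool(int(os.getenv("VLLM_ASCEND_EXAMPLE_FLAG", "0"))),
    "VLLM_ASCEND_EXAMPLE_PATH": lambda: os.getenv("VLLM_ASCEND_EXAMPLE_PATH", "/tmp/example"),
}
# end-env-vars-definition


def __getattr__(name: str):
    """Resolve module attributes from the table above (PEP 562 module __getattr__)."""
    if name in env_variables:
        return env_variables[name]()
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
```

Only the lines between `# begin-env-vars-definition` and `# end-env-vars-definition` end up in the rendered page, which is also why the relative path in the directive gains one more `../` now that the file sits one directory deeper.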

docs/source/user_guide/configuration/index.md (new file)

Lines changed: 10 additions & 0 deletions

@@ -0,0 +1,10 @@
+# Configuration Guide
+
+This section provides a detailed configuration guide of vLLM Ascend.
+
+:::{toctree}
+:caption: Configuration Guide
+:maxdepth: 1
+env_vars
+additional_config
+:::

docs/source/user_guide/graph_mode.md renamed to docs/source/user_guide/feature_guide/graph_mode.md

Lines changed: 1 addition & 1 deletion
@@ -59,7 +59,7 @@ online example:
 vllm serve Qwen/Qwen2-7B-Instruct --additional-config='{"torchair_graph_config": {"enabled": true},"ascend_scheduler_config": {"enabled": true,}}'
 ```
 
-You can find more detail about additional config [here](./additional_config.md)
+You can find more detail about additional config [here](../configuration/additional_config.md).
 
 ## Fallback to Eager Mode
 
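
Not part of this commit, but as an aside on the `--additional-config` flag shown above: its value is a JSON object, so one way to avoid hand-escaping the quoted string is to build it with `json.dumps`. A small sketch (the model name simply mirrors the example command; everything else is scaffolding):

```python
import json

# Config keys taken from the serve command shown in the diff above.
additional_config = {
    "torchair_graph_config": {"enabled": True},
    "ascend_scheduler_config": {"enabled": True},
}

# Single-quote the JSON for the shell, as the example command does.
flag = f"--additional-config='{json.dumps(additional_config)}'"
print(f"vllm serve Qwen/Qwen2-7B-Instruct {flag}")
```

The accepted keys are described on the additional_config page that the corrected link points to.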

docs/source/user_guide/feature_guide/index.md (new file)

Lines changed: 13 additions & 0 deletions

@@ -0,0 +1,13 @@
+# Feature Guide
+
+This section provides a detailed usage guide of vLLM Ascend features.
+
+:::{toctree}
+:caption: Feature Guide
+:maxdepth: 1
+graph_mode
+quantization
+sleep_mode
+structured_output
+lora
+:::

docs/source/user_guide/lora.md renamed to docs/source/user_guide/feature_guide/lora.md

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
-# LoRA Adapters
+# LoRA Adapters Guide
 
 Like vLLM, vllm-ascend supports LoRA as well. The usage and more details can be found in [vLLM official document](https://docs.vllm.ai/en/latest/features/lora.html).
 
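
Not part of the diff, but since the renamed page only links out to the upstream vLLM LoRA document, a minimal offline sketch of that upstream API may help orient readers. The base model, adapter path, and request id below are placeholders, not values taken from this repository:

```python
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

# Enable LoRA support when constructing the engine (upstream vLLM API).
llm = LLM(model="Qwen/Qwen2-7B-Instruct", enable_lora=True)

# Attach a (hypothetical) adapter to a generation request.
outputs = llm.generate(
    ["Write a haiku about inference on Ascend NPUs."],
    SamplingParams(max_tokens=64),
    lora_request=LoRARequest("example_adapter", 1, "/path/to/lora/adapter"),
)
print(outputs[0].outputs[0].text)
```

The serving-side workflow (enabling LoRA and registering adapters with `vllm serve`) is covered in the linked upstream document.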