### What this PR does / why we need it?
- Added instructions for verifying multi-node communication environment.
- Included explanations of Ray-related environment variables for
configuration.
- Provided detailed steps for launching services in a multi-node
environment.
### Does this PR introduce _any_ user-facing change?
no
### How was this patch tested?
manually tested.
Signed-off-by: jinyuxin <jinyuxin2@huawei.com>
Multi-node inference is suitable for scenarios where the model cannot be deployed on a single NPU. In such cases, the model can be distributed using tensor parallelism and pipeline parallelism. The specific parallelism strategies will be covered in the following sections. To successfully deploy multi-node inference, the following three steps need to be completed:
- **Verify Multi-Node Communication Environment**
- **Set Up and Start the Ray Cluster**
- **Start the Online Inference Service on Multiple Nodes**
## Verify Multi-Node Communication Environment
### Physical Layer Requirements:
- The physical machines must be located on the same LAN, with network connectivity between them.
- All NPUs are connected with optical modules, and the connection status must be normal.
### Verification Process:
Execute the following commands on each node in sequence. The results must all be `success` and the status must be `UP`:
```bash
# Execute on the target node (replace with actual IP)
hccn_tool -i 0 -ping -g address 10.20.0.20
```
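The same `hccn_tool` utility can also report per-port link and health status, which should show `UP` and a success result respectively. Below is a sketch assuming 8 NPUs per node; the exact set of checks may differ in your environment:

```bash
# Link status of every NPU port should report "UP"
for i in {0..7}; do hccn_tool -i $i -link -g; done
# Network health status of every NPU port should report success
for i in {0..7}; do hccn_tool -i $i -net_health -g; done
# Query the device IPs used for the cross-node ping test above
for i in {0..7}; do hccn_tool -i $i -ip -g; done
```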
## Set Up and Start the Ray Cluster
### Setting Up the Basic Container
To ensure a consistent execution environment across all nodes, including the model path and Python environment, it is recommended to use Docker images.
For setting up a multi-node inference cluster with Ray, **containerized deployment** is the preferred approach. Containers should be started on both the master and worker nodes, with the `--net=host` option to enable proper network connectivity.
53
+
54
+
Below is an example container setup command, which should be executed on **all nodes**:
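The exact command depends on your image and driver layout. The following is a sketch in which the image name, the `davinci` device list, and the mount paths are assumptions to adjust for your environment:

```bash
# Example only: adjust the image, devices, and mounts to your environment
export IMAGE=quay.io/ascend/vllm-ascend:latest   # assumed image name
docker run --rm \
    --name vllm-ascend \
    --net=host \
    --device /dev/davinci0 \
    --device /dev/davinci1 \
    --device /dev/davinci2 \
    --device /dev/davinci3 \
    --device /dev/davinci4 \
    --device /dev/davinci5 \
    --device /dev/davinci6 \
    --device /dev/davinci7 \
    --device /dev/davinci_manager \
    --device /dev/devmm_svm \
    --device /dev/hisi_hdc \
    -v /usr/local/dcmi:/usr/local/dcmi \
    -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
    -v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
    -v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
    -v /etc/ascend_install.info:/etc/ascend_install.info \
    -v /root/.cache:/root/.cache \
    -it $IMAGE bash
```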
### Start Ray Cluster
After setting up the containers and installing vllm-ascend on each node, follow the steps below to start the Ray cluster and execute inference tasks.
Choose one machine as the head node and the others as worker nodes. Before proceeding, use `ip addr` to check your `nic_name` (network interface name).
Set the `ASCEND_RT_VISIBLE_DEVICES` environment variable to specify the NPU devices to use. For Ray versions above 2.1, also set the `RAY_EXPERIMENTAL_NOSET_ASCEND_RT_VISIBLE_DEVICES` variable to avoid device recognition issues. The `--num-gpus` parameter defines the number of NPUs to be used on each node.
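For example, to expose all 8 NPUs of a node to Ray, the variables could be set as in the sketch below (adjust the device list to your hardware):

```shell
# Export on every node before running `ray start`, otherwise the settings are not picked up
export ASCEND_RT_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
export RAY_EXPERIMENTAL_NOSET_ASCEND_RT_VISIBLE_DEVICES=1
```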
Below are the commands for the head and worker nodes:
**Head node**:
:::{note}
When starting a Ray cluster for multi-node inference, the environment variables on each node must be set **before** starting the Ray cluster for them to take effect.
Updating the environment variables requires restarting the Ray cluster.
:::
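Below is a minimal sketch of the head-node command, assuming 8 NPUs per node and that the environment variables above have already been exported:

```shell
# Start the Ray head node; worker nodes connect to this node's IP and port
ray start --head --num-gpus=8
```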
**Worker node**:

```shell
# Export the same environment variables as on the head node before running this
ray start --address='{head_node_ip}:{port_num}' --num-gpus=8 --node-ip-address={local_ip}
```
:::{tip}
Before starting the Ray cluster, export `ASCEND_PROCESS_LOG_PATH={plog_save_path}` on each node to redirect the Ascend plog (process log), which helps in debugging issues during multi-node execution.
:::
Once the cluster is started on multiple nodes, execute `ray status` and `ray list nodes` to verify the Ray cluster's status. You should see the correct number of nodes and NPUs listed.
## Start the Online Inference Service on Multiple Nodes
In the container, you can use vLLM as if all NPUs were on a single node. vLLM will utilize NPU resources across all nodes in the Ray cluster. You only need to run the vllm command on one node.
To set up parallelism, the common practice is to set the `tensor-parallel-size` to the number of NPUs per node, and the `pipeline-parallel-size` to the number of nodes.
For example, with 16 NPUs across 2 nodes (8 NPUs per node), set the tensor parallel size to 8 and the pipeline parallel size to 2:
```shell
python -m vllm.entrypoints.openai.api_server \
--model="Deepseek/DeepSeek-V2-Lite-Chat" \
--trust-remote-code \
--enforce-eager \
--distributed-executor-backend "ray" \
--tensor-parallel-size 8 \
--pipeline-parallel-size 2 \
--disable-frontend-multiprocessing \
--port {port_num}
```
:::{note}
Pipeline parallelism currently requires the AsyncLLMEngine, hence `--disable-frontend-multiprocessing` is set.
:::
Alternatively, if you want to use only tensor parallelism, set the tensor parallel size to the total number of NPUs in the cluster. For example, with 16 NPUs across 2 nodes, set the tensor parallel size to 16:
```shell
python -m vllm.entrypoints.openai.api_server \
--model="Deepseek/DeepSeek-V2-Lite-Chat" \
--trust-remote-code \
--distributed-executor-backend "ray" \
--enforce-eager \
--tensor-parallel-size 16 \
--port {port_num}
```
:::{note}
If you're running DeepSeek V3/R1, please remove the `quantization_config` section from the `config.json` file, since it's not supported by vllm-ascend currently.
:::
Once your server is started, you can query the model with input prompts:
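For example, a completion request against the OpenAI-compatible endpoint could look like the sketch below (the prompt and sampling parameters are placeholders):

```shell
curl http://localhost:{port_num}/v1/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "Deepseek/DeepSeek-V2-Lite-Chat",
        "prompt": "The future of AI is",
        "max_tokens": 64,
        "temperature": 0
    }'
```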
Logs of the vllm server:
```
INFO: 127.0.0.1:59384 - "POST /v1/completions HTTP/1.1" 200 OK
```