Commit d912781

[doc] Add more details for Ray-based DP (#20948)
Signed-off-by: Rui Qiao <ruisearch42@gmail.com>
1 parent 20149d8 commit d912781

File tree

1 file changed: +10 -2 lines


docs/serving/data_parallel_deployment.md

```diff
@@ -57,12 +57,20 @@ vllm serve $MODEL --headless --data-parallel-size 4 --data-parallel-size-local 4
     --data-parallel-address 10.99.48.128 --data-parallel-rpc-port 13345
 ```

-This DP mode can also be used with Ray, in which case only a single launch command is needed irrespective of the number of nodes:
+This DP mode can also be used with Ray by specifying `--data-parallel-backend=ray`:

 ```bash
-vllm serve $MODEL --data-parallel-size 16 --tensor-parallel-size 2 --data-parallel-backend=ray
+vllm serve $MODEL --data-parallel-size 4 --data-parallel-size-local 2 \
+    --data-parallel-backend=ray
 ```

+There are several notable differences when using Ray:
+
+- A single launch command (run on any node) starts all local and remote DP ranks, which is more convenient than launching separately on each node
+- There is no need to specify `--data-parallel-address`; the node where the command is run serves as that address
+- There is no need to specify `--data-parallel-rpc-port`
+- Remote DP ranks are allocated based on the node resources of the Ray cluster
+
 Currently, the internal DP load balancing is done within the API server process(es) and is based on the running and waiting queues in each of the engines. This could be made more sophisticated in the future by incorporating KV-cache-aware logic.

 When deploying large DP sizes using this method, the API server process can become a bottleneck. In this case, the orthogonal `--api-server-count` command-line option can be used to scale this out (for example, `--api-server-count=4`). This is transparent to users: a single HTTP endpoint/port is still exposed. Note that this API server scale-out is "internal" and still confined to the "head" node.
```
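The queue-based load balancing described above can be sketched as follows. This is a minimal illustration, not vLLM's actual implementation: the `EngineStats` type and `pick_engine` function are hypothetical names, and the real balancer tracks per-engine state inside the API server process(es).

```python
# Minimal sketch (not vLLM's actual code) of queue-based DP load balancing:
# each incoming request is routed to the engine (DP rank) with the fewest
# in-flight requests, measured as running + waiting queue lengths.
from dataclasses import dataclass


@dataclass
class EngineStats:
    """Hypothetical per-engine queue snapshot."""
    running: int = 0
    waiting: int = 0


def pick_engine(stats: list[EngineStats]) -> int:
    """Return the index of the least-loaded DP rank."""
    return min(range(len(stats)), key=lambda i: stats[i].running + stats[i].waiting)


# Example: rank 2 has the shortest combined queue (1 + 1), so it gets the request.
engines = [EngineStats(3, 2), EngineStats(4, 0), EngineStats(1, 1), EngineStats(2, 3)]
print(pick_engine(engines))  # -> 2
```

A KV-cache-aware variant, as hinted at above, would extend the scoring key beyond raw queue depth, for example by preferring ranks that already hold a matching prefix cache.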
