Commit 9cbce42

[MISC] Remove useless patch (#1366)
### What this PR does / why we need it?

`stateless_init_dp_group` in vLLM now works on non-CUDA platforms, so this patch is no longer needed and is removed. The patch was introduced in vllm-ascend by e74331a (v0.8.4rc2); the upstream fix was merged in vllm-project/vllm@3e472d8 (v0.8.0).

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

CI passed.

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
1 parent 5177bef commit 9cbce42
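For context, vllm-ascend installs its platform patches by reassigning attributes on vLLM classes at import time, and this commit deletes one such override. A minimal sketch of that pattern, assuming vLLM is installed; the replacement body here is an illustrative placeholder, not the actual vllm-ascend code:

```python
# Sketch of the import-time monkey-patch pattern used by vllm-ascend.
# The function body below is a placeholder for illustration only.
from vllm.config import ParallelConfig


def patched_stateless_init_dp_group(self):
    # A platform plugin could build the data-parallel process group
    # with its own torch.distributed backend here.
    raise NotImplementedError("illustrative placeholder")


# Assigning onto the class makes every ParallelConfig instance
# dispatch to the patched function from this point on.
ParallelConfig.stateless_init_dp_group = patched_stateless_init_dp_group
```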

File tree

2 files changed: 0 additions, 30 deletions


vllm_ascend/patch/__init__.py

Lines changed: 0 additions & 10 deletions
```diff
@@ -56,16 +56,6 @@
 #   Need a PR to vllm to support get port from environment.
 #   Future Plan:
 #   Remove those patch when vllm merged them
-# 3. `vllm.config.ParallelConfig.ParallelConfig.stateless_init_dp_group`
-#    Why:
-#    vLLM use gloo backend by default to initialize stateless dp process gourp, but we want to use hccl here to
-#    get better performance
-#    How:
-#    adopt nccl backend to init process group.(Now we still use gloo, it's just a placeholder, we'll use nccl in the future)
-#    Related PR (if no, explain why):
-#    Need a PR to vllm to support more backend.
-#    Future Plan:
-#    Remove those patch when vllm support more backend.
 #
 # * Worker Patch:
 # ===============
```
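The deleted "Why/How" entry describes swapping the default gloo backend for a faster one. Making that concrete, a hedged sketch using the same `stateless_init_torch_distributed_process_group` helper the deleted code called (see the second diff below); only the `backend` value would change under the plan the comment describes:

```python
from vllm.distributed.utils import \
    stateless_init_torch_distributed_process_group


def init_dp_group(self, backend: str = "gloo"):
    # Identical to the deleted patch below, except the backend is a
    # parameter; the patch hard-coded "gloo" as a placeholder for the
    # planned hccl/nccl-style backend mentioned in the comment above.
    return stateless_init_torch_distributed_process_group(
        self.data_parallel_master_ip,
        self.get_next_dp_init_port(),
        self.data_parallel_rank,
        self.data_parallel_size,
        backend=backend)
```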

vllm_ascend/patch/platform/patch_common/patch_distributed.py

Lines changed: 0 additions & 20 deletions
```diff
@@ -21,10 +21,7 @@
 import vllm
 import vllm.distributed
 import vllm.envs as envs
-from torch.distributed import ProcessGroup
 from vllm.config import ParallelConfig
-from vllm.distributed.utils import \
-    stateless_init_torch_distributed_process_group
 
 from vllm_ascend.utils import NullHandle, is_310p
 
@@ -65,25 +62,8 @@ def parallel_config_get_dp_port(self) -> int:
     return port
 
 
-def stateless_init_dp_group(self) -> "ProcessGroup":
-    # TODO(Yizhou): Currently we have to set the backend to gloo
-    # because in vllm.config.ParallelConfig.has_unfinished_dp the
-    # device is set to cpu. We need to fix this in the future.
-    # We need to compare the performance of gloo and hccl and then
-    # decide which one to use.
-    dp_group = stateless_init_torch_distributed_process_group(
-        self.data_parallel_master_ip,
-        self.get_next_dp_init_port(),
-        self.data_parallel_rank,
-        self.data_parallel_size,
-        backend="gloo")
-
-    return dp_group
-
-
 vllm.distributed.parallel_state.destroy_model_parallel = ascend_destroy_model_parallel
 ParallelConfig.get_next_dp_init_port = parallel_config_get_dp_port
-ParallelConfig.stateless_init_dp_group = stateless_init_dp_group
 
 
 def communication_adaptation_310p():
```
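With the override removed, the attribute should resolve to the upstream vLLM implementation. A quick sanity-check sketch, assuming a vLLM build that already contains vllm-project/vllm@3e472d8:

```python
from vllm.config import ParallelConfig

# With no vllm-ascend override installed, the method is vLLM's own
# definition, so its module should be a vllm.* module.
print(ParallelConfig.stateless_init_dp_group.__module__)  # e.g. 'vllm.config'
```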
