Commit c40784c

[BugFix][Intel GPU] Use refactored API for dist_backend in V1 worker (#20596)
Signed-off-by: ratnampa <ratnam.parikh@intel.com>
1 parent baed180 commit c40784c

File tree

1 file changed: +2 −2 lines


vllm/v1/worker/xpu_worker.py

Lines changed: 2 additions & 2 deletions
@@ -148,11 +148,11 @@ def init_device(self):
             os.environ["CCL_ATL_TRANSPORT"] = ENV_CCL_ATL_TRANSPORT
             os.environ["LOCAL_WORLD_SIZE"] = ENV_LOCAL_WORLD_SIZE
             os.environ["LOCAL_RANK"] = str(self.local_rank)
-            dist_backend = "ccl"

             init_worker_distributed_environment(self.vllm_config, self.rank,
                                                 self.distributed_init_method,
-                                                self.local_rank, dist_backend)
+                                                self.local_rank,
+                                                current_platform.dist_backend)

             # global all_reduce needed for overall oneccl warm up
             torch.distributed.all_reduce(torch.zeros(1).xpu())
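The change above replaces a backend name hard-coded in the XPU worker with one read from the platform object, so the worker stays correct if the platform's backend choice ever changes. The following is a minimal, self-contained sketch of that pattern; the class and function below are illustrative stand-ins, not vLLM's real API (in vLLM, `current_platform` is the platform singleton and `dist_backend` is its attribute).

```python
# Hypothetical sketch of the pattern this commit adopts: the worker defers
# to the platform for the torch.distributed backend name instead of
# hard-coding dist_backend = "ccl" locally.

class XPUPlatform:
    """Stand-in for an Intel GPU platform class (illustrative only)."""
    dist_backend: str = "ccl"  # oneCCL backend for distributed ops on XPU

current_platform = XPUPlatform()

def init_worker_distributed_environment(rank: int, dist_backend: str) -> str:
    # In vLLM this would end up calling
    # torch.distributed.init_process_group(backend=dist_backend, ...);
    # here we only report which backend would be used.
    return f"rank {rank} initialized with backend '{dist_backend}'"

# After the fix: the backend comes from the platform, not a local constant.
print(init_worker_distributed_environment(0, current_platform.dist_backend))
```

If a future platform subclass sets a different `dist_backend`, the worker code needs no change, which is the point of routing the value through the platform object.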
