[DP][V1] Fix rank set in DP scenario & Bump torch-npu version to 2.5.1.post1.dev20250528 #1235
Merged
Conversation
This should be merged after #884.
I like that this PR removes the patch code. Let's merge this ASAP.
realliujiaxu approved these changes on Jun 16, 2025.
Force-pushed from 6f48563 to a92e5fb.
This pull request has conflicts; please resolve them before we can evaluate the pull request.
Signed-off-by: MengqingCao <cmq0113@163.com>
Signed-off-by: Icey <1790571317@qq.com>
Yikun approved these changes on Jun 16, 2025.
ganyi1996ppo pushed a commit that referenced this pull request on Jun 17, 2025:

[DP][V1] Fix rank set in DP scenario & Bump torch-npu version to 2.5.1.post1.dev20250528 (#1247)

### What this PR does / why we need it?
Cherry-picked from #1235.
1. Fix the rank set in the DP scenario. The new POC version of torch-npu supports setting `ASCEND_RT_VISIBLE_DEVICES` dynamically, so we can use the rank set in `DPEngineCoreProc` directly instead of calculating the local rank across DP by hand in the patched `_init_data_parallel`.
   Closes: #1170
2. Bump the torch-npu version to 2.5.1.post1.dev20250528.
   Closes: #1242
   Closes: #1232

### How was this patch tested?
CI passed with the newly added test.

Signed-off-by: Icey <1790571317@qq.com>
Signed-off-by: MengqingCao <cmq0113@163.com>
Co-authored-by: Icey <1790571317@qq.com>
songshanhu07 added a commit to songshanhu07/vllm-ascend that referenced this pull request on Jun 17, 2025:

…to main

* 'main' of https://github.com/vllm-project/vllm-ascend: (22 commits)
  [Bugfix] Remove cuda related lines and add additional pip mirror (vllm-project#1252)
  [refactor] Refactoring AscendFusedMoE (vllm-project#1229)
  [Doc] Refactor and init user story page (vllm-project#1224)
  [Doctest] add installation doctest (vllm-project#1179)
  [DP][V1] Fix rank set in DP scenario & Bump torch-npu version to 2.5.1.post1.dev20250528 (vllm-project#1235)
  Fix the device error when using ray as vllm-acend backend (vllm-project#884)
  [CI] Add unit test framework (vllm-project#1201)
  [Build] Speedup image build (vllm-project#1216)
  [CI] Make e2e test to be preemptible and simple (vllm-project#1217)
  Waiting for BMM NZ support(Improve TPOP 2ms performance) (vllm-project#1131)
  [Doc] fix VLLM_USE_V1 value in graph mode docs (vllm-project#1226)
  vllm-ascend support chunked prefill (vllm-project#1172)
  [CI/UT][Graph] Add ut for torchair graph mode (vllm-project#1103)
  Add ShouJian Zheng (@jianzs) as vLLM Ascend maintainer (vllm-project#1203)
  [CI] Recover ut for ascend scheduler only in ci of v1. (vllm-project#1180)
  Support multistream of MLA vector operations (vllm-project#1135)
  [Doc] Add Referer header for CANN package download url. (vllm-project#1192)
  [fix] fix bug in 1p1d disaggregated_prefill example (vllm-project#1184)
  [CI][Benchmark] Add qwen2.5-7b test (vllm-project#1104)
  [CI][Benchmark] Add new model and v1 test to perf benchmarks (vllm-project#1099)
  ...

Sync with upstream main branch.
Yikun added a commit to Yikun/vllm-ascend that referenced this pull request on Jun 21, 2025:

Revert "[DP][V1] Fix rank set in DP scenario & Bump torch-npu version to 2.5.1.post1.dev20250528 (vllm-project#1235)"

This reverts commit 96fa7ff.
Labels: accuracy-test (enable all accuracy tests for PR), ci/build, documentation (improvements or additions to documentation), long-term-test (enable long-term test for PR), module:tests, pd-test (enable pd test for PR), ready (ready for review), ready-for-test (start test by label for PR)
What this PR does / why we need it?
1. Fix the rank set in the DP scenario. The new POC version of torch-npu supports setting `ASCEND_RT_VISIBLE_DEVICES` dynamically, so we can use the rank set in `DPEngineCoreProc` directly instead of calculating the local rank across DP by hand in the patched `_init_data_parallel` (a rough sketch of the idea follows below).
   Closes: #1170
2. Bump the torch-npu version to 2.5.1.post1.dev20250528.
   Closes: #1242
   Closes: #1232
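A minimal, illustrative sketch of the idea (not the actual vllm-ascend change): once `ASCEND_RT_VISIBLE_DEVICES` can be set at runtime, each data-parallel engine process can scope its own NPU from the rank it already receives. The helper `bind_device_for_dp_rank` and its `devices_per_node` parameter are hypothetical names used only for illustration.

```python
import os


def bind_device_for_dp_rank(dp_local_rank: int, devices_per_node: int = 8) -> None:
    """Hypothetical helper: expose exactly one NPU to this DP engine process.

    With the newer torch-npu, ASCEND_RT_VISIBLE_DEVICES can be changed at
    runtime, so the rank the DP engine process already knows is enough to
    pick a device; no hand-rolled local-rank arithmetic is needed.
    """
    device_id = dp_local_rank % devices_per_node
    os.environ["ASCEND_RT_VISIBLE_DEVICES"] = str(device_id)


# Example: the engine process owning DP-local rank 3 would see only NPU 3.
bind_device_for_dp_rank(3)
```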
How was this patch tested?
CI passed with the newly added test.
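As a quick local sanity check (illustrative only, not the test added in this PR; it assumes a working torch-npu installation that exposes `__version__`), one could verify the environment is on the bumped build:

```python
# Fail fast if the environment is not running the bumped torch-npu build.
import torch_npu  # assumes torch-npu is installed and importable on this host

expected = "2.5.1.post1.dev20250528"
assert torch_npu.__version__.startswith(expected), (
    f"expected torch-npu {expected}, found {torch_npu.__version__}"
)
```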