
Commit c30ddb8

Authored by Yikun, wangxiyuan, leo-pony, and shen-shanshan
Bump v0.9.1rc1 release (#1349)
### What this PR does / why we need it?
Bump v0.9.1rc1 release

Closes: #1341
Closes: #1334

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
CI passed

---------

Signed-off-by: Shanshan Shen <87969357+shen-shanshan@users.noreply.github.com>
Signed-off-by: Yikun Jiang <yikunkero@gmail.com>
Signed-off-by: leo-pony <nengjunma@outlook.com>
Co-authored-by: wangxiyuan <wangxiyuan1007@gmail.com>
Co-authored-by: leo-pony <nengjunma@outlook.com>
Co-authored-by: shen-shanshan <467638484@qq.com>
1 parent 097e714 commit c30ddb8

File tree

9 files changed: +474 / -13 lines changed

.github/workflows/nightly_benchmarks.yaml

Lines changed: 0 additions & 3 deletions
@@ -51,9 +51,6 @@ jobs:
       matrix:
         include:
           - vllm_branch: v0.9.1
-            vllm_ascend_branch: main
-            vllm_use_v1: 0
-          - vllm_branch: v0.9.0
             vllm_ascend_branch: main
             vllm_use_v1: 1
       max-parallel: 1

docs/source/conf.py

Lines changed: 4 additions & 4 deletions
@@ -65,15 +65,15 @@
     # the branch of vllm, used in vllm clone
     # - main branch: 'main'
     # - vX.Y.Z branch: 'vX.Y.Z'
-    'vllm_version': 'v0.9.0',
+    'vllm_version': 'v0.9.1',
     # the branch of vllm-ascend, used in vllm-ascend clone and image tag
     # - main branch: 'main'
     # - vX.Y.Z branch: latest vllm-ascend release tag
-    'vllm_ascend_version': 'v0.9.0rc2',
+    'vllm_ascend_version': 'v0.9.1rc1',
     # the newest release version of vllm-ascend and matched vLLM, used in pip install.
     # This value should be updated when cut down release.
-    'pip_vllm_ascend_version': "0.9.0rc2",
-    'pip_vllm_version': "0.9.0",
+    'pip_vllm_ascend_version': "0.9.1rc1",
+    'pip_vllm_version': "0.9.1",
     # CANN image tag
     'cann_image_tag': "8.1.rc1-910b-ubuntu22.04-py3.10",
 }
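
The two `pip_*` values above are what the docs substitute into the install instructions. As a rough sketch (assuming release wheels are published under exactly these version strings), the rendered commands would look like:

```bash
# Hypothetical install commands rendered from the bumped pip_* substitutions
pip install vllm==0.9.1
pip install vllm-ascend==0.9.1rc1
```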

docs/source/developer_guide/versioning_policy.md

Lines changed: 2 additions & 0 deletions
@@ -22,6 +22,7 @@ Following is the Release Compatibility Matrix for vLLM Ascend Plugin:
 
 | vLLM Ascend | vLLM         | Python         | Stable CANN | PyTorch/torch_npu               | MindIE Turbo |
 |-------------|--------------|----------------|-------------|---------------------------------|--------------|
+| v0.9.1rc1   | v0.9.1       | >= 3.9, < 3.12 | 8.1.RC1     | 2.5.1 / 2.5.1.post1.dev20250528 |              |
 | v0.9.0rc2   | v0.9.0       | >= 3.9, < 3.12 | 8.1.RC1     | 2.5.1 / 2.5.1                   |              |
 | v0.9.0rc1   | v0.9.0       | >= 3.9, < 3.12 | 8.1.RC1     | 2.5.1 / 2.5.1                   |              |
 | v0.8.5rc1   | v0.8.5.post1 | >= 3.9, < 3.12 | 8.1.RC1     | 2.5.1 / 2.5.1                   |              |

@@ -35,6 +36,7 @@ Following is the Release Compatibility Matrix for vLLM Ascend Plugin:
 
 | Date       | Event                             |
 |------------|-----------------------------------|
+| 2025.06.22 | Release candidates, v0.9.1rc1     |
 | 2025.06.10 | Release candidates, v0.9.0rc2     |
 | 2025.06.09 | Release candidates, v0.9.0rc1     |
 | 2025.05.29 | v0.7.x post release, v0.7.3.post1 |

docs/source/faqs.md

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@
 ## Version Specific FAQs
 
 - [[v0.7.3.post1] FAQ & Feedback](https://github.com/vllm-project/vllm-ascend/issues/1007)
-- [[v0.9.0rc2] FAQ & Feedback](https://github.com/vllm-project/vllm-ascend/issues/1115)
+- [[v0.9.1rc1] FAQ & Feedback](https://github.com/vllm-project/vllm-ascend/issues/1351)
 
 ## General FAQs

docs/source/tutorials/index.md

Lines changed: 2 additions & 0 deletions
@@ -6,6 +6,8 @@
 single_npu
 single_npu_multimodal
 multi_npu
+multi_npu_moge
 multi_npu_quantization
+single_node_300i
 multi_node
 :::
docs/source/tutorials/multi_npu_moge.md

Lines changed: 117 additions & 0 deletions
@@ -0,0 +1,117 @@
# Multi-NPU (Pangu Pro MoE 72B)

## Run vllm-ascend on Multi-NPU

Run docker container:

```{code-block} bash
:substitutions:
# Update the vllm-ascend image
export IMAGE=quay.io/ascend/vllm-ascend:|vllm_ascend_version|
docker run --rm \
  --name vllm-ascend \
  --device /dev/davinci0 \
  --device /dev/davinci1 \
  --device /dev/davinci2 \
  --device /dev/davinci3 \
  --device /dev/davinci_manager \
  --device /dev/devmm_svm \
  --device /dev/hisi_hdc \
  -v /usr/local/dcmi:/usr/local/dcmi \
  -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
  -v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
  -v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
  -v /etc/ascend_install.info:/etc/ascend_install.info \
  -v /root/.cache:/root/.cache \
  -p 8000:8000 \
  -it $IMAGE bash
```

Setup environment variables:

```bash
# Set `max_split_size_mb` to reduce memory fragmentation and avoid out of memory
export PYTORCH_NPU_ALLOC_CONF=max_split_size_mb:256
```

### Online Inference on Multi-NPU

Run the following script to start the vLLM server on Multi-NPU:

```bash
vllm serve /path/to/pangu-pro-moe-model \
  --tensor-parallel-size 4 \
  --trust-remote-code \
  --enforce-eager
```
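
Before sending prompts, you can optionally confirm that the server is reachable. A minimal sketch of such a check against the OpenAI-compatible API (assuming the default port mapping used above):

```bash
# List the models exposed by the running server; a non-empty response means it is ready
curl http://localhost:8000/v1/models
```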
Once your server is started, you can query the model with input prompts:

```bash
curl http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "/path/to/pangu-pro-moe-model",
    "prompt": "The future of AI is",
    "max_tokens": 128,
    "temperature": 0
  }'
```

If you run this successfully, you can see the info shown below:

```json
{"id":"cmpl-013558085d774d66bf30c704decb762a","object":"text_completion","created":1750472788,"model":"/path/to/pangu-pro-moe-model","choices":[{"index":0,"text":" not just about creating smarter machines but about fostering collaboration between humans and AI systems. This partnership can lead to more efficient problem-solving, innovative solutions, and a better quality of life for people around the globe.\n\nHowever, achieving this future requires addressing several challenges. Ethical considerations, such as bias in AI algorithms and privacy concerns, must be prioritized. Additionally, ensuring that AI technologies are accessible to all and do not exacerbate existing inequalities is crucial.\n\nIn conclusion, AI stands at the forefront of technological advancement, with vast potential to transform industries and everyday life. By embracing its opportunities while responsibly managing its risks, we can harn","logprobs":null,"finish_reason":"length","stop_reason":null,"prompt_logprobs":null}],"usage":{"prompt_tokens":6,"total_tokens":134,"completion_tokens":128,"prompt_tokens_details":null},"kv_transfer_params":null}
```
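
The completions endpoint also accepts the standard OpenAI streaming flag; a sketch of the same request with token streaming enabled (only the added `"stream"` field differs from the example above):

```bash
# Same request as above, but streamed token-by-token (assumes the server started in the previous step)
curl http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "/path/to/pangu-pro-moe-model",
    "prompt": "The future of AI is",
    "max_tokens": 128,
    "temperature": 0,
    "stream": true
  }'
```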

### Offline Inference on Multi-NPU

Run the following script to execute offline inference on multi-NPU:

```python
import gc

import torch

from vllm import LLM, SamplingParams
from vllm.distributed.parallel_state import (destroy_distributed_environment,
                                             destroy_model_parallel)


def clean_up():
    destroy_model_parallel()
    destroy_distributed_environment()
    gc.collect()
    torch.npu.empty_cache()


if __name__ == "__main__":

    prompts = [
        "Hello, my name is",
        "The future of AI is",
    ]
    sampling_params = SamplingParams(temperature=0.6, top_p=0.95, top_k=40)

    llm = LLM(model="/path/to/pangu-pro-moe-model",
              tensor_parallel_size=4,
              distributed_executor_backend="mp",
              max_model_len=1024,
              trust_remote_code=True,
              enforce_eager=True)

    outputs = llm.generate(prompts, sampling_params)
    for output in outputs:
        prompt = output.prompt
        generated_text = output.outputs[0].text
        print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")

    del llm
    clean_up()
```

If you run this script successfully, you can see the info shown below:

```bash
Prompt: 'Hello, my name is', Generated text: ' Daniel and I am an 8th grade student at York Middle School. I'
Prompt: 'The future of AI is', Generated text: ' following you. As the technology advances, a new report from the Institute for the'
```
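
If you save the snippet above as a standalone file, it can be launched directly with Python; the file name below is just a placeholder:

```bash
# Placeholder file name for the offline-inference script above
python offline_inference_pangu_moe.py
```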
