# Multi-NPU (Qwen3-30B-A3B)

## Run vllm-ascend on Multi-NPU with Qwen3 MoE

Run the docker container:

```{code-block} bash
   :substitutions:
# Update the vllm-ascend image
export IMAGE=quay.io/ascend/vllm-ascend:|vllm_ascend_version|
docker run --rm \
--name vllm-ascend \
--device /dev/davinci0 \
--device /dev/davinci1 \
--device /dev/davinci2 \
--device /dev/davinci3 \
--device /dev/davinci_manager \
--device /dev/devmm_svm \
--device /dev/hisi_hdc \
-v /usr/local/dcmi:/usr/local/dcmi \
-v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
-v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
-v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
-v /etc/ascend_install.info:/etc/ascend_install.info \
-v /root/.cache:/root/.cache \
-p 8000:8000 \
-it $IMAGE bash
```
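
The command above mounts `npu-smi` from the host, so once you are inside the container you can sanity-check that all four NPUs are visible before going further:

```bash
npu-smi info
```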

Set up environment variables:

```bash
# Load the model from ModelScope to speed up the download
export VLLM_USE_MODELSCOPE=True

# Set `max_split_size_mb` to reduce memory fragmentation and avoid out-of-memory errors
export PYTORCH_NPU_ALLOC_CONF=max_split_size_mb:256

# The V1 engine is enabled by default since vllm-ascend 0.9.2; this export is only needed on older versions
export VLLM_USE_V1=1
```

### Online Inference on Multi-NPU

Run the following command to start the vLLM server on multiple NPUs. For an Atlas A2 with 64 GB of memory per NPU card, `--tensor-parallel-size` should be at least 2; for 32 GB cards, it should be at least 4.

```bash
vllm serve Qwen/Qwen3-30B-A3B --tensor-parallel-size 4 --enable_expert_parallel
```
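
Startup can take a while because the weights are loaded across all four NPUs. One way to confirm the server is ready is to list the served models through the OpenAI-compatible API:

```bash
curl http://localhost:8000/v1/models
```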

Once the server is started, you can query the model with input prompts:

```bash
curl http://localhost:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
    "model": "Qwen/Qwen3-30B-A3B",
    "messages": [
        {"role": "user", "content": "Give me a short introduction to large language models."}
    ],
    "temperature": 0.6,
    "top_p": 0.95,
    "top_k": 20,
    "max_tokens": 4096
}'
```
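
The same endpoint also supports token streaming; as a minimal sketch, add `"stream": true` to the request body to receive the reply as server-sent events instead of a single JSON response:

```bash
curl http://localhost:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
    "model": "Qwen/Qwen3-30B-A3B",
    "messages": [
        {"role": "user", "content": "Give me a short introduction to large language models."}
    ],
    "max_tokens": 4096,
    "stream": true
}'
```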

### Offline Inference on Multi-NPU

Run the following script to perform offline inference on multiple NPUs:

```python
import gc
import torch

from vllm import LLM, SamplingParams
from vllm.distributed.parallel_state import (destroy_distributed_environment,
                                             destroy_model_parallel)

def clean_up():
    # Tear down the distributed process groups and release NPU memory
    destroy_model_parallel()
    destroy_distributed_environment()
    gc.collect()
    torch.npu.empty_cache()

prompts = [
    "Hello, my name is",
    "The future of AI is",
]
sampling_params = SamplingParams(temperature=0.6, top_p=0.95, top_k=40)

# Tensor parallelism across 4 NPUs, with expert parallelism for the MoE layers
llm = LLM(model="Qwen/Qwen3-30B-A3B",
          tensor_parallel_size=4,
          distributed_executor_backend="mp",
          max_model_len=4096,
          enable_expert_parallel=True)

outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")

del llm
clean_up()
```
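
To run the example, save it to a file (the name below is only a placeholder) and launch it with Python inside the container:

```bash
# Hypothetical filename; any script name works
python qwen3_moe_offline.py
```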

If the script runs successfully, you should see output similar to the following:

```bash
Prompt: 'Hello, my name is', Generated text: " Lucy. I'm from the UK and I'm 11 years old."
Prompt: 'The future of AI is', Generated text: ' a topic that has captured the imagination of scientists, philosophers, and the general public'
```