# Multi-NPU (Pangu Pro MoE 72B)

## Run vllm-ascend on Multi-NPU

Run the docker container:

```{code-block} bash
 :substitutions:
# Update the vllm-ascend image
export IMAGE=quay.io/ascend/vllm-ascend:|vllm_ascend_version|
docker run --rm \
--name vllm-ascend \
--device /dev/davinci0 \
--device /dev/davinci1 \
--device /dev/davinci2 \
--device /dev/davinci3 \
--device /dev/davinci_manager \
--device /dev/devmm_svm \
--device /dev/hisi_hdc \
-v /usr/local/dcmi:/usr/local/dcmi \
-v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
-v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
-v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
-v /etc/ascend_install.info:/etc/ascend_install.info \
-v /root/.cache:/root/.cache \
-p 8000:8000 \
-it $IMAGE bash
```
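
Once inside the container, it can be useful to confirm that all four NPUs passed through above are actually visible before going further. A minimal check, assuming the `npu-smi` tool mounted in the `docker run` command is on the `PATH`:

```bash
# List the NPUs visible inside the container; devices 0-3 should appear.
npu-smi info
```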

Set up the environment variables:

```bash
# Set `max_split_size_mb` to reduce memory fragmentation and avoid out-of-memory errors
export PYTORCH_NPU_ALLOC_CONF=max_split_size_mb:256
```

### Online Inference on Multi-NPU

Run the following command to start the vLLM server on multiple NPUs:

```bash
vllm serve /path/to/pangu-pro-moe-model \
--tensor-parallel-size 4 \
--trust-remote-code \
--enforce-eager
```
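
Loading the 72B checkpoint takes a while. One way to check that the server is ready, assuming the default port 8000 exposed by the `docker run` command above, is to poll the models endpoint:

```bash
# Returns the list of served models once the server is ready to accept requests.
curl http://localhost:8000/v1/models
```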

Once your server is started, you can query the model with input prompts:

```bash
curl http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "/path/to/pangu-pro-moe-model",
        "prompt": "The future of AI is",
        "max_tokens": 128,
        "temperature": 0
      }'
```

If the request succeeds, you will see a response similar to the following:

```json
{"id":"cmpl-013558085d774d66bf30c704decb762a","object":"text_completion","created":1750472788,"model":"/path/to/pangu-pro-moe-model","choices":[{"index":0,"text":" not just about creating smarter machines but about fostering collaboration between humans and AI systems. This partnership can lead to more efficient problem-solving, innovative solutions, and a better quality of life for people around the globe.\n\nHowever, achieving this future requires addressing several challenges. Ethical considerations, such as bias in AI algorithms and privacy concerns, must be prioritized. Additionally, ensuring that AI technologies are accessible to all and do not exacerbate existing inequalities is crucial.\n\nIn conclusion, AI stands at the forefront of technological advancement, with vast potential to transform industries and everyday life. By embracing its opportunities while responsibly managing its risks, we can harn","logprobs":null,"finish_reason":"length","stop_reason":null,"prompt_logprobs":null}],"usage":{"prompt_tokens":6,"total_tokens":134,"completion_tokens":128,"prompt_tokens_details":null},"kv_transfer_params":null}
```
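
If you only care about the generated text rather than the full JSON body, you can filter the response on the command line. A small sketch, assuming `jq` is installed on the host:

```bash
# Extract just the generated text from the completion response.
curl -s http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "/path/to/pangu-pro-moe-model",
        "prompt": "The future of AI is",
        "max_tokens": 128,
        "temperature": 0
      }' | jq -r '.choices[0].text'
```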

### Offline Inference on Multi-NPU

Run the following script to execute offline inference on multiple NPUs:
```python
import gc

import torch

from vllm import LLM, SamplingParams
from vllm.distributed.parallel_state import (destroy_distributed_environment,
                                             destroy_model_parallel)

def clean_up():
    # Tear down the model-parallel and distributed state, then release
    # any cached NPU memory before the process exits.
    destroy_model_parallel()
    destroy_distributed_environment()
    gc.collect()
    torch.npu.empty_cache()


if __name__ == "__main__":

    prompts = [
        "Hello, my name is",
        "The future of AI is",
    ]
    sampling_params = SamplingParams(temperature=0.6, top_p=0.95, top_k=40)

    # Tensor-parallel inference across 4 NPUs using the multiprocessing backend.
    llm = LLM(model="/path/to/pangu-pro-moe-model",
              tensor_parallel_size=4,
              distributed_executor_backend="mp",
              max_model_len=1024,
              trust_remote_code=True,
              enforce_eager=True)

    outputs = llm.generate(prompts, sampling_params)
    for output in outputs:
        prompt = output.prompt
        generated_text = output.outputs[0].text
        print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")

    del llm
    clean_up()
```
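
To run it, save the script to a file inside the container (the filename below is just an example) and launch it with Python:

```bash
# Example filename; any name works.
python offline_inference_pangu.py
```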

If the script runs successfully, you will see output similar to the following:

```bash
Prompt: 'Hello, my name is', Generated text: ' Daniel and I am an 8th grade student at York Middle School. I'
Prompt: 'The future of AI is', Generated text: ' following you. As the technology advances, a new report from the Institute for the'
```