Commit 5fa70b6

[Build] Update doc (#509)
1. Install `torch-npu` before `vllm-ascend` so that the custom ops build succeeds. 2. Set `COMPILE_CUSTOM_KERNELS=0` to disable the custom ops build.

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
1 parent 11ecbfd commit 5fa70b6
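The `COMPILE_CUSTOM_KERNELS` variable named in this commit behaves as a default-on opt-out switch. The variable name comes from the commit message; the script below is only an illustrative sketch of how such a flag is typically read, not the project's actual build script:

```shell
#!/bin/sh
# Illustrative sketch only: how a build script typically reads an opt-out flag.
# The name COMPILE_CUSTOM_KERNELS comes from this commit; the logic is assumed.
COMPILE_CUSTOM_KERNELS="${COMPILE_CUSTOM_KERNELS:-1}"  # custom ops build on by default
if [ "$COMPILE_CUSTOM_KERNELS" = "0" ]; then
    echo "custom ops build: disabled"
else
    echo "custom ops build: enabled"
fi
```

Per the commit message, users run `COMPILE_CUSTOM_KERNELS=0 pip install -e .` to skip the custom ops build.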

File tree

2 files changed: +30 additions, −20 deletions

docs/source/developer_guide/contributing.md

Lines changed: 5 additions & 0 deletions
````diff
@@ -28,6 +28,9 @@ cd ..
 # Clone vllm-ascend and install
 git clone https://github.com/vllm-project/vllm-ascend.git
 cd vllm-ascend
+# install system requirement
+apt install -y gcc g++ cmake libnuma-dev
+# install project requirement
 pip install -r requirements-dev.txt
 
 # Then you can run lint and mypy test
@@ -38,6 +41,8 @@ bash format.sh
 # pip install -e .
 # - build without deps for debugging in other OS
 # pip install -e . --no-deps
+# - build without custom ops
+# COMPILE_CUSTOM_KERNELS=0 pip install -e .
 
 # Commit changed files using `-s`
 git commit -sm "your commit info"
````
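Because the custom ops build needs `torch-npu` at build time (the reason this commit moves its installation ahead of `vllm-ascend`), a quick pre-build check can save a failed compile. This guard is a sketch, not part of the repository:

```shell
#!/bin/sh
# Sketch (not part of the repo): check that torch-npu is already installed
# before building vllm-ascend from source, since the custom ops build needs it.
if python3 -m pip show torch-npu >/dev/null 2>&1; then
    status="present"
else
    status="missing"
fi
echo "torch-npu: $status"
```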

docs/source/installation.md

Lines changed: 25 additions & 20 deletions
````diff
@@ -123,10 +123,30 @@ First install system dependencies:
 
 ```bash
 apt update -y
-apt install -y gcc g++ libnuma-dev
+apt install -y gcc g++ cmake libnuma-dev
 ```
 
-You can install `vllm` and `vllm-ascend` from **pre-built wheel**:
+Current version depends on a unreleased `torch-npu`, you need to install manually:
+
+```
+# Once the packages are installed, you need to install `torch-npu` manually,
+# because that vllm-ascend relies on an unreleased version of torch-npu.
+# This step will be removed in the next vllm-ascend release.
+#
+# Here we take python 3.10 on aarch64 as an example. Feel free to install the correct version for your environment. See:
+#
+# https://pytorch-package.obs.cn-north-4.myhuaweicloud.com/pta/Daily/v2.5.1/20250320.3/pytorch_v2.5.1_py39.tar.gz
+# https://pytorch-package.obs.cn-north-4.myhuaweicloud.com/pta/Daily/v2.5.1/20250320.3/pytorch_v2.5.1_py310.tar.gz
+# https://pytorch-package.obs.cn-north-4.myhuaweicloud.com/pta/Daily/v2.5.1/20250320.3/pytorch_v2.5.1_py311.tar.gz
+#
+mkdir pta
+cd pta
+wget https://pytorch-package.obs.cn-north-4.myhuaweicloud.com/pta/Daily/v2.5.1/20250320.3/pytorch_v2.5.1_py310.tar.gz
+tar -xvf pytorch_v2.5.1_py310.tar.gz
+pip install ./torch_npu-2.5.1.dev20250320-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
+```
+
+Then you can install `vllm` and `vllm-ascend` from **pre-built wheel**:
 
 ```{code-block} bash
 :substitutions:
@@ -156,25 +176,10 @@ pip install -e . --extra-index https://download.pytorch.org/whl/cpu/
 ```
 :::
 
-Current version depends on a unreleased `torch-npu`, you need to install manually:
-
-```
-# Once the packages are installed, you need to install `torch-npu` manually,
-# because that vllm-ascend relies on an unreleased version of torch-npu.
-# This step will be removed in the next vllm-ascend release.
-#
-# Here we take python 3.10 on aarch64 as an example. Feel free to install the correct version for your environment. See:
-#
-# https://pytorch-package.obs.cn-north-4.myhuaweicloud.com/pta/Daily/v2.5.1/20250320.3/pytorch_v2.5.1_py39.tar.gz
-# https://pytorch-package.obs.cn-north-4.myhuaweicloud.com/pta/Daily/v2.5.1/20250320.3/pytorch_v2.5.1_py310.tar.gz
-# https://pytorch-package.obs.cn-north-4.myhuaweicloud.com/pta/Daily/v2.5.1/20250320.3/pytorch_v2.5.1_py311.tar.gz
-#
-mkdir pta
-cd pta
-wget https://pytorch-package.obs.cn-north-4.myhuaweicloud.com/pta/Daily/v2.5.1/20250320.3/pytorch_v2.5.1_py310.tar.gz
-tar -xvf pytorch_v2.5.1_py310.tar.gz
-pip install ./torch_npu-2.5.1.dev20250320-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
+```{note}
+vllm-ascend will build custom ops by default. If you don't want to build it, set `COMPILE_CUSTOM_KERNELS=0` environment to disable it.
 ```
+
 ::::
 
 ::::{tab-item} Using docker
````
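The tarball list in the diff above ships one build per interpreter (py39, py310, py311), so the archive must match the running Python. A small sketch (file-name pattern taken from the URLs above; not part of the docs) that derives the matching tag:

```shell
#!/bin/sh
# Sketch: derive the pyXY tag of the current interpreter so the matching
# pytorch_v2.5.1_pyXY.tar.gz archive from the list above can be picked.
PYTAG="py$(python3 -c 'import sys; print("%d%d" % sys.version_info[:2])')"
echo "download: pytorch_v2.5.1_${PYTAG}.tar.gz"
```

For example, on Python 3.10 this selects `pytorch_v2.5.1_py310.tar.gz`, the archive the installation guide uses.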
