@@ -123,10 +123,30 @@ First install system dependencies:
```bash
apt update -y
- apt install -y gcc g++ libnuma-dev
+ apt install -y gcc g++ cmake libnuma-dev
```
- You can install `vllm` and `vllm-ascend` from **pre-built wheel**:
+ The current version depends on an unreleased `torch-npu`, which you need to install manually:
+
+ ```bash
+ # Once the packages are installed, you need to install `torch-npu` manually,
+ # because vllm-ascend relies on an unreleased version of torch-npu.
+ # This step will be removed in the next vllm-ascend release.
+ #
+ # Here we take Python 3.10 on aarch64 as an example. Feel free to install the correct version for your environment. See:
+ #
+ # https://pytorch-package.obs.cn-north-4.myhuaweicloud.com/pta/Daily/v2.5.1/20250320.3/pytorch_v2.5.1_py39.tar.gz
+ # https://pytorch-package.obs.cn-north-4.myhuaweicloud.com/pta/Daily/v2.5.1/20250320.3/pytorch_v2.5.1_py310.tar.gz
+ # https://pytorch-package.obs.cn-north-4.myhuaweicloud.com/pta/Daily/v2.5.1/20250320.3/pytorch_v2.5.1_py311.tar.gz
+ #
+ mkdir pta
+ cd pta
+ wget https://pytorch-package.obs.cn-north-4.myhuaweicloud.com/pta/Daily/v2.5.1/20250320.3/pytorch_v2.5.1_py310.tar.gz
+ tar -xvf pytorch_v2.5.1_py310.tar.gz
+ pip install ./torch_npu-2.5.1.dev20250320-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
+ ```
+
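+ After the wheel install, a quick check that `torch-npu` is actually present can save debugging time later. This is an optional sketch, not part of the upstream instructions; it only queries pip's package metadata:
+
+ ```shell
+ # Optional sanity check: report the installed torch-npu version, if any.
+ # (pip normalizes the torch_npu distribution name to torch-npu.)
+ python3 -m pip show torch-npu 2>/dev/null | grep -i '^Version:' \
+   || echo "torch-npu not installed"
+ ```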
+ Then you can install `vllm` and `vllm-ascend` from the **pre-built wheel**:
```{code-block} bash
:substitutions:
@@ -156,25 +176,10 @@ pip install -e . --extra-index https://download.pytorch.org/whl/cpu/
```
:::
- Current version depends on a unreleased `torch-npu`, you need to install manually:
-
- ```
- # Once the packages are installed, you need to install `torch-npu` manually,
- # because that vllm-ascend relies on an unreleased version of torch-npu.
- # This step will be removed in the next vllm-ascend release.
- #
- # Here we take python 3.10 on aarch64 as an example. Feel free to install the correct version for your environment. See:
- #
- # https://pytorch-package.obs.cn-north-4.myhuaweicloud.com/pta/Daily/v2.5.1/20250320.3/pytorch_v2.5.1_py39.tar.gz
- # https://pytorch-package.obs.cn-north-4.myhuaweicloud.com/pta/Daily/v2.5.1/20250320.3/pytorch_v2.5.1_py310.tar.gz
- # https://pytorch-package.obs.cn-north-4.myhuaweicloud.com/pta/Daily/v2.5.1/20250320.3/pytorch_v2.5.1_py311.tar.gz
- #
- mkdir pta
- cd pta
- wget https://pytorch-package.obs.cn-north-4.myhuaweicloud.com/pta/Daily/v2.5.1/20250320.3/pytorch_v2.5.1_py310.tar.gz
- tar -xvf pytorch_v2.5.1_py310.tar.gz
- pip install ./torch_npu-2.5.1.dev20250320-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
+ ```{note}
+ vllm-ascend builds custom ops by default. If you don't want to build them, set the environment variable `COMPILE_CUSTOM_KERNELS=0` to disable the build.
```
+
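+ To make the toggle concrete, here is a hedged sketch of how the variable would be used; it assumes you are installing `vllm-ascend` from its source tree, as in the source-install tab above:
+
+ ```shell
+ # COMPILE_CUSTOM_KERNELS=0 disables the custom-op build at install time.
+ # In the vllm-ascend source tree you would run (hypothetical invocation):
+ #   COMPILE_CUSTOM_KERNELS=0 pip install -e .
+ # The flag is read from the environment, so exporting it works too:
+ export COMPILE_CUSTOM_KERNELS=0
+ echo "COMPILE_CUSTOM_KERNELS=${COMPILE_CUSTOM_KERNELS}"
+ ```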
::::
::::{tab-item} Using docker