Commit c5850d3

[Doc] Update installation (#596)
Many users face a failed installation when using `pip install -e .`. This is mainly caused by the released `torch-npu` version conflicting with `torch>=2.5.1`; the conflict arises in the temporary environment created for the pyproject build. This PR updates the installation tutorial to use `python setup.py develop` as a quick fix. cc @wangxiyuan --------- Signed-off-by: MengqingCao <cmq0113@163.com>
1 parent a8d633f commit c5850d3
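The workaround this commit documents can be sketched as a short shell sequence. This is a hedged sketch, not the full tutorial: it assumes you are inside a `vllm-ascend` checkout with CANN and a compatible `torch-npu` already installed in the current environment.

```shell
# Workaround sketch for the torch-npu / torch>=2.5.1 conflict that hits
# `pip install -e .` (assumes a vllm-ascend checkout with torch-npu
# already installed in the active environment).

# Option used by this commit: a legacy develop install, which builds
# against the packages already present rather than a fresh isolated env.
python setup.py develop

# Alternative noted in the docs: keep pip, but disable the isolated
# pyproject build env so the installed torch-npu is reused.
pip install --no-build-isolation -e .
```

Both commands sidestep the temporary pyproject build environment in which pip would otherwise resolve `torch>=2.5.1` against an incompatible released `torch-npu`.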

File tree

1 file changed: +14 −4 lines

docs/source/installation.md

Lines changed: 14 additions & 4 deletions
@@ -148,6 +148,12 @@ pip install ./torch_npu-2.5.1.dev20250320-cp310-cp310-manylinux_2_17_aarch64.man
 cd ..
 ```
 
+**[Optional]** Configure the `pip` extra index if you are working on an **x86** machine, so that the CPU build of torch can be found:
+
+```bash
+pip config set global.extra-index-url https://download.pytorch.org/whl/cpu/
+```
+
 Then you can install `vllm` and `vllm-ascend` from **pre-built wheel**:
 
 ```{code-block} bash
@@ -159,7 +165,11 @@ Then you can install `vllm` and `vllm-ascend` from **pre-built wheel**:
 pip install vllm==|pip_vllm_version|
 
 # Install vllm-project/vllm-ascend from pypi.
-pip install vllm-ascend==|pip_vllm_ascend_version| --extra-index https://download.pytorch.org/whl/cpu/
+pip install vllm-ascend==|pip_vllm_ascend_version|
+```
+
+```{note}
+If you fail to install vllm because no triton version can be installed, please build from source code.
 ```
 
 :::{dropdown} Click here to see "Build from source code"
@@ -171,20 +181,20 @@ or build from **source code**:
 # Install vLLM
 git clone --depth 1 --branch |vllm_version| https://github.com/vllm-project/vllm
 cd vllm
-VLLM_TARGET_DEVICE=empty pip install . --extra-index https://download.pytorch.org/whl/cpu/
+VLLM_TARGET_DEVICE=empty pip install .
 cd ..
 
 # Install vLLM Ascend
 git clone --depth 1 --branch |vllm_ascend_version| https://github.com/vllm-project/vllm-ascend.git
 cd vllm-ascend
-pip install -e . --extra-index https://download.pytorch.org/whl/cpu/
+python setup.py develop
 cd ..
 ```
 :::
 
 ```{note}
 vllm-ascend will build custom ops by default. If you don't want to build them, set the `COMPILE_CUSTOM_KERNELS=0` environment variable to disable it.
-To build custom ops, gcc/g++ higher than 8 and C++ 17 or higher is required. If you encounter a torch-npu version conflict, please install with `pip install --no-build-isolation -e .` to build in the system env.
+To build custom ops, gcc/g++ higher than 8 and C++ 17 or higher is required. If you use `pip install -e .` and encounter a torch-npu version conflict, please install with `pip install --no-build-isolation -e .` to build in the system env.
 ```
 
 ::::
