Commit 917c6b7

[TEST][DOC] Fix doctest and add system package installation (#1375)
### What this PR does / why we need it?

- Fix [doctest](https://github.com/vllm-project/vllm-ascend/actions/workflows/vllm_ascend_doctest.yaml?query=event%3Aschedule)
- Add system package installation
- Add doc for running doctests
- Clean up all extra steps in .github/workflows/vllm_ascend_doctest.yaml
- Change the schedule job from every 4 hours to every 12 hours

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

- doctest CI passed
- Local test with `/vllm-workspace/vllm-ascend/tests/e2e/run_doctests.sh`

Signed-off-by: Yikun Jiang <yikunkero@gmail.com>
1 parent 08cfc7c commit 917c6b7

File tree

6 files changed: +82 −30 lines

.github/workflows/vllm_ascend_doctest.yaml

Lines changed: 3 additions & 22 deletions

````diff
@@ -30,8 +30,8 @@ on:
       - 'tests/e2e/common.sh'
       - 'tests/e2e/run_doctests.sh'
   schedule:
-    # Runs every 4 hours
-    - cron: '0 */4 * * *'
+    # Runs every 12 hours
+    - cron: '0 */12 * * *'

 # Bash shells do not use ~/.profile or ~/.bashrc so these shells need to be explicitly
 # declared as "shell: bash -el {0}" on steps that need to be properly activated.
@@ -65,37 +65,18 @@ jobs:
           cd /vllm-workspace/vllm
           git --no-pager log -1 || true

-      - name: Config OS mirrors - Ubuntu
-        if: ${{ !endsWith(matrix.vllm_verison, '-openeuler') }}
-        run: |
-          sed -i 's|ports.ubuntu.com|mirrors.tuna.tsinghua.edu.cn|g' /etc/apt/sources.list
-          apt-get update -y
-          apt install -y gcc g++ libnuma-dev git curl jq
-
-      - name: Config OS mirrors - openEuler
-        if: ${{ endsWith(matrix.vllm_verison, '-openeuler') }}
-        run: |
-          yum update -y
-          yum install -y gcc g++ numactl-devel git curl jq
-
-      - name: Config pip mirrors
-        run: |
-          pip config set global.index-url https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple
-
       - name: Checkout vllm-project/vllm-ascend repo
         uses: actions/checkout@v4

       - name: Run vllm-ascend/tests/e2e/run_doctests.sh
         run: |
           # PWD: /__w/vllm-ascend/vllm-ascend
+          # Make sure e2e tests are latest
           echo "Replacing /vllm-workspace/vllm-ascend/tests/e2e ..."
           rm -rf /vllm-workspace/vllm-ascend/tests/e2e
           mkdir -p /vllm-workspace/vllm-ascend/tests
           cp -r tests/e2e /vllm-workspace/vllm-ascend/tests/

-
-          # TODO(yikun): Remove this after conf.py merged
-          cp docs/source/conf.py /vllm-workspace/vllm-ascend/docs/source/
           # Simulate container to enter directory
           cd /workspace
````

docs/source/developer_guide/contributing.md

Lines changed: 35 additions & 0 deletions

````diff
@@ -80,6 +80,41 @@ pip install -r requirements-dev.txt
 pytest tests/
 ```

+### Run doctest
+
+vllm-ascend provides a `vllm-ascend/tests/e2e/run_doctests.sh` command to run all doctests in the doc files.
+The doctest is a good way to make sure the docs are up to date and the examples are executable, you can run it locally as follows:
+
+```{code-block} bash
+   :substitutions:
+
+# Update DEVICE according to your device (/dev/davinci[0-7])
+export DEVICE=/dev/davinci0
+# Update the vllm-ascend image
+export IMAGE=quay.io/ascend/vllm-ascend:|vllm_ascend_version|
+docker run --rm \
+    --name vllm-ascend \
+    --device $DEVICE \
+    --device /dev/davinci_manager \
+    --device /dev/devmm_svm \
+    --device /dev/hisi_hdc \
+    -v /usr/local/dcmi:/usr/local/dcmi \
+    -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
+    -v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
+    -v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
+    -v /etc/ascend_install.info:/etc/ascend_install.info \
+    -v /root/.cache:/root/.cache \
+    -p 8000:8000 \
+    -it $IMAGE bash
+
+# Run doctest
+/vllm-workspace/vllm-ascend/tests/e2e/run_doctests.sh
+```
+
+This will reproduce the same environment as the CI: [vllm_ascend_doctest.yaml](https://github.com/vllm-project/vllm-ascend/blob/main/.github/workflows/vllm_ascend_doctest.yaml).
+
 ## DCO and Signed-off-by

 When contributing changes to this project, you must agree to the DCO. Commits must include a `Signed-off-by:` header which certifies agreement with the terms of the DCO.
````
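The contributing doc treats the code blocks in the docs as executable tests. As a rough illustration only (not the actual `run_doctests.sh` logic), extracting `bash` fences from a markdown file can be sketched with awk; `extract_bash_blocks` is a hypothetical name:

````shell
# Hypothetical sketch -- the real tests/e2e/run_doctests.sh may extract and
# execute doc snippets differently.
extract_bash_blocks() {
    # Print lines between a ```bash opener and the next ``` closer.
    awk '/^```bash/ {in_block = 1; next} /^```/ {in_block = 0} in_block' "$1"
}

doc=$(mktemp)
cat > "$doc" <<'EOF'
Some prose.
```bash
echo hello from the docs
```
More prose.
EOF
extract_bash_blocks "$doc"   # prints: echo hello from the docs
rm -f "$doc"
````

Each extracted snippet could then be run in a shell whose exit status decides whether the doc is still accurate.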

docs/source/installation.md

Lines changed: 9 additions & 8 deletions

````diff
@@ -116,21 +116,22 @@ Once it's done, you can start to set up `vllm` and `vllm-ascend`.
 :selected:
 :sync: pip

-First install system dependencies:
+First install system dependencies and config pip mirror:

 ```bash
-apt update -y
-apt install -y gcc g++ cmake libnuma-dev wget git
+# Using apt-get with mirror
+sed -i 's|ports.ubuntu.com|mirrors.tuna.tsinghua.edu.cn|g' /etc/apt/sources.list
+apt-get update -y && apt-get install -y gcc g++ cmake libnuma-dev wget git curl jq
+# Or using yum
+# yum update -y && yum install -y gcc g++ cmake numactl-devel wget git curl jq
+# Config pip mirror
+pip config set global.index-url https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple
 ```

 **[Optional]** Then config the extra-index of `pip` if you are working on a x86 machine or using torch-npu dev version:

 ```bash
-# For x86 machine
-pip config set global.extra-index-url https://download.pytorch.org/whl/cpu/
-# For torch-npu dev version
-pip config set global.extra-index-url https://mirrors.huaweicloud.com/ascend/repos/pypi
-# For x86 torch-npu dev version
+# For torch-npu dev version or x86 machine
 pip config set global.extra-index-url "https://download.pytorch.org/whl/cpu/ https://mirrors.huaweicloud.com/ascend/repos/pypi"
 ```
````
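The `pip config set` calls above persist settings to pip's config file. A hedged sketch of the equivalent file contents, written to a scratch file and selected via pip's standard `PIP_CONFIG_FILE` environment variable so no global configuration is touched:

```shell
# Sketch: the mirror settings above land in an INI-style pip config file.
# PIP_CONFIG_FILE is a standard pip environment variable; a scratch file is
# used here so nothing global is modified.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[global]
index-url = https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple
extra-index-url = https://download.pytorch.org/whl/cpu/ https://mirrors.huaweicloud.com/ascend/repos/pypi
EOF
export PIP_CONFIG_FILE="$conf"

# Subsequent `pip install` invocations in this shell would read these mirrors.
grep '^index-url' "$conf"
```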

docs/source/quick_start.md

Lines changed: 4 additions & 0 deletions

````diff
@@ -32,6 +32,8 @@ docker run --rm \
     -v /root/.cache:/root/.cache \
     -p 8000:8000 \
     -it $IMAGE bash
+# Install curl
+apt-get update -y && apt-get install -y curl
 ```
 ::::

@@ -58,6 +60,8 @@ docker run --rm \
     -v /root/.cache:/root/.cache \
     -p 8000:8000 \
     -it $IMAGE bash
+# Install curl
+yum update -y && yum install -y curl
 ```
 ::::
 :::::
````
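curl is installed above so the quickstart can probe the served endpoint. The e2e scripts poll until the server answers (they call a `wait_url_ready` helper whose body is not shown in this diff); that pattern can be sketched generically:

```shell
# Hypothetical generic retry loop in the spirit of the wait_url_ready helper
# referenced by the e2e scripts (whose actual implementation is not shown here):
# run a probe command until it succeeds or attempts run out.
wait_until() {
    attempts=$1
    shift
    i=0
    while [ "$i" -lt "$attempts" ]; do
        if "$@" >/dev/null 2>&1; then
            return 0
        fi
        i=$((i + 1))
        sleep 1
    done
    return 1
}

# Example probe against the endpoint from the quickstart above:
# wait_until 30 curl -sf localhost:8000/v1/models
```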

tests/e2e/doctests/001-quickstart-test.sh

Lines changed: 11 additions & 0 deletions

````diff
@@ -16,6 +16,16 @@
 # limitations under the License.
 # This file is a part of the vllm-ascend project.
 #
+function install_system_packages() {
+    if command -v apt-get >/dev/null; then
+        sed -i 's|ports.ubuntu.com|mirrors.tuna.tsinghua.edu.cn|g' /etc/apt/sources.list
+        apt-get update -y && apt install -y curl
+    elif command -v yum >/dev/null; then
+        yum update -y && yum install -y curl
+    else
+        echo "Unknown package manager. Please install gcc, g++, numactl-devel, git, curl, and jq manually."
+    fi
+}

 function simple_test() {
     # Do real import test
@@ -28,6 +38,7 @@ function quickstart_offline_test() {
 }

 function quickstart_online_test() {
+    install_system_packages
     vllm serve Qwen/Qwen2.5-0.5B-Instruct &
     wait_url_ready "vllm serve" "localhost:8000/v1/models"
     # Do real curl test
````
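The `install_system_packages` helper added above dispatches on whichever package manager is present. The same `command -v` pattern in isolation (the function name `detect_pkg_manager` is hypothetical, not part of the test suite):

```shell
# Standalone illustration of the `command -v` dispatch used above:
# `command -v` succeeds only if the named tool is on PATH.
detect_pkg_manager() {
    if command -v apt-get >/dev/null 2>&1; then
        echo apt-get        # Debian/Ubuntu family
    elif command -v yum >/dev/null 2>&1; then
        echo yum            # RHEL/openEuler family
    else
        echo unknown
    fi
}

detect_pkg_manager
```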

tests/e2e/doctests/002-pip-binary-installation-test.sh

Lines changed: 20 additions & 0 deletions

````diff
@@ -18,14 +18,34 @@
 #
 trap clean_venv EXIT

+function install_system_packages() {
+    if command -v apt-get >/dev/null; then
+        sed -i 's|ports.ubuntu.com|mirrors.tuna.tsinghua.edu.cn|g' /etc/apt/sources.list
+        apt-get update -y && apt-get install -y gcc g++ cmake libnuma-dev wget git curl jq
+    elif command -v yum >/dev/null; then
+        yum update -y && yum install -y gcc g++ cmake numactl-devel wget git curl jq
+    else
+        echo "Unknown package manager. Please install curl manually."
+    fi
+}
+
+function config_pip_mirror() {
+    pip config set global.index-url https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple
+}
+
 function install_binary_test() {

+    install_system_packages
+    config_pip_mirror
     create_vllm_venv

     PIP_VLLM_VERSION=$(get_version pip_vllm_version)
     PIP_VLLM_ASCEND_VERSION=$(get_version pip_vllm_ascend_version)
     _info "====> Install vllm==${PIP_VLLM_VERSION} and vllm-ascend ${PIP_VLLM_ASCEND_VERSION}"

+    # Setup extra-index-url for x86 & torch_npu dev version
+    pip config set global.extra-index-url "https://download.pytorch.org/whl/cpu/ https://mirrors.huaweicloud.com/ascend/repos/pypi"
+
     pip install vllm=="$(get_version pip_vllm_version)"
     pip install vllm-ascend=="$(get_version pip_vllm_ascend_version)"
````
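`get_version` is defined outside this diff (presumably in the shared `tests/e2e` helpers). A hypothetical minimal stand-in that reads `key=value` pairs from a file shows the calling shape; the real helper may resolve versions differently, and the version values below are placeholders:

```shell
# Hypothetical stand-in for get_version; the real helper lives outside this
# diff and may work differently. Looks up a key in a key=value file.
get_version() {
    grep "^$1=" "$VERSION_FILE" | cut -d= -f2
}

VERSION_FILE=$(mktemp)
cat > "$VERSION_FILE" <<'EOF'
pip_vllm_version=1.2.3
pip_vllm_ascend_version=1.2.3rc1
EOF

get_version pip_vllm_version          # prints 1.2.3
rm -f "$VERSION_FILE"
```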
