Commit 0c1d239

Add unit test local cpu guide and enable base testcase (#1566)

### What this PR does / why we need it?
Use Base test and clean up all manual patch code:
- Cleanup EPLB config to avoid tmp test file
- Use BaseTest with global cache
- Add license
- Add a doc to setup unit test in local env

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
CI passed

Signed-off-by: Yikun Jiang <yikunkero@gmail.com>

1 parent eb39054 commit 0c1d239

File tree

13 files changed: +239 −58 lines changed

docs/source/developer_guide/contribution/testing.md

Lines changed: 108 additions & 11 deletions
````diff
@@ -9,8 +9,52 @@ The fastest way to setup test environment is to use the main branch container im
 :::::{tab-set}
 :sync-group: e2e
 
-::::{tab-item} Single card
+::::{tab-item} Local (CPU)
 :selected:
+:sync: cpu
+
+You can run the unit tests on CPU with the following steps:
+
+```{code-block} bash
+:substitutions:
+
+cd ~/vllm-project/
+# ls
+# vllm  vllm-ascend
+
+# Use mirror to speedup download
+# docker pull quay.nju.edu.cn/ascend/cann:|cann_image_tag|
+export IMAGE=quay.io/ascend/cann:|cann_image_tag|
+docker run --rm --name vllm-ascend-ut \
+  -v $(pwd):/vllm-project \
+  -v ~/.cache:/root/.cache \
+  -ti $IMAGE bash
+
+# (Optional) Configure mirror to speedup download
+sed -i 's|ports.ubuntu.com|mirrors.huaweicloud.com|g' /etc/apt/sources.list
+pip config set global.index-url https://mirrors.huaweicloud.com/repository/pypi/simple/
+
+# For torch-npu dev version or x86 machine
+export PIP_EXTRA_INDEX_URL="https://download.pytorch.org/whl/cpu/ https://mirrors.huaweicloud.com/ascend/repos/pypi"
+
+apt-get update -y
+apt-get install -y python3-pip git vim wget net-tools gcc g++ cmake libnuma-dev curl gnupg2
+
+# Install vllm
+cd /vllm-project/vllm
+VLLM_TARGET_DEVICE=empty python3 -m pip -v install .
+
+# Install vllm-ascend
+cd /vllm-project/vllm-ascend
+# [IMPORTANT] Export LD_LIBRARY_PATH to enumerate the CANN environment under CPU
+export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/Ascend/ascend-toolkit/latest/$(uname -m)-linux/devlib
+python3 -m pip install -r requirements-dev.txt
+python3 -m pip install -v .
+```
+
+::::
+
+::::{tab-item} Single card
 :sync: single
 
 ```{code-block} bash
@@ -36,6 +80,16 @@ docker run --rm \
 -it $IMAGE bash
 ```
 
+After starting the container, you should install the required packages:
+
+```bash
+# Prepare
+pip config set global.index-url https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple
+
+# Install required packages
+pip install -r requirements-dev.txt
+```
+
 ::::
 
 ::::{tab-item} Multi cards
@@ -63,20 +117,23 @@ docker run --rm \
 -p 8000:8000 \
 -it $IMAGE bash
 ```
-::::
-
-:::::
 
 After starting the container, you should install the required packages:
 
 ```bash
+cd /vllm-workspace/vllm-ascend/
+
 # Prepare
 pip config set global.index-url https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple
 
 # Install required packages
 pip install -r requirements-dev.txt
 ```
 
+::::
+
+:::::
+
 ## Running tests
 
 ### Unit test
@@ -89,14 +146,48 @@ There are several principles to follow when writing unit tests:
 - Example: [tests/ut/test_ascend_config.py](https://github.com/vllm-project/vllm-ascend/blob/main/tests/ut/test_ascend_config.py).
 - You can run the unit tests using `pytest`:
 
-```bash
-cd /vllm-workspace/vllm-ascend/
-# Run all single card the tests
-pytest -sv tests/ut
+:::::{tab-set}
+:sync-group: e2e
 
-# Run
-pytest -sv tests/ut/test_ascend_config.py
-```
+::::{tab-item} Local (CPU)
+:selected:
+:sync: cpu
+
+```bash
+# Run unit tests
+export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/Ascend/ascend-toolkit/latest/$(uname -m)-linux/devlib
+VLLM_USE_V1=1 TORCH_DEVICE_BACKEND_AUTOLOAD=0 pytest -sv tests/ut
+```
+
+::::
+
+::::{tab-item} Single card
+:sync: single
+
+```bash
+cd /vllm-workspace/vllm-ascend/
+# Run all the single card tests
+pytest -sv tests/ut
+
+# Run a single test
+pytest -sv tests/ut/test_ascend_config.py
+```
+::::
+
+::::{tab-item} Multi cards test
+:sync: multi
+
+```bash
+cd /vllm-workspace/vllm-ascend/
+# Run all the single card tests
+pytest -sv tests/ut
+
+# Run a single test
+pytest -sv tests/ut/test_ascend_config.py
+```
+::::
+
+:::::
 
 ### E2E test
 
@@ -106,6 +197,12 @@ locally.
 :::::{tab-set}
 :sync-group: e2e
 
+::::{tab-item} Local (CPU)
+:sync: cpu
+
+You can't run e2e tests on CPU.
+::::
+
 ::::{tab-item} Single card
 :selected:
 :sync: single
````
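The `LD_LIBRARY_PATH` export in the guide above derives the CANN device-library path from the machine architecture. A minimal sketch of that derivation, runnable anywhere (illustrative only: the directory itself exists only inside the CANN container image):

```shell
# Derive the CANN device-library path for the current machine, the same
# way the guide's export does with $(uname -m).
arch="$(uname -m)"   # e.g. x86_64 or aarch64
devlib="/usr/local/Ascend/ascend-toolkit/latest/${arch}-linux/devlib"
echo "$devlib"
```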

tests/ut/attention/test_attention_v1.py

Lines changed: 0 additions & 2 deletions
```diff
@@ -1,5 +1,3 @@
-import vllm_ascend.patch.worker.patch_common.patch_utils  # type: ignore[import] # isort: skip # noqa
-
 from unittest.mock import MagicMock, patch
 
 import torch
```

tests/ut/base.py

Lines changed: 19 additions & 0 deletions
```diff
@@ -1,7 +1,26 @@
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# This file is a part of the vllm-ascend project.
+#
+
 import unittest
 
 from vllm_ascend.utils import adapt_patch
 
+# fused moe ops test will hit the infer_schema error, we need add the patch
+# here to make the test pass.
+import vllm_ascend.patch.worker.patch_common.patch_utils  # type: ignore[import] # isort: skip # noqa
+
 
 class TestBase(unittest.TestCase):
```

tests/ut/distributed/test_parallel_state.py

Lines changed: 17 additions & 2 deletions
```diff
@@ -1,16 +1,31 @@
-import unittest
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# This file is a part of the vllm-ascend project.
+#
+
 from unittest.mock import MagicMock, patch
 
 import pytest
 from vllm.distributed.parallel_state import GroupCoordinator
 
 import vllm_ascend
+from tests.ut.base import TestBase
 from vllm_ascend.distributed.parallel_state import (
     destory_ascend_model_parallel, get_ep_group, get_etp_group,
     init_ascend_model_parallel, model_parallel_initialized)
 
 
-class TestParallelState(unittest.TestCase):
+class TestParallelState(TestBase):
 
     @patch('vllm_ascend.distributed.parallel_state._EP',
            new_callable=lambda: MagicMock(spec=GroupCoordinator))
```
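The test above replaces `_EP` with `MagicMock(spec=GroupCoordinator)`; the point of `spec` is that the mock only accepts attributes the real class actually has, so typos and API drift fail loudly. A self-contained sketch using a stand-in class (not vllm's actual `GroupCoordinator`):

```python
from unittest.mock import MagicMock


class FakeGroupCoordinator:
    """Stand-in for illustration; mimics having a fixed public API."""

    def all_reduce(self, tensor):
        return tensor


mock_group = MagicMock(spec=FakeGroupCoordinator)

mock_group.all_reduce([1, 2, 3])          # fine: method exists on the spec
mock_group.all_reduce.assert_called_once()

try:
    mock_group.no_such_method()           # not on the spec
    spec_enforced = False
except AttributeError:
    spec_enforced = True                  # spec'd mock rejects unknown names
```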

tests/ut/ops/expert_map.json

Lines changed: 17 additions & 0 deletions
```diff
@@ -0,0 +1,17 @@
+{
+    "moe_layer_count":
+    1,
+    "layer_list": [{
+        "layer_id":
+        0,
+        "device_count":
+        2,
+        "device_list": [{
+            "device_id": 0,
+            "device_expert": [7, 2, 0, 3, 5]
+        }, {
+            "device_id": 1,
+            "device_expert": [6, 1, 4, 7, 2]
+        }]
+    }]
+}
```
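The expert map above makes the load-balancer arithmetic easy to check by hand: 2 devices × 5 expert slots = 10 placements for 8 global experts, i.e. 2 redundant placements. A quick sketch over the same data (inlined here for illustration rather than read from the repo file):

```python
import json

# Same structure as tests/ut/ops/expert_map.json, inlined for the sketch.
expert_map = json.loads("""
{
  "moe_layer_count": 1,
  "layer_list": [{
    "layer_id": 0,
    "device_count": 2,
    "device_list": [
      {"device_id": 0, "device_expert": [7, 2, 0, 3, 5]},
      {"device_id": 1, "device_expert": [6, 1, 4, 7, 2]}
    ]
  }]
}
""")

layer = expert_map["layer_list"][0]
slots_per_device = len(layer["device_list"][0]["device_expert"])
total_slots = slots_per_device * layer["device_count"]
global_expert_num = 8  # matches global_expert_num=8 in the test
redundant = total_slots - global_expert_num  # 10 - 8 = 2
```

This is the same expression `test_get_global_redundant_expert_num` asserts against, just unpacked into named steps.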

tests/ut/ops/test_expert_load_balancer.py

Lines changed: 25 additions & 31 deletions
```diff
@@ -1,14 +1,26 @@
-# fused moe ops test will hit the infer_schema error, we need add the patch
-# here to make the test pass.
-import vllm_ascend.patch.worker.patch_common.patch_utils  # type: ignore[import] # isort: skip # noqa
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# This file is a part of the vllm-ascend project.
+#
 
 import json
-import unittest
+import os
 from typing import List, TypedDict
 from unittest import mock
 
 import torch
 
+from tests.ut.base import TestBase
 from vllm_ascend.ops.expert_load_balancer import ExpertLoadBalancer
 
 
@@ -28,31 +40,13 @@ class MockData(TypedDict):
     layer_list: List[Layer]
 
 
-MOCK_DATA: MockData = {
-    "moe_layer_count":
-    1,
-    "layer_list": [{
-        "layer_id":
-        0,
-        "device_count":
-        2,
-        "device_list": [{
-            "device_id": 0,
-            "device_expert": [7, 2, 0, 3, 5]
-        }, {
-            "device_id": 1,
-            "device_expert": [6, 1, 4, 7, 2]
-        }]
-    }]
-}
-
-
-class TestExpertLoadBalancer(unittest.TestCase):
+class TestExpertLoadBalancer(TestBase):
 
     def setUp(self):
-        json_file = "expert_map.json"
-        with open(json_file, 'w') as f:
-            json.dump(MOCK_DATA, f)
+        _TEST_DIR = os.path.dirname(__file__)
+        json_file = _TEST_DIR + "/expert_map.json"
+        with open(json_file, 'r') as f:
+            self.expert_map: MockData = json.load(f)
 
         self.expert_load_balancer = ExpertLoadBalancer(json_file,
                                                        global_expert_num=8)
@@ -62,9 +56,9 @@ def test_init(self):
         self.assertIsInstance(self.expert_load_balancer.expert_map_tensor,
                               torch.Tensor)
         self.assertEqual(self.expert_load_balancer.layers_num,
-                         MOCK_DATA["moe_layer_count"])
+                         self.expert_map["moe_layer_count"])
         self.assertEqual(self.expert_load_balancer.ranks_num,
-                         MOCK_DATA["layer_list"][0]["device_count"])
+                         self.expert_map["layer_list"][0]["device_count"])
 
     def test_generate_index_dicts(self):
         tensor_2d = torch.tensor([[7, 2, 0, 3, 5], [6, 1, 4, 7, 2]])
@@ -142,6 +136,6 @@ def test_get_rank_log2phy_map(self):
     def test_get_global_redundant_expert_num(self):
         redundant_expert_num = self.expert_load_balancer.get_global_redundant_expert_num(
         )
-        expected_redundant_expert_num = len(MOCK_DATA["layer_list"][0]["device_list"][0]["device_expert"]) * \
-            MOCK_DATA["layer_list"][0]["device_count"] - 8
+        expected_redundant_expert_num = len(self.expert_map["layer_list"][0]["device_list"][0]["device_expert"]) * \
+            self.expert_map["layer_list"][0]["device_count"] - 8
         self.assertEqual(redundant_expert_num, expected_redundant_expert_num)
```

tests/ut/patch/worker/patch_common/test_patch_distributed.py

Lines changed: 15 additions & 0 deletions
```diff
@@ -1,3 +1,18 @@
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# This file is a part of the vllm-ascend project.
+#
+
 from tests.ut.base import TestBase
 
 
```
tests/ut/patch/worker/patch_common/test_patch_sampler.py

Lines changed: 5 additions & 2 deletions
```diff
@@ -1,17 +1,20 @@
 import importlib
 import os
-import unittest
 from unittest import mock
 
 import torch
 from vllm.v1.sample.ops import topk_topp_sampler
 
+from tests.ut.base import TestBase
 
-class TestTopKTopPSamplerOptimize(unittest.TestCase):
+
+class TestTopKTopPSamplerOptimize(TestBase):
 
     @mock.patch.dict(os.environ, {"VLLM_ASCEND_ENABLE_TOPK_OPTIMIZE": "1"})
     @mock.patch("torch_npu.npu_top_k_top_p")
     def test_npu_topk_topp_called_when_optimized(self, mock_npu_op):
+        # We have to patch and reload because the patch will take effect
+        # only after VLLM_ASCEND_ENABLE_TOPK_OPTIMIZE is set.
         import vllm_ascend.patch.worker.patch_0_9_1.patch_sampler
         importlib.reload(vllm_ascend.patch.worker.patch_0_9_1.patch_sampler)
 
```

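The `importlib.reload` in the test above is needed because the patch module fixes its behavior at import time, from the environment. That interaction can be reproduced with a throwaway module; everything here (`demo_patch`, `DEMO_ENABLE_OPT`) is invented for the sketch:

```python
import importlib
import os
import sys
import tempfile
import textwrap

# A throwaway module whose behavior is frozen at import time, like a
# patch module gated on an environment variable.
src = textwrap.dedent("""
    import os
    OPTIMIZED = os.environ.get("DEMO_ENABLE_OPT", "0") == "1"
""")

tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "demo_patch.py"), "w") as f:
    f.write(src)
sys.path.insert(0, tmpdir)

os.environ.pop("DEMO_ENABLE_OPT", None)
import demo_patch
first = demo_patch.OPTIMIZED          # env var unset at first import

os.environ["DEMO_ENABLE_OPT"] = "1"
unchanged = demo_patch.OPTIMIZED      # still stale: module already imported

importlib.reload(demo_patch)          # re-runs module-level code
after_reload = demo_patch.OPTIMIZED   # reload sees the new env value
```

Setting the variable alone changes nothing; only the reload re-executes the module-level check, which is exactly why the test patches the environment first and then reloads.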
tests/ut/quantization/test_quant_config.py

Lines changed: 0 additions & 2 deletions
```diff
@@ -1,5 +1,3 @@
-import vllm_ascend.patch.worker.patch_common.patch_utils  # type: ignore[import] # isort: skip # noqa
-
 from unittest.mock import MagicMock, patch
 
 import torch
```

tests/ut/quantization/test_quantizer.py

Lines changed: 0 additions & 2 deletions
```diff
@@ -1,5 +1,3 @@
-import vllm_ascend.patch.worker.patch_common.patch_utils  # type: ignore[import] # isort: skip # noqa
-
 from unittest.mock import MagicMock, patch
 
 from tests.ut.base import TestBase
```
