
Commit 4e91018

[CI/UT] Unify model usage via ModelScope in CI (#1207)
### What this PR does / why we need it?
Unify model usage in CI via ModelScope: the workflows and tests now pull models and LoRA adapters through ModelScope (`VLLM_USE_MODELSCOPE`) instead of the Hugging Face mirror, so `HF_ENDPOINT` and `HF_TOKEN` are no longer needed.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
CI passed

Signed-off-by: hfadzxy <starmoon_zhang@163.com>
1 parent a5f3359 commit 4e91018
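
For context, the change boils down to two moves visible in the diffs below: setting VLLM_USE_MODELSCOPE so vLLM resolves model names through ModelScope, and downloading test assets with modelscope's snapshot_download instead of huggingface_hub's. A minimal sketch of that pattern, assuming a local modelscope install (the repo IDs are the ones this commit switches to; everything else is illustrative):

    import os

    from modelscope import snapshot_download  # type: ignore[import-untyped]

    # Point vLLM at ModelScope instead of the Hugging Face Hub, mirroring the
    # env blocks added in the workflow files below.
    os.environ["VLLM_USE_MODELSCOPE"] = "True"

    # Test assets are fetched with modelscope's snapshot_download, which
    # downloads (and caches) the repo and returns its local directory.
    model_path = snapshot_download("vllm-ascend/ilama-3.2-1B")
    lora_path = snapshot_download(repo_id="vllm-ascend/ilama-text2sql-spider")
    print(model_path, lora_path)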

9 files changed: +17 −26 lines

.github/workflows/accuracy_test.yaml

Lines changed: 1 addition & 2 deletions
@@ -125,10 +125,9 @@ jobs:
     container:
       image: m.daocloud.io/quay.io/ascend/cann:8.1.rc1-910b-ubuntu22.04-py3.10
       env:
-        HF_ENDPOINT: https://hf-mirror.com
-        HF_TOKEN: ${{ secrets.HF_TOKEN }}
         DATASET_SOURCE: ModelScope
         VLLM_USE_MODELSCOPE: True
+        USE_MODELSCOPE_HUB: 1
     # 1. If version specified (work_dispatch), do specified branch accuracy test
     # 2. If no version (labeled PR), do accuracy test by default ref:
     # The branch, tag or SHA to checkout. When checking out the repository that

.github/workflows/nightly_benchmarks.yaml

Lines changed: 1 addition & 2 deletions
@@ -69,8 +69,7 @@ jobs:
         --device /dev/devmm_svm
         --device /dev/hisi_hdc
       env:
-        HF_ENDPOINT: https://hf-mirror.com
-        HF_TOKEN: ${{ secrets.HF_TOKEN }}
+        VLLM_USE_MODELSCOPE: True
         ES_OM_DOMAIN: ${{ secrets.ES_OM_DOMAIN }}
         ES_OM_AUTHORIZATION: ${{ secrets.ES_OM_AUTHORIZATION }}
         VLLM_USE_V1: ${{ matrix.vllm_use_v1 }}

.github/workflows/vllm_ascend_test.yaml

Lines changed: 5 additions & 12 deletions
@@ -209,6 +209,7 @@ jobs:
       image: m.daocloud.io/quay.io/ascend/cann:8.1.rc1-910b-ubuntu22.04-py3.10
       env:
         VLLM_LOGGING_LEVEL: ERROR
+        VLLM_USE_MODELSCOPE: True
     steps:
       - name: Check npu and CANN info
         run: |
@@ -257,9 +258,7 @@ jobs:
           VLLM_USE_MODELSCOPE: True
         run: |
           pytest -sv tests/e2e/singlecard/test_offline_inference.py
-          # TODO: switch hf to modelscope
-          VLLM_USE_MODELSCOPE=False HF_ENDPOINT=https://hf-mirror.com \
-          pytest -sv tests/e2e/singlecard/test_ilama_lora.py
+          pytest -sv tests/e2e/singlecard/test_ilama_lora.py
           pytest -sv tests/e2e/singlecard/test_guided_decoding.py
           pytest -sv tests/e2e/singlecard/test_camem.py
           pytest -sv tests/e2e/singlecard/test_embedding.py
@@ -277,9 +276,7 @@ jobs:
           VLLM_USE_MODELSCOPE: True
         run: |
           pytest -sv tests/e2e/singlecard/test_offline_inference.py
-          # TODO: switch hf to modelscope
-          VLLM_USE_MODELSCOPE=False HF_ENDPOINT=https://hf-mirror.com \
-          pytest -sv tests/e2e/singlecard/test_ilama_lora.py
+          pytest -sv tests/e2e/singlecard/test_ilama_lora.py
           pytest -sv tests/e2e/singlecard/test_guided_decoding.py
           pytest -sv tests/e2e/singlecard/test_camem.py
           pytest -sv tests/e2e/singlecard/test_prompt_embedding.py
@@ -357,9 +354,7 @@ jobs:
           VLLM_WORKER_MULTIPROC_METHOD: spawn
           VLLM_USE_MODELSCOPE: True
         run: |
-          # TODO: switch hf to modelscope
-          VLLM_USE_MODELSCOPE=False HF_ENDPOINT=https://hf-mirror.com \
-          pytest -sv tests/e2e/multicard/test_ilama_lora_tp2.py
+          pytest -sv tests/e2e/multicard/test_ilama_lora_tp2.py
           # Fixme: run VLLM_USE_MODELSCOPE=True pytest -sv tests/e2e/multicard/test_offline_inference_distributed.py will raise error.
           # To avoid oom, we need to run the test in a single process.
           pytest -sv tests/e2e/multicard/test_offline_inference_distributed.py::test_models_distributed_DeepSeek_multistream_moe
@@ -380,9 +375,7 @@ jobs:
           VLLM_USE_V1: 0
           VLLM_USE_MODELSCOPE: True
         run: |
-          # TODO: switch hf to modelscope
-          VLLM_USE_MODELSCOPE=False HF_ENDPOINT=https://hf-mirror.com \
-          pytest -sv tests/e2e/multicard/test_ilama_lora_tp2.py
+          pytest -sv tests/e2e/multicard/test_ilama_lora_tp2.py
           # Fixme: run VLLM_USE_MODELSCOPE=True pytest -sv tests/e2e/multicard/test_offline_inference_distributed.py will raise error.
           # To avoid oom, we need to run the test in a single process.
           pytest -sv tests/e2e/multicard/test_offline_inference_distributed.py::test_models_distributed_QwQ

.github/workflows/vllm_ascend_test_long_term.yaml

Lines changed: 1 addition & 2 deletions
@@ -50,9 +50,8 @@ jobs:
       # TODO(yikun): Remove m.daocloud.io prefix when infra proxy ready
       image: m.daocloud.io/quay.io/ascend/cann:8.1.rc1-910b-ubuntu22.04-py3.10
       env:
-        HF_ENDPOINT: https://hf-mirror.com
-        HF_TOKEN: ${{ secrets.HF_TOKEN }}
         VLLM_LOGGING_LEVEL: ERROR
+        VLLM_USE_MODELSCOPE: True
     steps:
       - name: Check npu and CANN info
         run: |

.github/workflows/vllm_ascend_test_pd.yaml

Lines changed: 1 addition & 2 deletions
@@ -64,8 +64,7 @@ jobs:
         --device /dev/devmm_svm
         --device /dev/hisi_hdc
       env:
-        HF_ENDPOINT: https://hf-mirror.com
-        HF_TOKEN: ${{ secrets.HF_TOKEN }}
+        VLLM_USE_MODELSCOPE: True
     steps:
       - name: Check npu and CANN info
         run: |

benchmarks/scripts/run-performance-benchmarks.sh

Lines changed: 1 addition & 1 deletion
@@ -295,7 +295,7 @@ main() {
   export VLLM_LOG_LEVEL="WARNING"
 
   # set env
-  export HF_ENDPOINT="https://hf-mirror.com"
+  export VLLM_USE_MODELSCOPE=True
 
   # prepare for benchmarking
   cd benchmarks || exit 1

tests/conftest.py

Lines changed: 2 additions & 2 deletions
@@ -25,7 +25,7 @@
 import numpy as np
 import pytest
 import torch
-from huggingface_hub import snapshot_download
+from modelscope import snapshot_download  # type: ignore[import-untyped]
 from PIL import Image
 from torch import nn
 from transformers import (AutoConfig, AutoModelForCausalLM, AutoTokenizer,
@@ -387,7 +387,7 @@ def example_prompts() -> list[str]:
 
 @pytest.fixture(scope="session")
 def ilama_lora_files():
-    return snapshot_download(repo_id="jeeejeee/ilama-text2sql-spider")
+    return snapshot_download(repo_id="vllm-ascend/ilama-text2sql-spider")
 
 
 class HfRunner:
tests/e2e/multicard/test_ilama_lora_tp2.py

Lines changed: 2 additions & 1 deletion
@@ -1,4 +1,5 @@
 import pytest
+from modelscope import snapshot_download  # type: ignore
 
 from tests.conftest import VllmRunner
 from tests.e2e.singlecard.test_ilama_lora import (EXPECTED_LORA_OUTPUT,
@@ -7,7 +8,7 @@
 
 @pytest.mark.parametrize("distributed_executor_backend", ["mp"])
 def test_ilama_lora_tp2(distributed_executor_backend, ilama_lora_files):
-    with VllmRunner(model_name=MODEL_PATH,
+    with VllmRunner(snapshot_download(MODEL_PATH),
                     enable_lora=True,
                     max_loras=4,
                     max_model_len=1024,

tests/e2e/singlecard/test_ilama_lora.py

Lines changed: 3 additions & 2 deletions
@@ -1,10 +1,11 @@
 # SPDX-License-Identifier: Apache-2.0
 import vllm
+from modelscope import snapshot_download  # type: ignore
 from vllm.lora.request import LoRARequest
 
 from tests.conftest import VllmRunner
 
-MODEL_PATH = "ArthurZ/ilama-3.2-1B"
+MODEL_PATH = "vllm-ascend/ilama-3.2-1B"
 
 PROMPT_TEMPLATE = """I want you to act as a SQL terminal in front of an example database, you need only to return the sql command to me.Below is an instruction that describes a task, Write a response that appropriately completes the request.\n"\n##Instruction:\nconcert_singer contains tables such as stadium, singer, concert, singer_in_concert. Table stadium has columns such as Stadium_ID, Location, Name, Capacity, Highest, Lowest, Average. Stadium_ID is the primary key.\nTable singer has columns such as Singer_ID, Name, Country, Song_Name, Song_release_year, Age, Is_male. Singer_ID is the primary key.\nTable concert has columns such as concert_ID, concert_Name, Theme, Stadium_ID, Year. concert_ID is the primary key.\nTable singer_in_concert has columns such as concert_ID, Singer_ID. concert_ID is the primary key.\nThe Stadium_ID of concert is the foreign key of Stadium_ID of stadium.\nThe Singer_ID of singer_in_concert is the foreign key of Singer_ID of singer.\nThe concert_ID of singer_in_concert is the foreign key of concert_ID of concert.\n\n###Input:\n{query}\n\n###Response:""" # noqa: E501
 
@@ -44,7 +45,7 @@ def do_sample(llm: vllm.LLM, lora_path: str, lora_id: int) -> list[str]:
 
 
 def test_ilama_lora(ilama_lora_files):
-    with VllmRunner(model_name=MODEL_PATH,
+    with VllmRunner(snapshot_download(MODEL_PATH),
                     enable_lora=True,
                     max_loras=4,
                     max_model_len=1024,
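
Taken together, the updated single-card test resolves both the base model and the LoRA adapter from ModelScope. A hedged end-to-end sketch of the same flow outside pytest, reusing the helpers shown in the diffs above (the lora_id value, the .model attribute on VllmRunner, and the final assertion are assumptions for illustration, not taken from this commit):

    from modelscope import snapshot_download  # type: ignore

    from tests.conftest import VllmRunner
    from tests.e2e.singlecard.test_ilama_lora import (EXPECTED_LORA_OUTPUT,
                                                      MODEL_PATH, do_sample)

    # Both the base model and the LoRA adapter now come from ModelScope.
    lora_files = snapshot_download(repo_id="vllm-ascend/ilama-text2sql-spider")

    with VllmRunner(snapshot_download(MODEL_PATH),
                    enable_lora=True,
                    max_loras=4,
                    max_model_len=1024) as vllm_model:
        # Assumption: VllmRunner exposes the underlying vllm.LLM as .model,
        # matching the do_sample(llm: vllm.LLM, ...) signature in the diff above.
        output = do_sample(vllm_model.model, lora_files, lora_id=1)

    assert output == EXPECTED_LORA_OUTPUT  # illustrative check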
