
Commit 946aadb

[CI/Build] Split Entrypoints Test into LLM and API Server (#20945)
Signed-off-by: mgoin <mgoin64@gmail.com>
1 parent bcdfb2a commit 946aadb

File tree: 1 file changed

.buildkite/test-pipeline.yaml

Lines changed: 14 additions & 4 deletions
@@ -117,16 +117,14 @@ steps:
   commands:
   - pytest -v -s core

-- label: Entrypoints Test # 40min
+- label: Entrypoints Test (LLM) # 40min
   mirror_hardwares: [amdexperimental]
   working_dir: "/vllm-workspace/tests"
   fast_check: true
   torch_nightly: true
   source_file_dependencies:
   - vllm/
   - tests/entrypoints/llm
-  - tests/entrypoints/openai
-  - tests/entrypoints/test_chat_utils
   - tests/entrypoints/offline_mode
   commands:
   - export VLLM_WORKER_MULTIPROC_METHOD=spawn
@@ -135,9 +133,21 @@ steps:
   - pytest -v -s entrypoints/llm/test_generate.py # it needs a clean process
   - pytest -v -s entrypoints/llm/test_generate_multiple_loras.py # it needs a clean process
   - VLLM_USE_V1=0 pytest -v -s entrypoints/llm/test_guided_generate.py # it needs a clean process
+  - VLLM_USE_V1=0 pytest -v -s entrypoints/offline_mode # Needs to avoid interference with other tests
+
+- label: Entrypoints Test (API Server) # 40min
+  mirror_hardwares: [amdexperimental]
+  working_dir: "/vllm-workspace/tests"
+  fast_check: true
+  torch_nightly: true
+  source_file_dependencies:
+  - vllm/
+  - tests/entrypoints/openai
+  - tests/entrypoints/test_chat_utils
+  commands:
+  - export VLLM_WORKER_MULTIPROC_METHOD=spawn
   - pytest -v -s entrypoints/openai --ignore=entrypoints/openai/test_chat_with_tool_reasoning.py --ignore=entrypoints/openai/test_oot_registration.py --ignore=entrypoints/openai/test_tensorizer_entrypoint.py --ignore=entrypoints/openai/correctness/
   - pytest -v -s entrypoints/test_chat_utils.py
-  - VLLM_USE_V1=0 pytest -v -s entrypoints/offline_mode # Needs to avoid interference with other tests

 - label: Distributed Tests (4 GPUs) # 10min
   mirror_hardwares: [amdexperimental]
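
For reference, here is a sketch of how the new Entrypoints Test (API Server) step should read in .buildkite/test-pipeline.yaml once this diff is applied. It is assembled only from the added ('+') lines of the second hunk above; the two-space indentation is an assumption based on the neighboring steps.

# Sketch of the new Buildkite step, reconstructed from the '+' lines above; indentation assumed.
- label: Entrypoints Test (API Server) # 40min
  mirror_hardwares: [amdexperimental]
  working_dir: "/vllm-workspace/tests"
  fast_check: true
  torch_nightly: true
  source_file_dependencies:
  - vllm/
  - tests/entrypoints/openai
  - tests/entrypoints/test_chat_utils
  commands:
  - export VLLM_WORKER_MULTIPROC_METHOD=spawn
  - pytest -v -s entrypoints/openai --ignore=entrypoints/openai/test_chat_with_tool_reasoning.py --ignore=entrypoints/openai/test_oot_registration.py --ignore=entrypoints/openai/test_tensorizer_entrypoint.py --ignore=entrypoints/openai/correctness/
  - pytest -v -s entrypoints/test_chat_utils.py

The split keeps the LLM and offline_mode tests in the renamed Entrypoints Test (LLM) step and moves the OpenAI-compatible API server and chat-utils tests into this new step, so the two halves can run as separate CI jobs.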
