
Conversation

@lilinsiman (Contributor) commented on Oct 11, 2025

What this PR does / why we need it?

This PR adds an online single-request DP2 (data-parallel size 2) test case for aclgraph.

Does this PR introduce any user-facing change?

no

How was this patch tested?

ut
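
For context, below is a minimal, hypothetical sketch of the kind of test this PR adds, pieced together from the snippets quoted in the review comments further down. The model name, prompt, request parameters, and test name are illustrative assumptions, not the PR's actual values.

import openai
import pytest

from tests.e2e.conftest import RemoteOpenAIServer  # helper discussed in the review below
from vllm.utils import get_open_port

MODELS = ["Qwen/Qwen2.5-0.5B-Instruct"]  # placeholder model, not the PR's choice
DATA_PARALLELS = [2]  # DP2, as described above


@pytest.mark.asyncio
@pytest.mark.parametrize("model", MODELS)
@pytest.mark.parametrize("dp_size", DATA_PARALLELS)
async def test_single_request_dp_aclgraph(model: str, dp_size: int) -> None:
    port = get_open_port()  # dynamically allocated to avoid port conflicts
    env_dict = {
        "TASK_QUEUE_ENABLE": "1",
        "HCCL_OP_EXPANSION_MODE": "AIV",
    }
    server_args = [
        "--no-enable-prefix-caching", "--tensor-parallel-size", "1",
        "--data-parallel-size", str(dp_size), "--port", str(port),
        "--trust-remote-code", "--gpu-memory-utilization", "0.9",
    ]
    # Keyword arguments follow the RemoteOpenAIServer signature described in the review.
    with RemoteOpenAIServer(model=model,
                            server_host="localhost",
                            server_port=port,
                            vllm_serve_args=server_args,
                            env_dict=env_dict,
                            auto_port=False):
        client = openai.AsyncOpenAI(base_url=f"http://localhost:{port}/v1",
                                    api_key="EMPTY")
        # A single online completion request exercises the aclgraph path under DP=2.
        completion = await client.completions.create(
            model=model, prompt="Hello, my name is", max_tokens=16)
        assert completion.choices[0].text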


👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message by filling in the PR description to help reviewers and future developers understand.

If CI fails, you can run the linting and testing checks locally according to Contributing and Testing.

@gemini-code-assist (bot) left a comment

Code Review

This pull request adds a new end-to-end test for aclgraph with data parallelism. The test is well-structured, but there's a critical issue in how the test server is initialized that will prevent the test from running. I've also identified a potential source of test flakiness due to a hardcoded port and provided a comprehensive suggestion to fix both issues, making the test more robust.

Comment on lines 44 to 63
    env_dict = {
        "TASK_QUEUE_ENABLE": "1",
        "HCCL_OP_EXPANSION_MODE": "AIV",
    }
    server_args = [
        "--no-enable-prefix-caching", "--tensor-parallel-size",
        "1", "--data-parallel-size", str(dp_size), "--port", "20002",
        "--trust-remote-code", "--gpu-memory-utilization", "0.9"
    ]
    request_keyword_args: dict[str, Any] = {
        **api_keyword_args,
    }
    with RemoteOpenAIServer(model,
                            server_args,
                            server_port=20002,
                            env_dict=env_dict,
                            auto_port=False) as server:

critical

There are a couple of issues with the RemoteOpenAIServer setup:

  1. Incorrect RemoteOpenAIServer call (Critical): The current call to RemoteOpenAIServer is incorrect. The __init__ signature in tests/e2e/conftest.py is (self, model: str, server_host: str, server_port: int, vllm_serve_args: list[str], ...). Your call provides server_args (a list) as the second positional argument for server_host (a string), and misses the required vllm_serve_args argument. This will cause a TypeError at runtime.

  2. Hardcoded port (High): The test uses a hardcoded port 20002. This can cause flaky tests due to port conflicts when running tests in parallel or if the port is already occupied. It's better to use a dynamically allocated port.

Here is a suggested fix that addresses both issues by using vllm.utils.get_open_port to dynamically allocate a port and corrects the RemoteOpenAIServer call. The import is added inside the function for simplicity of the suggestion.

    from vllm.utils import get_open_port

    port = get_open_port()
    env_dict = {
        "TASK_QUEUE_ENABLE": "1",
        "HCCL_OP_EXPANSION_MODE": "AIV",
    }
    server_args = [
        "--no-enable-prefix-caching", "--tensor-parallel-size",
        "1", "--data-parallel-size", str(dp_size), "--port", str(port),
        "--trust-remote-code", "--gpu-memory-utilization", "0.9"
    ]
    request_keyword_args: dict[str, Any] = {
        **api_keyword_args,
    }
    with RemoteOpenAIServer(model=model,
                            server_host="localhost",
                            server_port=port,
                            vllm_serve_args=server_args,
                            env_dict=env_dict,
                            auto_port=False) as server:

A collaborator commented:

@lilinsiman Please check these suggested changes.

@lilinsiman (PR author) replied:

> @lilinsiman Please check these suggested changes.

After checking, the suggested change is indeed better: calling get_open_port automatically returns an available port, ensuring the port is not already occupied and is therefore usable.
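
For reference, helpers like vllm.utils.get_open_port usually obtain a free port by binding to port 0 and letting the OS choose one. A minimal sketch of that pattern follows; it is an assumption about how such a helper behaves, not the actual vLLM implementation.

import socket

def sketch_get_open_port() -> int:
    # Binding to port 0 asks the OS to pick any currently unused port.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("", 0))
        return s.getsockname()[1]  # the port the OS actually assigned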

@lilinsiman force-pushed the add_single_request_aclgraph branch 2 times, most recently from d6afb6e to 279e6db, on October 13, 2025 06:56
@wangxiyuan (Collaborator) commented:

Please enable this test in the .github workflow.

@pytest.mark.asyncio
@pytest.mark.parametrize("model", MODELS)
@pytest.mark.parametrize("dp_size", DATA_PARALLELS)
async def test_models(model: str, dp_size: int) -> None:
@lilinsiman (PR author): changed the test case name.

    server_args = [
        "--no-enable-prefix-caching", "--tensor-parallel-size", "1",
        "--data-parallel-size",
        str(dp_size), "--port", "20002", "--trust-remote-code",
@lilinsiman (PR author): deleted the hardcoded port code.
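
A hedged sketch of what the reworked arguments might look like after dropping the explicit --port flag, assuming the server helper is then left to allocate the port itself; this is illustrative, not the PR's final code.

    server_args = [
        "--no-enable-prefix-caching", "--tensor-parallel-size", "1",
        "--data-parallel-size", str(dp_size),
        "--trust-remote-code", "--gpu-memory-utilization", "0.9",
    ]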

@lilinsiman force-pushed the add_single_request_aclgraph branch from 279e6db to e73fd45 on October 13, 2025 09:04
Signed-off-by: lilinsiman <lilinsiman@gmail.com>
@lilinsiman force-pushed the add_single_request_aclgraph branch from e73fd45 to 407fb7a on October 13, 2025 10:14
@MengqingCao added the ready (ready for review) and ready-for-test (start test by label for PR) labels on Oct 13, 2025

Labels

module:tests, ready (ready for review), ready-for-test (start test by label for PR)
