Enable v1 metrics tests #20953


Open
wants to merge 8 commits into main

Conversation

eicherseiji (Contributor) commented on Jul 14, 2025

Essential Elements of an Effective PR Description Checklist

  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing a test command.
  • The test results, such as pasting a comparison of results before and after, or e2e results.
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.

Purpose

The test covering RayPrometheusStatLogger was not enabled in CI, so RayGaugeWrapper fell out of sync with the prometheus_client.Gauge.__init__ signature.

This change enables the test and resolves the signature incompatibility.

Resolves #20954
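
For context, here is a minimal sketch of the failure mode and the fix, assuming a wrapper that mirrors the prometheus_client.Gauge constructor on top of ray.util.metrics.Gauge. This is illustrative only, not the exact vLLM implementation, and the offending keyword argument (multiprocess_mode) is shown as an example:

```python
from ray.util import metrics as ray_metrics


class GaugeWrapperSketch:
    """Illustrative stand-in for RayGaugeWrapper (names and kwargs are assumptions)."""

    def __init__(self, name: str, documentation: str = "",
                 labelnames: list[str] | None = None,
                 multiprocess_mode: str = ""):
        # Accepting every keyword that callers pass to prometheus_client.Gauge keeps
        # the wrapper call-compatible; a narrower __init__ raises
        # "TypeError: ... got an unexpected keyword argument", as in #20954.
        del multiprocess_mode  # meaningless for Ray metrics; accepted only for signature parity
        self._gauge = ray_metrics.Gauge(name=name,
                                        description=documentation,
                                        tag_keys=tuple(labelnames or ()))

    def set(self, value: float, labels: dict[str, str] | None = None) -> None:
        self._gauge.set(value, tags=labels or {})
```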

Test Plan

pytest -vs test_ray_metrics.py
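
For reviewers unfamiliar with the test, the rough shape is sketched below. The actor name, import paths, and especially the stat_loggers wiring are assumptions inferred from this PR's description and the log output in the Test Result section, not a copy of test_ray_metrics.py:

```python
import ray
from vllm import SamplingParams
from vllm.engine.arg_utils import AsyncEngineArgs
from vllm.v1.engine.async_llm import AsyncLLM
from vllm.v1.metrics.ray_wrappers import RayPrometheusStatLogger  # assumed import path


@ray.remote(num_gpus=1)
class EngineTestActor:
    """Runs AsyncLLM inside a Ray actor so RayPrometheusStatLogger reports to Ray metrics."""

    async def run(self) -> None:
        engine_args = AsyncEngineArgs(model="distilbert/distilgpt2",
                                      dtype="half",
                                      enforce_eager=True,
                                      disable_log_stats=False)
        # The stat_loggers hook is an assumption about how the Ray logger is injected.
        engine = AsyncLLM.from_engine_args(engine_args,
                                           stat_loggers=[RayPrometheusStatLogger])
        for i in range(8):
            async for _ in engine.generate(request_id=f"request-id-{i}",
                                           prompt="Hello, my name is",
                                           sampling_params=SamplingParams(max_tokens=16)):
                pass


def test_engine_log_metrics_ray():
    # Smoke test: the engine runs to completion with the Ray stat logger attached.
    actor = EngineTestActor.remote()
    ray.get(actor.run.remote())
```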

Test Result

(base) ray@ip-10-0-251-217:~/default/work/vllm/tests$ pytest -vs v1/metrics/
INFO 07-15 16:43:37 [__init__.py:253] Automatically detected platform cuda.
=================================================================================================== test session starts ===================================================================================================
platform linux -- Python 3.11.11, pytest-8.4.1, pluggy-1.5.0 -- /home/ray/anaconda3/bin/python
cachedir: .pytest_cache
rootdir: /home/ray/default/work/vllm
configfile: pyproject.toml
plugins: asyncio-1.0.0, anyio-3.7.1
asyncio: mode=Mode.STRICT, asyncio_default_fixture_loop_scope=None, asyncio_default_test_loop_scope=function
collected 1 item                                                                                                                                                                                                          

v1/metrics/test_ray_metrics.py::test_engine_log_metrics_ray[16-half-distilbert/distilgpt2] 2025-07-15 16:43:39,481      INFO worker.py:1747 -- Connecting to existing Ray cluster at address: 10.0.251.217:6379...
2025-07-15 16:43:39,495 INFO worker.py:1918 -- Connected to Ray cluster. View the dashboard at https://session-154ds4pw3y7x28guz2qyjlud7s.i.anyscaleuserdata-staging.com 
2025-07-15 16:43:39,508 INFO packaging.py:380 -- Pushing file package 'gcs://_ray_pkg_6944599221eda8fc6330e84642d878ad23c7843b.zip' (4.91MiB) to Ray cluster...
2025-07-15 16:43:39,524 INFO packaging.py:393 -- Successfully pushed file package 'gcs://_ray_pkg_6944599221eda8fc6330e84642d878ad23c7843b.zip'.
(pid=85702) INFO 07-15 16:43:43 [__init__.py:253] Automatically detected platform cuda.
(EngineTestActor pid=85702) INFO 07-15 16:43:51 [config.py:3485] Downcasting torch.float32 to torch.float16.
(EngineTestActor pid=85702) INFO 07-15 16:43:51 [config.py:1561] Using max model len 1024
(EngineTestActor pid=85702) WARNING 07-15 16:43:51 [arg_utils.py:1788] Detected VLLM_USE_V1=1 with Engine in background thread. Usage should be considered experimental. Please report any issues on Github.
(EngineTestActor pid=85702) INFO 07-15 16:43:51 [config.py:2380] Chunked prefill is enabled with max_num_batched_tokens=2048.
(EngineTestActor pid=85702) WARNING 07-15 16:43:51 [cuda.py:103] To see benefits of async output processing, enable CUDA graph. Since, enforce-eager is enabled, async output processor cannot be used
(EngineTestActor pid=85702) WARNING 07-15 16:43:51 [__init__.py:2870] We must use the `spawn` multiprocessing start method. Overriding VLLM_WORKER_MULTIPROC_METHOD to 'spawn'. See https://docs.vllm.ai/en/latest/usage/troubleshooting.html#python-multiprocessing for more information. Reason: In a Ray actor and can only be spawned
(EngineTestActor pid=85702) INFO 07-15 16:43:55 [__init__.py:253] Automatically detected platform cuda.
(EngineTestActor pid=85702) INFO 07-15 16:43:57 [core.py:526] Waiting for init message from front-end.
(EngineTestActor pid=85702) INFO 07-15 16:43:57 [core.py:69] Initializing a V1 LLM engine (v0.1.dev7731+g480beba) with config: model='distilbert/distilgpt2', speculative_config=None, tokenizer='distilbert/distilgpt2', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config={}, tokenizer_revision=None, trust_remote_code=False, dtype=torch.float16, max_seq_len=1024, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=True, kv_cache_dtype=auto,  device_config=cuda, decoding_config=DecodingConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_backend=''), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None), seed=0, served_model_name=distilbert/distilgpt2, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=False, pooler_config=None, compilation_config={"level":0,"debug_dump_path":"","cache_dir":"","backend":"","custom_ops":[],"splitting_ops":[],"use_inductor":true,"compile_sizes":[],"inductor_compile_config":{"enable_auto_functionalized_v2":false},"inductor_passes":{},"use_cudagraph":true,"cudagraph_num_of_warmups":0,"cudagraph_capture_sizes":[],"cudagraph_copy_inputs":false,"full_cuda_graph":false,"max_capture_size":0,"local_cache_dir":null}
(EngineTestActor pid=85702) INFO 07-15 16:43:58 [parallel_state.py:1090] rank 0 in world size 1 is assigned as DP rank 0, PP rank 0, TP rank 0, EP rank 0
(EngineTestActor pid=85702) WARNING 07-15 16:43:58 [topk_topp_sampler.py:59] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
(EngineTestActor pid=85702) INFO 07-15 16:43:58 [gpu_model_runner.py:1742] Starting to load model distilbert/distilgpt2...
(EngineTestActor pid=85702) INFO 07-15 16:43:59 [gpu_model_runner.py:1747] Loading model from scratch...
(EngineTestActor pid=85702) INFO 07-15 16:43:59 [cuda.py:290] Using Flash Attention backend on V1 engine.
(EngineTestActor pid=85702) INFO 07-15 16:43:59 [weight_utils.py:296] Using model weights format ['*.safetensors']
Loading safetensors checkpoint shards:   0% Completed | 0/1 [00:00<?, ?it/s]
(EngineTestActor pid=85702) INFO 07-15 16:43:59 [weight_utils.py:349] No model.safetensors.index.json found in remote.
Loading safetensors checkpoint shards: 100% Completed | 1/1 [00:00<00:00,  7.41it/s]
Loading safetensors checkpoint shards: 100% Completed | 1/1 [00:00<00:00,  7.40it/s]
(EngineTestActor pid=85702) 
(EngineTestActor pid=85702) INFO 07-15 16:43:59 [default_loader.py:272] Loading weights took 0.15 seconds
(EngineTestActor pid=85702) INFO 07-15 16:44:00 [gpu_model_runner.py:1773] Model loading took 0.1547 GiB and 0.582640 seconds
(EngineTestActor pid=85702) INFO 07-15 16:44:01 [gpu_worker.py:244] Available KV cache memory: 19.34 GiB
(EngineTestActor pid=85702) INFO 07-15 16:44:01 [kv_cache_utils.py:728] GPU KV cache size: 1,126,912 tokens
(EngineTestActor pid=85702) INFO 07-15 16:44:01 [kv_cache_utils.py:732] Maximum concurrency for 1,024 tokens per request: 1100.50x
(EngineTestActor pid=85702) INFO 07-15 16:44:01 [core.py:172] init engine (profile, create kv cache, warmup model) took 1.18 seconds
(EngineTestActor pid=85702) INFO 07-15 16:44:01 [async_llm.py:270] Added request request-id-0.
(EngineTestActor pid=85702) INFO 07-15 16:44:01 [async_llm.py:270] Added request request-id-1.
(EngineTestActor pid=85702) INFO 07-15 16:44:01 [async_llm.py:270] Added request request-id-2.
(EngineTestActor pid=85702) INFO 07-15 16:44:01 [async_llm.py:270] Added request request-id-3.
(EngineTestActor pid=85702) INFO 07-15 16:44:01 [async_llm.py:270] Added request request-id-4.
(EngineTestActor pid=85702) INFO 07-15 16:44:01 [async_llm.py:270] Added request request-id-5.
(EngineTestActor pid=85702) INFO 07-15 16:44:02 [async_llm.py:270] Added request request-id-6.
(EngineTestActor pid=85702) INFO 07-15 16:44:02 [async_llm.py:270] Added request request-id-7.
(EngineTestActor pid=85702) [rank0]:[W715 16:44:02.950199604 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
PASSED

=================================================================================================== 1 passed in 24.61s ====================================================================================================

(Optional) Documentation Update

Signed-off-by: Seiji Eicher <seiji@anyscale.com>

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run the fastcheck CI, which covers a small, essential subset of tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

gemini-code-assist bot (Contributor) left a comment

Summary of Changes

Hello @eicherseiji, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request focuses on enhancing the project's continuous integration by enabling the execution of v1 metrics-related tests within the Buildkite pipeline. This change ensures that the metrics collection and reporting components are regularly validated, contributing to the overall stability and reliability of the system.

Highlights

  • CI/CD Pipeline Enhancement: Integrated the v1/metrics test suite into the Buildkite continuous integration pipeline to ensure automated testing of metrics functionality.

mergify bot added the ci/build label on Jul 14, 2025
gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request enables the v1 metrics tests in the CI pipeline. The change itself is correct. My review focuses on the newly enabled tests and their underlying implementation. I've identified a couple of areas for improvement: the main test for Ray metrics could be strengthened to verify metric values instead of just being a smoke test, and a method in the Ray metrics wrapper could be made safer to prevent ambiguous usage. These suggestions aim to improve the robustness and maintainability of the metrics feature.
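
As a concrete illustration of the reviewer's suggestion, one way to go beyond a smoke test is to attach a recording logger alongside the Ray one and assert on the values it captures. The record()/log() interface shown here is an assumption about vLLM's v1 stat-logger protocol, and the helper names are hypothetical:

```python
class RecordingStatLogger:
    """Duck-typed stat logger that captures per-iteration stats for later assertions."""

    def __init__(self):
        self.iteration_stats = []

    def record(self, scheduler_stats, iteration_stats):
        # Mirror the (assumed) record() signature and keep whatever the engine reports.
        if iteration_stats is not None:
            self.iteration_stats.append(iteration_stats)

    def log(self):
        pass  # nothing to flush; values are checked directly in the test


def assert_generation_observed(logger: RecordingStatLogger) -> None:
    # After the requests complete, at least one iteration should report generated
    # tokens, showing the metrics path carried non-trivial values.
    total = sum(getattr(s, "num_generation_tokens", 0) for s in logger.iteration_stats)
    assert total > 0, "expected the stat logger to observe generated tokens"
```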

eicherseiji changed the title from "Enable v1 metrics tests" to "Add add_logger API to AsyncLLM" on Jul 15, 2025
mergify bot added the v1 label on Jul 15, 2025
eicherseiji changed the title from "Add add_logger API to AsyncLLM" back to "Enable v1 metrics tests" on Jul 15, 2025
eicherseiji marked this pull request as ready for review on July 15, 2025 at 23:49
eicherseiji (Contributor, Author) commented:

Hi @markmc! Your review would also be appreciated since we worked on this together in the past. Thanks.

Development

Successfully merging this pull request may close these issues.

[Bug]: TypeError: RayGaugeWrapper.__init__() got an unexpected keyword argument
1 participant