A developer friendly tool for multi-instance deployment with Ray Implementation #20761
Conversation
Summary of Changes
Hello @Gongzq5, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request introduces a new, developer-friendly tool named `dllm` (Distributed LLM) designed to streamline the deployment and management of vLLM inference instances across a Ray cluster. It establishes a robust architecture that supports multi-instance setups, including disaggregated prefill and decode roles, and various parallelism techniques such as Tensor, Data, and Expert Parallelism. The tool provides a centralized controller for orchestration, a balancer for intelligent request routing and health monitoring, and an OpenAI-compatible API endpoint for serving requests, all managed via a convenient command-line interface.
Highlights
- New Distributed LLM (DLLM) Tool: Introduces a comprehensive tool for deploying and managing multiple vLLM instances on a Ray cluster, designed for developer friendliness.
- Modular Architecture: Implements a modular design with distinct components: a `Controller` for orchestration, a `Balancer` for intelligent request routing and health monitoring, and an `Endpoint` for API serving (see the sketch after this list).
- Parallelism Support: Provides robust support for various parallelism strategies, including Tensor Parallelism (TP), Data Parallelism (DP), and Expert Parallelism (EP), for vLLM instances.
- Disaggregated Serving: Enables disaggregated prefill and decode instances, allowing for optimized resource utilization based on workload characteristics, alongside mixed-role instances.
- CLI for Deployment & Management: Includes a `click`-based command-line interface (`dllm`) for easy deployment, configuration, and cleanup of the distributed vLLM cluster.
- Health Monitoring & Metrics: Integrates continuous health checks and Prometheus-based metrics collection for vLLM instances, enabling dynamic load balancing and failure detection.
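For readers new to this layout, the sketch below shows the general controller/balancer/instance pattern as plain Ray actors. The class and method names (`VllmInstanceStub`, `BalancerStub`, `route`, `handle`) are illustrative assumptions for exposition, not the actual dllm API introduced by this PR.

```python
# Illustrative sketch only: the actor and method names below are
# assumptions for exposition, not the real dllm classes in this PR.
import ray

ray.init(ignore_reinit_error=True)


@ray.remote
class VllmInstanceStub:
    """Stands in for one vLLM serving instance."""

    def __init__(self, name: str):
        self.name = name

    def handle(self, prompt: str) -> str:
        return f"{self.name} handled: {prompt}"


@ray.remote
class BalancerStub:
    """Routes requests across instances (simple round-robin here)."""

    def __init__(self, instances):
        self.instances = instances
        self.next_idx = 0

    async def route(self, prompt: str) -> str:
        target = self.instances[self.next_idx]
        self.next_idx = (self.next_idx + 1) % len(self.instances)
        # Forward to the chosen instance and await its reply.
        return await target.handle.remote(prompt)


# The "controller" role: create the instances and hand them to the balancer.
instances = [VllmInstanceStub.remote(f"instance-{i}") for i in range(2)]
balancer = BalancerStub.remote(instances)

print(ray.get(balancer.route.remote("hello")))  # instance-0 handled: hello
print(ray.get(balancer.route.remote("world")))  # instance-1 handled: world
```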
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either `/gemini <command>` or `@gemini-code-assist <command>`. Below is a summary of the supported commands.
| Feature | Command | Description |
|---|---|---|
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a `.gemini/` folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.
You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default; only a limited subset of checks runs automatically. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add 🚀
Code Review
This pull request introduces `dllm`, a promising tool for managing distributed vLLM deployments on Ray. The architecture, leveraging a controller, balancer, and instance actors, is well-conceived for scalability. However, my review has uncovered several critical and high-severity issues that need attention before this can be merged. These include a typo in a constant that will break metrics, bugs in the resource allocation logic, calls to non-existent remote methods, and incorrect state handling in the controller. I've also highlighted several medium-severity issues concerning error handling, logging, and API design that will improve the tool's robustness and maintainability. Addressing these points will significantly strengthen this valuable contribution.
```python
NUM_RUNNING_REQUESTS = "vllm:num_requests_running"
NUM_WAITING_REQUESTS = "vllm:nun_requests_waiting"
```
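This appears to be the metric-name typo called out in the review summary ("nun" instead of "num"). Assuming the standard vLLM Prometheus gauge names, the intended constants would read:

```python
# Assuming the standard vLLM Prometheus gauge names:
NUM_RUNNING_REQUESTS = "vllm:num_requests_running"
NUM_WAITING_REQUESTS = "vllm:num_requests_waiting"  # "num", not "nun"
```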
```python
              if accelerators_pack_max_size % tp_size != 0 else accelerators_pack_max_size)
num_groups = total_accelerators // group_size
remainder = total_accelerators % group_size
packs = [group_size * num_groups]
```
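The review summary mentions bugs in the resource allocation logic; in this snippet, `packs = [group_size * num_groups]` builds a single pack of size `group_size * num_groups` rather than `num_groups` packs of `group_size`. A sketch of the presumed intent follows (an assumption about the desired behaviour, not the author's confirmed fix):

```python
# Example values; the real ones come from the cluster and TP configuration.
total_accelerators = 10
group_size = 4

num_groups = total_accelerators // group_size
remainder = total_accelerators % group_size

# Presumed intent: num_groups packs of group_size, plus one smaller pack
# for any remainder. This is an assumption, not the author's confirmed fix.
packs = [group_size] * num_groups
if remainder:
    packs.append(remainder)

print(packs)  # [4, 4, 2]
```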
```python
await dp_master_actor.init_dp_config.remote(
    dp_master_vllm_instance_config.dp_config
)  # type: ignore # ray remote call
```
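The review summary also mentions calls to non-existent remote methods. As a general Ray note (independent of whether `init_dp_config` actually exists on this actor), an actor handle only exposes methods defined on the actor class, so a missing remote method fails with `AttributeError` as soon as it is accessed:

```python
# Minimal, self-contained demonstration (stub names, not dllm code):
# a Ray actor handle only exposes methods defined on the actor class,
# so accessing a method that does not exist raises AttributeError.
import ray

ray.init(ignore_reinit_error=True)


@ray.remote
class DpMasterStub:
    def ping(self) -> str:
        return "pong"


actor = DpMasterStub.remote()
print(ray.get(actor.ping.remote()))  # "pong"

try:
    actor.init_dp_config.remote({})  # not defined on DpMasterStub
except AttributeError as exc:
    print(f"missing remote method: {exc}")
```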
```python
self.balancer.update_vllm_instance_info.remote(  # type: ignore # ray remote call
    list(self.vllm_instances_info.values()))
```
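One general caveat with this kind of fire-and-forget `.remote()` call (a Ray-level observation, not necessarily the reviewer's point here): if the returned `ObjectRef` is never retrieved, an exception raised inside the balancer is silently dropped. A minimal demonstration with stub names:

```python
# Minimal demonstration with stub names (not dllm code): the first call
# drops the actor's exception because nothing retrieves the ObjectRef;
# the second surfaces it via ray.get.
import ray

ray.init(ignore_reinit_error=True)


@ray.remote
class BalancerStub:
    def update_info(self, infos):
        raise RuntimeError("balancer update failed")


balancer = BalancerStub.remote()

balancer.update_info.remote([])  # fire-and-forget: error never observed here

try:
    ray.get(balancer.update_info.remote([]))  # error raised to the caller
except Exception as exc:
    print(f"surfaced remote error: {exc}")
```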
```python
num = e.get("Resources", {}).get(current_platform.device_name, None)
if num:
    accelerator_nums.append(int(num))
return max(accelerator_nums)
```
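A related fragility worth noting: if no node reports the accelerator resource, `accelerator_nums` stays empty and `max()` raises `ValueError`. A defensive variant, assuming `nodes` is the list returned by `ray.nodes()` and `device_name` is the resource key (both assumptions carried over from the snippet):

```python
def max_accelerators_per_node(nodes, device_name: str) -> int:
    """Largest accelerator count reported by any node, or 0 if none report it.

    Assumptions: `nodes` is the list returned by ray.nodes(), and
    `device_name` is the resource key (e.g. "GPU"), mirroring the snippet.
    """
    counts = [
        int(node.get("Resources", {}).get(device_name, 0)) for node in nodes
    ]
    counts = [c for c in counts if c > 0]
    return max(counts, default=0)
```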
```python
return

env_dict = dict(
    item.split('=') for item in self._vllm_instance_config.env.split())
```
Using `item.split('=')` to parse environment variables is not robust. If an environment variable's value contains an equals sign (`=`), this will split incorrectly. Suggested change:

```diff
-    item.split('=') for item in self._vllm_instance_config.env.split())
+    item.split('=', 1) for item in self._vllm_instance_config.env.split())
```
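A further caveat beyond the point above: `env.split()` splits on whitespace, so a value that itself contains spaces is also mangled. If quoted values need to be supported, `shlex.split` from the standard library handles that; a sketch assuming the same `KEY=VALUE ...` string format:

```python
import shlex

# Hypothetical env string; TOKEN's value contains '=' and EXTRA_ARGS
# contains a space, both of which the naive parser would mangle.
env_str = 'HTTP_PROXY=http://proxy:8080 EXTRA_ARGS="--foo --bar" TOKEN=a=b'

env_dict = dict(item.split("=", 1) for item in shlex.split(env_str))
print(env_dict["EXTRA_ARGS"])  # --foo --bar
print(env_dict["TOKEN"])       # a=b
```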
```python
async with aiohttp.ClientSession(timeout=aiohttp.ClientTimeout(
        total=6 * 60 * 60)) as session:
```
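As context for the six-hour timeout above (a general aiohttp note, not necessarily the point of the review comment): `aiohttp.ClientTimeout` also accepts finer-grained fields such as `connect` and `sock_read`, which lets long streaming responses stay open without a near-unbounded `total`. A sketch, assuming long-lived streaming responses are the reason for the large value:

```python
import aiohttp

# Assumption: the six-hour value exists to keep long streaming responses
# alive. Bounding the idle gap between chunks (sock_read) and the connect
# phase separately avoids a near-unbounded total timeout.
STREAMING_TIMEOUT = aiohttp.ClientTimeout(total=None, connect=30, sock_read=600)


async def fetch_text(url: str) -> str:
    async with aiohttp.ClientSession(timeout=STREAMING_TIMEOUT) as session:
        async with session.get(url) as resp:
            return await resp.text()
```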
```python
try:
    request = CompletionRequest(**request_body)
except pydantic.ValidationError as e:
    return Response(status_code=500, content={"error": str(e)})
```
When returning an error response, it's better to use `fastapi.responses.JSONResponse` to ensure the dictionary content is correctly serialized to a JSON string with the appropriate `Content-Type` header. Suggested change:

```diff
-        return Response(status_code=500, content={"error": str(e)})
+        return JSONResponse(status_code=500, content={"error": str(e)})
```
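If this suggestion is adopted, the corresponding import is needed (this is the standard FastAPI helper):

```python
from fastapi.responses import JSONResponse
```

As an aside, a 4xx status (for example 400 or 422) arguably describes a request validation failure better than 500, though that is a separate decision from this suggestion.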
```python
assert isinstance(request.prompt,
                  str), "currently only support one prompt at a time"
```
Using `assert` for request validation will result in a 500 Internal Server Error if the condition is not met. It's better to handle this validation explicitly and return a 400 Bad Request to the client, which is more informative. Suggested change:

```diff
-    assert isinstance(request.prompt,
-                      str), "currently only support one prompt at a time"
+    if not isinstance(request.prompt, str):
+        return Response(status_code=400, content="This endpoint only supports a single prompt string.")
```
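A small optional follow-up to the suggestion above: returning the error in the same JSON shape used by the earlier handler keeps client-side error handling uniform. A sketch (the helper name and message wording are hypothetical):

```python
from typing import Optional

from fastapi.responses import JSONResponse


def reject_non_string_prompt(prompt) -> Optional[JSONResponse]:
    """Hypothetical helper: return a 400 JSON error for unsupported prompts."""
    if not isinstance(prompt, str):
        return JSONResponse(
            status_code=400,
            content={"error": "This endpoint only supports a single prompt string."},
        )
    return None
```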
```python
def __init__(self, name: str, vllm_config: VllmInstanceConfig):
    """
    Args:
        env: the environment variables pass to subprocess
        exec_cmd: the vllm api server startup command, e.g. ["vllm", "serve", "--a=1", "--b=2"]
    """
```
The docstring for `__init__` is out of sync with the method's signature. It mentions `env` and `exec_cmd` as parameters, but the actual parameters are `name` and `vllm_config`. Please update the docstring to reflect the current implementation. Suggested change:

```diff
     def __init__(self, name: str, vllm_config: VllmInstanceConfig):
         """
         Args:
-            env: the environment variables pass to subprocess
-            exec_cmd: the vllm api server startup command, e.g. ["vllm", "serve", "--a=1", "--b=2"]
+            name: The name of this vllm instance actor.
+            vllm_config: The configuration for this vllm instance.
         """
```
Essential Elements of an Effective PR Description Checklist
- ... update `supported_models.md` and `examples` for a new model.

Purpose
#20476

Test Plan

Test Result

(Optional) Documentation Update