[V1] [Hybrid] Enable Full CUDA Graph (decode-only) for Mamba layers #21401
base: main
Conversation
Signed-off-by: fhl2000 <63384265+fhl2000@users.noreply.github.com>
Signed-off-by: Thomas Parnell <tpa@zurich.ibm.com>
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
Code Review
This pull request refactors the CUDA graph support flag in attention backends to be more granular, using an enum instead of a boolean. This is a good change that enables full CUDA graph capture for decode-only batches in hybrid models, such as those using Mamba layers. The changes are well implemented across the various backend files. I found a minor copy-paste error in a docstring/assertion message in the Mamba attention backend and a style violation in the FlashInfer backend. Overall, the changes look good and address the intended purpose.
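To make the granularity concrete, here is a minimal sketch of what an enum-based support flag could look like; the enum name, its members, and the backend attribute are illustrative assumptions and may not match the names used in this PR.

from enum import Enum

# Hypothetical enum replacing the old boolean flag; names are illustrative.
class AttentionCGSupport(Enum):
    NEVER = 0              # backend can never be captured in a full CUDA graph
    PURE_DECODE_ONLY = 1   # capture only batches made up entirely of decode requests
    ALWAYS = 2             # capture any batch shape

class MambaAttentionBackend:
    # A boolean could only say "supported or not"; the enum lets hybrid
    # models opt in to decode-only capture without claiming full support.
    cudagraph_support = AttentionCGSupport.PURE_DECODE_ONLY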
use_cudagraph = (self.enable_cuda_graph and pure_decode and \
                 num_decodes <= self._decode_cudagraph_max_bs)
The backslash \ for line continuation is redundant here because the expression is already inside parentheses. According to PEP 8, implied line continuation within parentheses is preferred. Removing the backslash will make the code cleaner and more compliant with standard Python style.
Suggested change:
use_cudagraph = (self.enable_cuda_graph and pure_decode and
                 num_decodes <= self._decode_cudagraph_max_bs)
This method builds the metadata for full cudagraph capture.
Currently, only decode is supported for full cudagraphs with MLA.
"""
m = common_attn_metadata
assert m.num_reqs == m.num_actual_tokens, \
    "MLA only supports decode-only full CUDAGraph capture. " \
    "Make sure all cudagraph capture sizes <= max_num_seq."
The docstring and assertion message here refer to 'MLA' (Multi-head Latent Attention), but this file is for Mamba attention. This seems to be a copy-paste error and could be confusing for future developers. Please update them to refer to 'Mamba'.
"""
This method builds the metadata for full cudagraph capture.
Currently, only decode is supported for full cudagraphs with Mamba.
"""
m = common_attn_metadata
assert m.num_reqs == m.num_actual_tokens, \
"Mamba only supports decode-only full CUDAGraph capture. " \
"Make sure all cudagraph capture sizes <= max_num_seq."
Signed-off-by: Burkhard Ringlein <ngl@zurich.ibm.com>
Essential Elements of an Effective PR Description Checklist
(Optional) Documentation update, e.g. supported_models.md and examples for a new model.

Purpose
Allow full CUDA graphs (FCG) to be used for batches that contain only decode requests. This PR will close the last remaining performance gap to V0 for hybrid models; a sketch of the decode-only condition is included below.
Should be merged after #21367 (diff will be much smaller)
cc @heheda12345 @tlrmchlsmth @fhl2000
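As a rough illustration of the decode-only condition this PR relies on, the sketch below shows how a pure-decode batch could be detected before taking the full CUDA graph path; the function and parameter names are hypothetical, not the PR's actual implementation.

# Hypothetical helpers; names are illustrative only.
def is_pure_decode(num_reqs: int, num_actual_tokens: int) -> bool:
    # In a decode-only batch every request contributes exactly one new token,
    # so the total token count equals the number of requests.
    return num_reqs == num_actual_tokens

def should_use_full_cudagraph(num_reqs: int, num_actual_tokens: int,
                              decode_cudagraph_max_bs: int) -> bool:
    # Fall back to piecewise/eager execution for prefill or oversized batches.
    return (is_pure_decode(num_reqs, num_actual_tokens)
            and num_reqs <= decode_cudagraph_max_bs)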
Test Plan
TBD
Test Result
TBD
(Optional) Documentation Update