[V1][CUDA] Full cudagraph support for FlashInfer #21367
base: main
Conversation
Signed-off-by: fhl2000 <63384265+fhl2000@users.noreply.github.com>
Code Review
This pull request introduces full CUDA graph support for the FlashInfer attention backend, which is a great performance enhancement for pure decode scenarios. The use of a new `AttentionCGSupport` enum to manage different levels of CUDA graph support across backends is a solid design choice that improves code clarity and maintainability.
The PR also includes an important bug fix to prevent graph capture for unsupported batch sizes, which is crucial for stability. I've identified one critical issue where a data structure for padding batch sizes is not updated after filtering the capture sizes, which could lead to runtime errors. I've provided a suggestion to fix this. Overall, this is a valuable contribution.
vllm/v1/worker/gpu_model_runner.py
Outdated
self.cudagraph_batch_sizes = [
    size for size in self.cudagraph_batch_sizes
    if size <= max_num_seqs]
This change correctly filters `self.cudagraph_batch_sizes` to prevent capturing graphs for sizes larger than `max_num_seqs` for `PURE_DECODE_ONLY` backends. However, the `pad_for_cudagraph` method, which is used at runtime to determine the padded graph size, relies on a mapping (`bs_to_padded_graph_size`) that was initialized with the original, unfiltered `cudagraph_batch_sizes`.

This discrepancy can lead to a `KeyError` at runtime. For example, if a batch with a given `num_decodes` is processed, `pad_for_cudagraph` might return a padded size that was filtered out and for which no CUDA graph was captured. This will cause a lookup failure in `_decode_wrappers_cudagraph`.

To fix this, you should re-initialize the padding map after filtering `self.cudagraph_batch_sizes`.
Suggested change:

self.cudagraph_batch_sizes = [
    size for size in self.cudagraph_batch_sizes
    if size <= max_num_seqs
]
self.vllm_config.compilation_config.init_with_cudagraph_sizes(
    self.cudagraph_batch_sizes)
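To illustrate the failure mode, here is a minimal, self-contained sketch of how the padded-size lookup can go stale; the dict comprehension and the `decode_wrappers` stand-in are assumptions for illustration, not vLLM's actual data structures:

```python
# Hypothetical illustration: the padding map is built from the ORIGINAL
# capture sizes, so filtering cudagraph_batch_sizes afterwards without
# rebuilding it can hand out a padded size that was never captured.
original_sizes = [1, 2, 4, 8, 16]                      # assumed capture sizes
bs_to_padded_graph_size = {
    bs: min(s for s in original_sizes if s >= bs)
    for bs in range(1, max(original_sizes) + 1)
}

max_num_seqs = 2
cudagraph_batch_sizes = [s for s in original_sizes if s <= max_num_seqs]
decode_wrappers = {s: object() for s in cudagraph_batch_sizes}  # stand-in wrappers

padded = bs_to_padded_graph_size[3]    # -> 4, a size that was filtered out
print(padded in decode_wrappers)       # False: would be a KeyError at runtime
```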
Good catch for `pad_for_cudagraph`, though I think it would not affect the final correctness.
I find that the following hangs:
VLLM_ATTENTION_BACKEND=FLASHINFER vllm serve models/Llama-3.1-8B-Instruct --no-enable-prefix-caching --compilation-config='{"full_cuda_graph": true}' --max-num-seqs 2
at this point:
Capturing CUDA graph shapes: 100%|████████| 2/2 [00:00<00:00, 2.42it/s]
INFO 07-22 10:32:54 [gpu_model_runner.py:2404] Graph capturing finished in 1 secs, took 0.42 GiB
It just hangs here - could this be related?
If I remove the `--max-num-seqs` flag then it works fine, so I think it is indeed related.
I find that overriding the GPU model runner's `cudagraph_batch_sizes` is enough. The `vllm_config.compilation_config.init_with_cudagraph_sizes` method does not actually override the compilation config's `cudagraph_batch_sizes` after its first call.
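A minimal sketch of the behavior being described, assuming `init_with_cudagraph_sizes` guards on an already-populated list; the guard is my assumption about why the second call is a no-op, not a copy of vLLM's implementation:

```python
from typing import Optional

class FakeCompilationConfig:
    """Illustrative stand-in, not vLLM's CompilationConfig."""

    def __init__(self) -> None:
        self.cudagraph_capture_sizes: Optional[list] = None

    def init_with_cudagraph_sizes(self, sizes: list) -> None:
        # Assumed guard: only the first call takes effect.
        if self.cudagraph_capture_sizes is None:
            self.cudagraph_capture_sizes = sorted(sizes)

cfg = FakeCompilationConfig()
cfg.init_with_cudagraph_sizes([1, 2, 4, 8])
cfg.init_with_cudagraph_sizes([1, 2])        # second call does not override
print(cfg.cudagraph_capture_sizes)           # [1, 2, 4, 8]
```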
Hmm unfortunately it doesn't happen on main + FlashAttention. The hang is 100% reproducible using the code from this PR.
I will try my best to figure it out.
I have tested that `--max-num-seqs` values of [2, 4, 8, 16, 24, 32, 40] lead to hangs, while [1, 48, 56, ...] work normally. The hang occurs in a final dummy_run after all capturing in gpu_worker.py, around lines 285~292, which runs into cudagraph replay (num_tokens = max_num_seqs) without creating attn_metadata. I guess something weird is happening inside FlashInfer.
@tdoublep Take the new fix! It should be fine now. Could you please also test if it works for you?
Yes, seems to work now. Thanks!
This PR will be super-helpful for enabling full (decode-only) CUDA graphs for hybrid (mamba/attention) models in V1, where right now we need to use FlashInfer. I am testing these changes with my branch now.
A few questions, but otherwise looks good. I'm keen to see this PR merged because it should push us close to the point where we can deprecate V0 for hybrid models (which currently require FlashInfer).
vllm/v1/worker/gpu_worker.py
Outdated
# Always activate creating attn_cudagraphs for dummy run to avoid
# illegal memory access for full cudagraph.
Could you explain a bit more why this change is needed?
Sure! The current `dummy_run` is not aware of whether it is warming up or is expected to trigger a cudagraph. It doesn't set `skip_cuda_graph` in the forward context, so it blindly expects model executions to go through the cudagraph. However, even after the full cudagraphs are captured (the buffer addresses are fixed), if `attn_metadata` is not built correctly in `dummy_run` (nothing has gone through FlashInfer's plan function), the replay may read incorrect values from these buffers and can potentially fall into an infinite loop. I think always activating this part is also not bad for piecewise cudagraph.
Hey, I just found that always activating it here causes CI failures for FlexAttention, so I have to enable this only when `full_cuda_graph` is set and eager mode is not enforced.
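As a rough, hypothetical sketch of the condition described in this thread (the class and helper names below are stand-ins, not vLLM's actual dummy-run API): build decode attention metadata for the dummy run only when full cudagraphs are enabled and eager mode is not enforced, so the backend's plan step fills the persistent buffers a captured graph will read.

```python
from typing import Optional

class FakeDecodeMetadata:
    """Stand-in for the decode attn_metadata a backend like FlashInfer builds."""

    def __init__(self, num_reqs: int) -> None:
        # In the real backend this is roughly where plan() would run and
        # write valid values into the buffers the captured graph reads;
        # skipping it can leave stale data and hang the replay.
        self.seq_lens = [1] * num_reqs

def make_dummy_run_metadata(num_reqs: int,
                            full_cuda_graph: bool,
                            enforce_eager: bool) -> Optional[FakeDecodeMetadata]:
    if full_cuda_graph and not enforce_eager:
        return FakeDecodeMetadata(num_reqs)
    # Piecewise/eager dummy runs keep skipping attention metadata
    # (always building it broke FlexAttention in CI, per the comment above).
    return None

print(make_dummy_run_metadata(2, full_cuda_graph=True, enforce_eager=False).seq_lens)
print(make_dummy_run_metadata(2, full_cuda_graph=False, enforce_eager=False))
```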
vllm/v1/attention/backends/utils.py
Outdated
"""Cudagraph supported for pure decode, need to use piecewise | ||
cudagraph or no cudagraph for mixed prefill-decode batches""" |
> need to use piecewise cudagraph or no cudagraph for mixed prefill-decode batches

In the mixed prefill-decode case: when will piecewise cudagraph be used, and when will no cudagraph be used?
I am sorry, this comment was taken directly from #20059; I didn't realize that no piecewise cudagraph exists in this PR when `full_cuda_graph` is enabled. That is only possible after that PR, which introduces a new `cudagraph_mode` config option that decouples the cudagraph logic from vLLM compilation. In that PR, when `cudagraph_mode` is `FULL`, it runs a full cudagraph for pure decode if the attention backend's support is `PURE_DECODE_ONLY`, and falls back to piecewise cudagraph for other situations that are incompatible with full cudagraph when vLLM compilation is on. However, if vLLM compilation is disabled, it simply falls back to no cudagraph.
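For concreteness, a hedged sketch of the dispatch just described; the enum and function names follow the discussion of #20059 but are assumptions, not code from either PR:

```python
from enum import Enum, auto

class CUDAGraphMode(Enum):
    NONE = auto()
    PIECEWISE = auto()
    FULL = auto()

class AttentionCGSupport(Enum):
    NEVER = auto()
    PURE_DECODE_ONLY = auto()
    ALWAYS = auto()

def pick_runtime_mode(configured: CUDAGraphMode,
                      attn_support: AttentionCGSupport,
                      pure_decode_batch: bool,
                      compilation_enabled: bool) -> CUDAGraphMode:
    """Pick the cudagraph mode for one batch, per the behavior described above."""
    if configured == CUDAGraphMode.FULL:
        if attn_support == AttentionCGSupport.ALWAYS or (
                attn_support == AttentionCGSupport.PURE_DECODE_ONLY
                and pure_decode_batch):
            return CUDAGraphMode.FULL
        # Incompatible batch (e.g. mixed prefill-decode): piecewise if vLLM
        # compilation is on, otherwise no cudagraph at all.
        return (CUDAGraphMode.PIECEWISE if compilation_enabled
                else CUDAGraphMode.NONE)
    return configured

print(pick_runtime_mode(CUDAGraphMode.FULL, AttentionCGSupport.PURE_DECODE_ONLY,
                        pure_decode_batch=False, compilation_enabled=True))
```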
It's fixed now
Signed-off-by: fhl <2410591650@qq.com>
Essential Elements of an Effective PR Description Checklist
- Documentation updates such as supported_models.md and examples for a new model.

Purpose
This PR is split out from #20059 to support full cudagraph for FlashInfer (pure decode only): pure decode batches run with a full cudagraph, while mixed prefill-decode batches fall back to no cudagraph. Hopefully this can land before #20059.
Details include:
This PR also fixes a potential bug originally from #18581, where an assertion error is raised during capture if `max_capture_size` is greater than `max_num_reqs`. To resolve this, a new enum type `AttentionCGSupport` (adapted from #20059) is introduced to distinguish how a backend supports cudagraph, so that `cudagraph_batch_sizes` can be overwritten to contain no sizes greater than `max_num_seqs`.
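As an illustration of that fix, here is a minimal sketch of the `AttentionCGSupport` idea and the capture-size filtering; the member names follow the PR text, while the function body is an assumption for illustration:

```python
from enum import Enum, auto

class AttentionCGSupport(Enum):
    NEVER = auto()              # backend cannot run under full cudagraph
    PURE_DECODE_ONLY = auto()   # full cudagraph only for pure-decode batches
    ALWAYS = auto()             # full cudagraph for any batch

def filter_capture_sizes(cudagraph_batch_sizes: list,
                         attn_cg: AttentionCGSupport,
                         max_num_seqs: int) -> list:
    # A PURE_DECODE_ONLY backend never sees more than max_num_seqs decode
    # requests in a captured batch, so larger capture sizes are dropped to
    # avoid the assertion error during capture mentioned above.
    if attn_cg == AttentionCGSupport.PURE_DECODE_ONLY:
        return [s for s in cudagraph_batch_sizes if s <= max_num_seqs]
    return list(cudagraph_batch_sizes)

print(filter_capture_sizes([1, 2, 4, 8, 16], AttentionCGSupport.PURE_DECODE_ONLY, 2))
```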
NOTE: Currently, manually setting the max capture size seems impossible after the introduction of Pydantic, which constrains the `--cuda_graph_sizes` config to `int` type.

Limitation:
Test Plan
lm_eval, benchmark_serving
Test Result
piecewise cudagraph (main branch)
vllm ({'pretrained': '/root/models/Qwen2.5-7B-Instruct-GPTQ-Int4', 'gpu_memory_utilization': 0.9}), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: 1
full cudagraph (this PR)
vllm ({'pretrained': '/root/models/Qwen2.5-7B-Instruct-GPTQ-Int4', 'gpu_memory_utilization': 0.9, 'compilation_config': {'full_cuda_graph': True}}), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: 1
While the median ITL decreases by 4.5%, the overall performance gain is small. I believe this can be further optimized after #20059 lands and more of the CPU overheads mentioned above are reduced.
(Optional) Documentation Update