[Nvidia] Integrate cudnn prefill paged attention kernel for head_dim == 128 models, like Llama family #20850
Conversation
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default; only a limited subset of checks runs automatically. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. 🚀
Summary of Changes
Hello @elfiegg, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request integrates NVIDIA's cuDNN paged KV cache API to accelerate prefill operations, particularly targeting Blackwell architectures. The primary goal is to significantly improve throughput, with the PR description reporting a 2x improvement in benchmarks. This feature is opt-in via a new environment variable and enhances the performance of both FlashInfer and MLA attention backends by leveraging cuDNN for prefill computations.
Highlights
- cuDNN Integration for Prefill: Implemented support for NVIDIA's cuDNN paged KV cache API to optimize prefill operations, specifically targeting Blackwell architectures for substantial throughput gains.
- Performance Improvement: Benchmarks provided in the PR description show a 2x throughput improvement when using the cuDNN prefill feature on Blackwell GPUs.
- Configurable via Environment Variable: A new environment variable, VLLM_USE_CUDNN_PREFILL, has been added, allowing users to enable or disable this cuDNN acceleration.
- Backend Enhancements: Both the FlashInfer and MLA attention backends have been updated with conditional logic to utilize the cuDNN prefill path when the feature is enabled and the model's head dimensions are supported (a minimal sketch of this gating follows the list).
- Metadata and Workspace Management: New metadata fields and workspace allocation logic have been introduced to facilitate the use of cuDNN prefill functions within the attention backends.
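The gating described above can be pictured with a minimal sketch. The helper name `use_cudnn_prefill` and the constant `SUPPORTED_CUDNN_HEAD_DIM` are illustrative placeholders, not identifiers from this PR; only the environment variable name and the head_dim == 128 constraint come from the PR itself, and the actual backend checks may differ:

```python
import os

# Illustrative sketch only; names below are placeholders, not the PR's code.
SUPPORTED_CUDNN_HEAD_DIM = 128


def use_cudnn_prefill(head_dim: int) -> bool:
    """Opt-in check: env var enabled and the model's head dim is supported."""
    enabled = bool(int(os.getenv("VLLM_USE_CUDNN_PREFILL", "0")))
    return enabled and head_dim == SUPPORTED_CUDNN_HEAD_DIM
```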
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.
Feature | Command | Description
---|---|---
Code Review | /gemini review | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state.
Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments.
Help | /gemini help | Displays a list of available commands.
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.
You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Footnotes
1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution. ↩
This pull request has merge conflicts that must be resolved before it can be merged.
Code Review
This pull request integrates the cuDNN paged KV cache API for Blackwell GPUs to improve prefill performance. A critical issue was identified in the flashinfer backend related to incorrect memory access when preparing the KV cache for the cuDNN kernel. A configuration inconsistency for the new environment variable was also noted.
k_cache = kv_cache[:, 0].as_strided(
    (total_num_pages, num_kv_heads, page_size, head_dim), (
        page_size * num_kv_heads * head_dim,
        head_dim,
        num_kv_heads * head_dim,
        1,
    ))
The as_strided call for k_cache has an incorrect stride for the first dimension. The first-dimension stride of the kv_cache[:, 0] view is kv_cache.stride(0), but page_size * num_kv_heads * head_dim is used instead. This will lead to incorrect memory access. The correct first stride should be kv_cache.stride(0).
Suggested change:

k_cache = kv_cache[:, 0].as_strided(
    (total_num_pages, num_kv_heads, page_size, head_dim), (
        kv_cache.stride(0),
        head_dim,
        num_kv_heads * head_dim,
        1,
    ))
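A quick standalone check makes the stride mismatch concrete. It assumes the kv_cache layout implied by the indexing above, (total_num_pages, 2, page_size, num_kv_heads, head_dim) with K and V interleaved along dim 1; the sizes are arbitrary:

```python
import torch

# Arbitrary sizes for illustration.
total_num_pages, page_size, num_kv_heads, head_dim = 4, 16, 8, 128

# Assumed layout: K and V interleaved along dim 1.
kv_cache = torch.zeros(total_num_pages, 2, page_size, num_kv_heads, head_dim)

# The kv_cache[:, 0] view keeps the parent's page stride, which spans both
# the K plane and the V plane of each page.
assert kv_cache[:, 0].stride(0) == kv_cache.stride(0)
assert kv_cache.stride(0) == 2 * page_size * num_kv_heads * head_dim

# The stride passed in the original code is only half of that, so every page
# after the first would be read from the wrong offset.
assert page_size * num_kv_heads * head_dim != kv_cache.stride(0)
```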
v_cache = kv_cache[:, 1].as_strided(
    (total_num_pages, num_kv_heads, page_size, head_dim), (
        page_size * num_kv_heads * head_dim,
        head_dim,
        num_kv_heads * head_dim,
        1,
    ))
Similar to k_cache, the as_strided call for v_cache has an incorrect stride for the first dimension. It should also be kv_cache.stride(0).
Suggested change:

v_cache = kv_cache[:, 1].as_strided(
    (total_num_pages, num_kv_heads, page_size, head_dim), (
        kv_cache.stride(0),
        head_dim,
        num_kv_heads * head_dim,
        1,
    ))
# Controls whether or not to use cudnn prefill
"VLLM_USE_CUDNN_PREFILL":
lambda: bool(int(os.getenv("VLLM_USE_CUDNN_PREFILL", "1")))
The default value for VLLM_USE_CUDNN_PREFILL is False in the VllmEnvs TypedDict (line 142), but the default value in os.getenv is "1", which evaluates to True. This inconsistency can lead to unexpected behavior where the feature is enabled by default when the environment variable is not explicitly set. To maintain consistency, the default value in os.getenv should be "0".
Suggested change:

lambda: bool(int(os.getenv("VLLM_USE_CUDNN_PREFILL", "0")))
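For reference, a standalone snippet showing why the two defaults disagree when the variable is unset:

```python
import os

os.environ.pop("VLLM_USE_CUDNN_PREFILL", None)  # ensure the variable is unset

# With a default of "1", the feature is silently enabled when unset.
assert bool(int(os.getenv("VLLM_USE_CUDNN_PREFILL", "1"))) is True

# With a default of "0", the behavior matches a declared default of False.
assert bool(int(os.getenv("VLLM_USE_CUDNN_PREFILL", "0"))) is False
```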
Purpose
Integrate the cuDNN paged KV cache API for Blackwell. Observed a 2x throughput improvement using the command below:
VLLM_USE_CUDNN_PREFILL=1 python3 benchmarks/benchmark_throughput.py --model=deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct --quantization=fp8 --trust-remote-code --enable-chunked-prefill --input-len 1000 --output-len 1000 --num-prompts 300
Before:
Throughput: 2.31 requests/s, 4612.45 total tokens/s, 2308.06 output tokens/s
After:
Throughput: 4.73 requests/s, 9463.29 total tokens/s, 4732.99 output tokens/s
Test Plan
Test Result
(Optional) Documentation Update