[Perf][Spec Decode] EAGLE Kernel Fusion + Synchronization Overhead Reduction #20078


Open · wants to merge 6 commits into base: main

Conversation


@leo-cf-tian leo-cf-tian commented Jun 25, 2025

I am an intern at CentML and I worked on this PR with @benchislett.

This PR introduces two optimisations for the EAGLE code path in the V1 engine which together improve decoding speed by ~4-5% across various batch sizes.

Fused state-update kernels

During the main EAGLE loop, updates to input_ids, positions, and other small tensors each launched a tiny CUDA kernel. Profiling showed the GPU was idling on kernel-launch latency more than it was computing.

We replaced these individual operations with a single custom Triton kernel (changed only slightly from #18221). Because all tensors share the same batch size, the work can be fused trivially, eliminating the launch overhead.
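For illustration, here is a minimal sketch of such a fused update in Triton. The signature and the set of tensors are simplified assumptions for this example; the PR's actual advance_state_kernel fuses more per-request state than shown here.

```python
import triton
import triton.language as tl


@triton.jit
def advance_state_sketch(
    input_ids_ptr,    # [num_reqs] next input token per request
    positions_ptr,    # [num_reqs] current position per request
    sampled_ids_ptr,  # [num_reqs] tokens sampled in the previous step
    num_reqs,
    BLOCK_SIZE: tl.constexpr,
):
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < num_reqs
    # One launch updates every per-request tensor, instead of one
    # kernel launch per tensor update.
    sampled = tl.load(sampled_ids_ptr + offsets, mask=mask)
    pos = tl.load(positions_ptr + offsets, mask=mask)
    tl.store(input_ids_ptr + offsets, sampled, mask=mask)
    tl.store(positions_ptr + offsets, pos + 1, mask=mask)


def advance_state(input_ids, positions, sampled_ids):
    num_reqs = input_ids.numel()
    grid = (triton.cdiv(num_reqs, 1024),)
    advance_state_sketch[grid](input_ids, positions, sampled_ids,
                               num_reqs, BLOCK_SIZE=1024)
```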

Result: ~2.5% higher decoding speed (TPOT).

Collapse GPU-CPU synchronisations

Input pre-processing for EAGLE previously performed multiple blocking calls, with the following pattern:

Rejection sampling on GPU → sync → CPU computation → sync → GPU upload

The CPU computation between the syncs sits on the critical path, starving the GPU.

We moved several of these computations onto the GPU by precomputing certain values. Where necessary, the same calculations are also duplicated on the CPU so that both devices compute the values in parallel; this costs little because the calculations are relatively inexpensive. Metadata that depends on GPU values is collected with one final sync before the EAGLE model executes.
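A hedged sketch of the resulting pattern, assuming a [num_reqs, k] tensor of sampled tokens with a -1 sentinel for rejected slots (names are illustrative, not the PR's actual variables):

```python
import torch

PLACEHOLDER_TOKEN_ID = -1  # assumed sentinel marking rejected draft tokens


def prepare_eagle_metadata(sampled_token_ids: torch.Tensor,
                           seq_lens: torch.Tensor) -> int:
    """Update per-request metadata on the GPU, syncing exactly once.

    sampled_token_ids: [num_reqs, k] draft tokens; rejected slots hold
                       PLACEHOLDER_TOKEN_ID
    seq_lens:          [num_reqs] running sequence lengths, on the GPU
    """
    valid_mask = sampled_token_ids != PLACEHOLDER_TOKEN_ID
    num_accepted = valid_mask.sum(dim=1)   # stays on the GPU, no sync
    seq_lens += num_accepted               # in-place GPU update
    # The only host-device sync, issued right before the EAGLE model runs.
    return int(seq_lens.max().item())
```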

This change also converts several looped operations into vectorized numpy operations, which the move from CPU to GPU code made convenient.
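For instance, a per-request Python loop that builds flat token indices can be replaced by a vectorized numpy equivalent (the variable names here are illustrative):

```python
import numpy as np

num_tokens = np.array([3, 1, 2])   # accepted tokens per request
starts = np.array([0, 8, 16])      # each request's base offset

# Looped version:
#   token_indices = []
#   for s, n in zip(starts, num_tokens):
#       token_indices.extend(range(s, s + n))

# Vectorized version: offset of each element within its request...
within = np.arange(num_tokens.sum()) - np.repeat(
    np.cumsum(num_tokens) - num_tokens, num_tokens)
# ...added to the request's base offset.
token_indices = np.repeat(starts, num_tokens) + within
# token_indices == [0, 1, 2, 8, 16, 17]
```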

Result: Host-device syncs reduced from 3-4 to 1, yielding another ~2% improvement.

Results

In total, this yields a ~4-5% increase in token throughput. Below are screenshots of nsys profile traces, which show the reduced time between the target-model execution and the EAGLE model executions.

Before: [nsys trace screenshot]

After: [nsys trace screenshot]


👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run fastcheck CI, which runs a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀


@gemini-code-assist gemini-code-assist bot left a comment


Summary of Changes

Hello @leo-cf-tian, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the performance of the EAGLE speculative decoding path in the V1 engine by introducing two key optimizations: fusing multiple small CUDA kernels into a single Triton kernel and reducing GPU-CPU synchronization overhead. These changes collectively lead to a notable 4-5% improvement in token throughput across various batch sizes.

Highlights

  • Kernel Fusion for State Updates: Replaced numerous small CUDA kernels responsible for updating input IDs, positions, and other tensors with a single, custom Triton kernel (advance_state_kernel). This eliminates kernel-launch latency, contributing to a ~2.5% speedup in decoding by performing these operations directly on the GPU.
  • Reduced GPU-CPU Synchronizations: Critical path CPU computations, which previously caused GPU idling due to multiple blocking calls, have been offloaded to the GPU. This involved precomputing values on the GPU and, where necessary, duplicating calculations on both CPU and GPU for parallel execution, reducing host-device syncs from 3-4 to 1 and yielding an additional ~2% performance gain.
  • Optimized Input Preparation: Moved the preparation of various input tensors and metadata for the EAGLE model, such as last_token_indices and seq_lens, to the GPU. This is encapsulated in the new EagleProposer.load_inputs method, further minimizing CPU-GPU data transfers and synchronization points.
  • Streamlined Token Sampling Logic: Refactored the process of determining next_token_ids and handling rejected tokens in speculative decoding to leverage GPU tensor operations (torch.where, torch.gather, valid_mask.sum), replacing previous Python-based loops and reducing CPU overhead (see the sketch after this list).
  • Dependency Updates: Updated requirements/test.txt to include new packages (async-timeout, exceptiongroup, toml) and adjusted existing dependency annotations, likely supporting new functionalities or updated libraries.
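As a hedged illustration of the token-sampling refactor above (shapes and the -1 rejection sentinel are assumptions for the example, not the PR's exact code):

```python
import torch

# [num_reqs, k] sampled tokens; -1 marks slots rejected during verification
sampled_token_ids = torch.tensor([[11, 12, -1, -1],
                                  [21, 22, 23, 24]])
valid_mask = sampled_token_ids != -1
num_valid = valid_mask.sum(dim=1)        # accepted count per request
last_valid = (num_valid - 1).clamp(min=0)
# Gather the last accepted token of each request as its next input token.
# (torch.where can supply a fallback when a row has no valid slot.)
next_token_ids = torch.gather(
    sampled_token_ids, 1, last_valid.unsqueeze(1)).squeeze(1)
# next_token_ids == tensor([12, 24])
```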

@leo-cf-tian leo-cf-tian changed the title EAGLE Kernel Fusion + Synchronization Overhead Reduction [Perf][Spec Decode] EAGLE Kernel Fusion + Synchronization Overhead Reduction Jun 25, 2025

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This PR introduces optimizations for the EAGLE code path, including fused state-update kernels and collapsing GPU-CPU synchronizations. The changes aim to improve decoding speed and token throughput. The code introduces a new Triton kernel, moves computations to the GPU, and optimizes operations for CPU-GPU conversion. Several suggestions were made to improve code clarity and maintainability, including adding docstrings, removing unused parameters, and refactoring duplicated logic.

@ekagra-ranjan

Nice job! Could you also share the exact command for running Nsight and the profiling script that it ran on? That would help in determining the batch size and other variables used in the profiling.

@leo-cf-tian leo-cf-tian force-pushed the eagle-fusion-sync-reduce branch from 39043e0 to b678b55 on June 26, 2025 15:34
@leo-cf-tian leo-cf-tian force-pushed the eagle-fusion-sync-reduce branch from b678b55 to 6f67282 on June 26, 2025 15:39
@leo-cf-tian (Author)

@ekagra-ranjan Here are the commands I used:

To run vLLM and nsys:
nsys profile -t cuda,nvtx,osrt,cudnn,cublas --cuda-graph-trace=node --force-overwrite true -o eagle3_baseprofile --trace-fork-before-exec true vllm serve "meta-llama/Llama-3.1-8B-Instruct" --max-num-seqs 128 --max-model-len 8192 --max-num-batched-tokens 8192 --no-enable-prefix-caching --disable-log-requests --speculative-config '{"method": "eagle3", "model": "yuhuili/EAGLE3-LLaMA3.1-Instruct-8B", "num_speculative_tokens": 4}'

To benchmark:

fib benchmark -n 40 --max-concurrent 1 -rps inf --dataset sharegpt --backend openai-chat --endpoint v1/chat/completions --seed 42


mergify bot commented Jul 2, 2025

This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @leo-cf-tian.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork
