[Benchmark] Add expert parallel support to MoE benchmark #20876

Open · wants to merge 3 commits into main

Conversation

@Chen-zexi Chen-zexi commented Jul 13, 2025

Essential Elements of an Effective PR Description Checklist

  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.

Purpose

This PR enhances the MoE kernel benchmark script (benchmarks/kernels/benchmark_moe.py) by adding support for tuning configurations with expert parallelism (EP).

Previously, the script only supported tuning for tensor parallelism (TP), where the intermediate size is sharded. This change introduces an --enable-expert-parallel flag, which modifies the benchmark to simulate an EP environment by sharding the experts themselves across the available devices.
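
To make the distinction concrete, the sketch below contrasts the two sharding strategies. It is illustrative only; all names and values are assumptions for this example, not the script's actual identifiers.

# Illustrative sketch: how TP and EP divide a MoE layer differently.
# All names and values here are example assumptions, not the script's code.
intermediate_size = 768       # per-expert FFN hidden width (example)
global_num_experts = 128      # total number of experts (example)
tp_size = 2                   # number of GPUs

# Tensor parallel: every GPU holds all experts, but each expert's
# intermediate dimension is split across GPUs.
shard_intermediate_size = intermediate_size // tp_size   # 384 per GPU

# Expert parallel: every GPU holds complete experts, but only a subset.
num_local_experts = global_num_experts // tp_size        # 64 experts per GPU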

Test Plan

This command should execute the tuning process and save a configuration file named according to the number of local experts per device.

## Test Command 1
CUDA_VISIBLE_DEVICES=0,1 python benchmarks/kernels/benchmark_moe.py \
       --model Qwen/Qwen3-30B-A3B \
       --enable-expert-parallel \
       --tp-size 2 \
       --tune

This command is expected to fail, exercising the error handling:

## Test Command 2
CUDA_VISIBLE_DEVICES=0,1 python benchmarks/kernels/benchmark_moe.py \
       --model Qwen/Qwen3-30B-A3B \
       --enable-expert-parallel \
       --tp-size 1 \
       --tune

Test Results

The test commands above were executed, and the results are as expected.

  • Test command 1 runs successfully, distributing the benchmark across the 2 specified GPUs and generating a valid tuning configuration file.

  • Test command 2 correctly raises a ValueError with the following message (a sketch of the corresponding check appears after the output):

ValueError: When running with --enable-expert-parallel, the specified --tp-size must be equal to the number of available GPUs. Got --tp-size=1 and 2 GPUs.
To tune for a specific number of GPUs for expert parallel, please restrict the visible devices using the CUDA_VISIBLE_DEVICES
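
For reference, the check that produces this error presumably resembles the sketch below; the exact code in the PR may differ, and validate_ep_args and num_gpus are hypothetical names introduced here for illustration.

import torch

def validate_ep_args(args) -> None:
    """Hedged sketch of the EP argument validation, not the PR's literal code.

    args is an argparse.Namespace with enable_expert_parallel and tp_size.
    """
    num_gpus = torch.cuda.device_count()  # honors CUDA_VISIBLE_DEVICES
    if args.enable_expert_parallel and args.tp_size != num_gpus:
        raise ValueError(
            "When running with --enable-expert-parallel, the specified "
            "--tp-size must be equal to the number of available GPUs. "
            f"Got --tp-size={args.tp_size} and {num_gpus} GPUs.\n"
            "To tune for a specific number of GPUs for expert parallel, "
            "please restrict the visible devices using the "
            "CUDA_VISIBLE_DEVICES")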

Signed-off-by: Alan Chen <zc2610@nyu.edu>
👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run fastcheck CI, which runs a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to be added to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@gemini-code-assist gemini-code-assist bot left a comment

Summary of Changes

Hello @Chen-zexi, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly extends the benchmark_moe.py script by introducing comprehensive support for Expert Parallelism (EP). Previously limited to Tensor Parallelism (TP), the script can now accurately simulate and tune MoE kernels in a distributed EP environment. This involves adapting the benchmark logic to handle local expert counts per device, setting up torch.distributed for multi-GPU communication, and adding necessary command-line arguments and validation to facilitate EP-specific benchmarking.
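
The torch.distributed setup described above typically looks like the sketch below. The function name init_distributed and the default port are assumptions; worker_id and total_workers mirror the parameters listed in the highlights that follow.

import torch
import torch.distributed as dist

def init_distributed(worker_id: int, total_workers: int,
                     master_ip: str, master_port: int = 29500) -> None:
    """Illustrative torch.distributed setup for Ray workers.

    Each Ray worker is pinned to a single GPU (Ray sets
    CUDA_VISIBLE_DEVICES per worker), so every rank uses device 0.
    """
    dist.init_process_group(
        backend="nccl",
        init_method=f"tcp://{master_ip}:{master_port}",
        rank=worker_id,
        world_size=total_workers,
    )
    torch.cuda.set_device(0)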

Highlights

  • Expert Parallelism Support: The benchmark_moe.py script now supports an --enable-expert-parallel flag, allowing the benchmark to simulate an expert-parallel environment by sharding experts across available devices.
  • Distributed Expert Mapping: A new build_expert_map utility function has been introduced to correctly calculate the number of local experts and create a mapping of global experts across distributed ranks, essential for EP configurations (see the sketch after this list).
  • MoE Kernel Adaptation: The benchmark_config function and underlying fused_moe calls have been updated to differentiate between global and local expert counts, ensuring correct tensor sizing and parameter passing for expert-parallel scenarios.
  • Multi-GPU Worker Setup: The BenchmarkWorker class has been enhanced to accept expert parallelism parameters (enable_expert_parallel, worker_id, total_workers) and includes logic to initialize torch.distributed for inter-GPU communication among Ray workers.
  • Argument Validation: New validation checks are in place to ensure that when expert parallelism is enabled, the --tp-size argument matches the number of available GPUs and is at least 2, preventing misconfigurations and guiding users to proper setup.
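
Since build_expert_map's body is not quoted in this thread, the following is a hedged sketch of what such a utility could look like, based on the description above; the signature and the -1 convention for remote experts are assumptions, not verified against the diff.

import torch

def build_expert_map(global_num_experts: int, ep_size: int,
                     ep_rank: int) -> tuple[int, torch.Tensor]:
    """Sketch: local expert count plus a global-to-local expert id map.

    Entries of the map hold the local index for experts hosted on this
    rank and -1 for experts hosted on other ranks.
    """
    num_local_experts = global_num_experts // ep_size
    expert_map = torch.full((global_num_experts,), -1, dtype=torch.int32)
    start = ep_rank * num_local_experts
    expert_map[start:start + num_local_experts] = torch.arange(
        num_local_experts, dtype=torch.int32)
    return num_local_experts, expert_map

Weight tensors are then sized with num_local_experts rows while routing still scores every global expert; the map tells the kernel which tokens each rank can serve locally.
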
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., commenting on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in issue comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check its output and use code with caution.

@mergify mergify bot added the performance (Performance-related issues) label on Jul 13, 2025
@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces support for expert parallelism (EP) in the MoE kernel benchmark script. The changes are well-structured and the implementation appears correct. I've added a couple of suggestions to improve the robustness of network-related code and to enhance the clarity of an error message for a better user experience.

Comment on lines 479 to 482
def get_node_ip(self) -> str:
    """Get the IP address of this worker node."""
    import socket
    return socket.gethostbyname(socket.gethostname())

Severity: medium

The current method for obtaining the node's IP address (socket.gethostbyname(socket.gethostname())) can be unreliable in environments with multiple network interfaces, which might lead to issues in setting up the distributed environment. Ray provides a more robust utility, ray.util.get_node_ip_address(), which is specifically designed to correctly identify the node's IP within a Ray cluster. Using this utility would enhance the reliability of IP address resolution.

Suggested change:

-def get_node_ip(self) -> str:
-    """Get the IP address of this worker node."""
-    import socket
-    return socket.gethostbyname(socket.gethostname())
+def get_node_ip(self) -> str:
+    """Get the IP address of this worker node."""
+    import ray.util
+    return ray.util.get_node_ip_address()

"please restrict the visible devices using the CUDA_VISIBLE_DEVICES"
)
if args.tp_size < 2:
raise ValueError("Expert parallel requires tensor parallel size >= 2")

Severity: medium

The error message "Expert parallel requires tensor parallel size >= 2" might be confusing. When --enable-expert-parallel is active, --tp-size effectively represents the number of GPUs used for expert parallelism, not tensor parallelism. To improve clarity and user experience, the message should explicitly refer to the GPU requirement.

Suggested change:

-        raise ValueError("Expert parallel requires tensor parallel size >= 2")
+        raise ValueError(f"Expert parallel benchmark requires at least 2 GPUs, but got --tp-size={args.tp_size}.")

@Chen-zexi Chen-zexi changed the title from "Implement EP for benchmark_moe" to "Add expert parallel support to MoE benchmark" on Jul 13, 2025
@Chen-zexi Chen-zexi changed the title from "Add expert parallel support to MoE benchmark" to "[Benchmark] Add expert parallel support to MoE benchmark" on Jul 13, 2025