[Model] Add Ernie4.5 and Ernie4.5MoE Model Support #20220
Conversation
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. 🚀
Summary of Changes
Hello @CSWYF3634076, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request significantly expands vLLM's model compatibility by integrating the Ernie 4.5 and Ernie 4.5 MoE architectures. The changes involve adding the necessary model definitions and logic to enable efficient inference for these new large language models, including their specialized Mixture of Experts components.
Highlights
- New Model Support: I've added comprehensive support for the Ernie 4.5 and Ernie 4.5 MoE models. This includes their specific architectural components like MLP, Attention, and DecoderLayer, tailored for vLLM's efficient inference.
- Mixture of Experts (MoE) Integration: For the Ernie 4.5 MoE model, I've implemented the Mixture of Experts (MoE) layer, incorporating support for both expert routing and shared experts, which is crucial for its unique architecture.
- Documentation and Test Coverage: The pull request updates the documentation to list the newly supported Ernie models and adds corresponding entries to the test registry. This ensures that the models are discoverable and can be loaded correctly for testing and usage.
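For readers unfamiliar with the shared-expert pattern mentioned above, here is a minimal PyTorch sketch of the idea. It is illustrative only: the class and parameter names are invented for this sketch, and the PR's actual implementation builds on vLLM's optimized MoE layers rather than this naive per-expert loop.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoEWithSharedExperts(nn.Module):
    """Toy MoE block: a softmax router selects top-k routed experts per
    token, and always-on shared experts are added to the routed result."""

    def __init__(self, hidden_size: int, intermediate_size: int,
                 num_experts: int, top_k: int, shared_intermediate_size: int):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(hidden_size, num_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(hidden_size, intermediate_size),
                          nn.SiLU(),
                          nn.Linear(intermediate_size, hidden_size))
            for _ in range(num_experts))
        # Shared experts see every token, regardless of the router's choice.
        self.shared = nn.Sequential(
            nn.Linear(hidden_size, shared_intermediate_size),
            nn.SiLU(),
            nn.Linear(shared_intermediate_size, hidden_size))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        tokens = x.reshape(-1, x.shape[-1])            # (num_tokens, hidden)
        probs = F.softmax(self.gate(tokens), dim=-1)   # routing probabilities
        weights, idx = torch.topk(probs, self.top_k, dim=-1)
        weights = weights / weights.sum(dim=-1, keepdim=True)
        routed = torch.zeros_like(tokens)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e                  # tokens routed to expert e
                if mask.any():
                    routed[mask] += weights[mask, k].unsqueeze(-1) * expert(tokens[mask])
        return (routed + self.shared(tokens)).view_as(x)


# Example: 2 sequences of 8 tokens, hidden size 64, 4 experts, top-2 routing.
moe = MoEWithSharedExperts(64, 128, num_experts=4, top_k=2,
                           shared_intermediate_size=128)
out = moe(torch.randn(2, 8, 64))   # same shape as the input: (2, 8, 64)
```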
Code Review
This pull request introduces support for Ernie4.5 and Ernie4.5MoE models, including their implementations, documentation updates, and test registry entries. The code is well-structured and aligns with existing patterns in the vLLM codebase. I've provided a minor suggestion to enhance code clarity and consistency. Overall, this is a solid contribution.
Thanks for adding these models! I've added some initial comments about the dense model, PTAL!
This pull request has merge conflicts that must be resolved before it can be merged.
I merged this PR into v0.9.1 and served the ERNIE-4.5-21B-A3B-PT model. The output is garbled text, like the text below. request: response:
Same issue, and I'm using the branch.
I just executed the entire process once and didn't reproduce it. Could you please post your vllm serve command?
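(For reference, a typical serving command looks like the sketch below; the checkpoint id and flags are illustrative, not the reporter's actual command.)

```bash
# Illustrative only: substitute the actual ERNIE-4.5-21B-A3B-PT checkpoint path.
vllm serve baidu/ERNIE-4.5-21B-A3B-PT \
    --trust-remote-code \
    --tensor-parallel-size 1 \
    --port 8000
```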
I have reproduced the issue; it occurred after I rebased onto the main branch. Please use branch CSWYF3634076:ernie for now. I am currently investigating the cause and apologize for any inconvenience caused.
Adapted from this PR: vllm-project/vllm#20220 --------- Co-authored-by: Corle-hyz <heyongzhe18@mails.ucas.edu.cn>
Some nit comments; please provide feedback on https://github.com/vllm-project/vllm/pull/20220/files#r2173637449
May I ask whether the recommended hyperparameters are being used? I have created an issue specifically for collecting questions, and you can continue asking there: CSWYF3634076#1
LGTM now! Thanks for adding this model!
When I was testing the model Ernie4_5_VLMoeForConditionalGeneration, I found that the visual model did not seem to be supported: ValueError: Ernie4_5_VLMoeForConditionalGeneration has no vLLM implementation and the Transformers implementation is not compatible with vLLM. Try setting VLLM_USE_V1=0.
vLLM support for the Ernie 4.5 multimodal model is still under development and will be provided in the near future. In the meantime, you can use FastDeploy to deploy the Paddle version.
FastDeploy has other bugs, such as memory-sharing issues.
@SongDI911 Can you open an issue in FastDeploy describing the problem, or join our Ernie user WeChat group?
Here: PaddlePaddle/FastDeploy#2683
Signed-off-by: wangyafeng <wangyafeng@baidu.com>
Purpose
Support the Baidu Ernie 4.5 models in vLLM.
In this PR, I provide the implementation of the Ernie4.5 and Ernie4.5MoE model structures in two files: ernie45.py and ernie45_moe.py.
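Once this lands, loading the models should follow the standard vLLM offline-inference flow; here is a minimal sketch (the checkpoint id is illustrative, so point it at the Ernie 4.5 weights you actually use):

```python
from vllm import LLM, SamplingParams

# Illustrative checkpoint id; substitute your local or Hub Ernie 4.5 weights.
llm = LLM(model="baidu/ERNIE-4.5-21B-A3B-PT", trust_remote_code=True)
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

outputs = llm.generate(["Introduce the ERNIE model family in one sentence."], params)
for out in outputs:
    print(out.outputs[0].text)
```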