
[Model] Add Ernie4.5 and Ernie4.5MoE Model Support #20220


Merged
merged 9 commits into vllm-project:main on Jul 2, 2025

Conversation

@CSWYF3634076 (Contributor) commented Jun 29, 2025

Purpose

Support the Baidu Ernie4.5 models in vLLM.
In this PR, I provide the implementation of the Ernie4.5 and Ernie4.5MoE model architectures in two files: ernie45.py and ernie45_moe.py.
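
For context, a minimal offline-inference sketch of loading one of these models with vLLM could look like the following; the model id is taken from the reports later in this thread, and the sampling values are illustrative assumptions rather than recommendations:

# Minimal sketch (not part of this PR): load an Ernie4.5 checkpoint through
# vLLM's offline API. Model id and sampling values are illustrative only.
from vllm import LLM, SamplingParams

llm = LLM(model="baidu/ERNIE-4.5-21B-A3B-PT", trust_remote_code=True)
params = SamplingParams(temperature=1.0, top_p=1.0, top_k=40, max_tokens=128)

for output in llm.generate(["你好"], params):
    print(output.outputs[0].text)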


👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run the fastcheck CI, which runs a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@gemini-code-assist (Contributor, bot) left a comment


Summary of Changes

Hello @CSWYF3634076, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly expands vLLM's model compatibility by integrating the Ernie 4.5 and Ernie 4.5 MoE architectures. The changes involve adding the necessary model definitions and logic to enable efficient inference for these new large language models, including their specialized Mixture of Experts components.

Highlights

  • New Model Support: I've added comprehensive support for the Ernie 4.5 and Ernie 4.5 MoE models. This includes their specific architectural components like MLP, Attention, and DecoderLayer, tailored for vLLM's efficient inference.
  • Mixture of Experts (MoE) Integration: For the Ernie 4.5 MoE model, I've implemented the Mixture of Experts (MoE) layer, incorporating support for both expert routing and shared experts, which is crucial for its architecture (see the illustrative sketch after this list).
  • Documentation and Test Coverage: The pull request updates the documentation to list the newly supported Ernie models and adds corresponding entries to the test registry. This ensures that the models are discoverable and can be loaded correctly for testing and usage.
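
To make the routed-plus-shared-experts idea above concrete, here is a generic PyTorch sketch of that pattern. It is an illustration only, not the Ernie4.5MoE code in this PR; the hidden size, expert counts, top-k and the SiLU MLP shape are placeholder assumptions.

# Generic illustration of an MoE block with routed experts plus shared experts.
# This is NOT the Ernie4.5MoE implementation from this PR; all sizes and counts
# below are placeholder assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoEWithSharedExperts(nn.Module):
    def __init__(self, hidden=1024, ffn=4096, n_experts=8, top_k=2, n_shared=1):
        super().__init__()
        self.top_k = top_k

        def make_ffn():
            return nn.Sequential(nn.Linear(hidden, ffn), nn.SiLU(),
                                 nn.Linear(ffn, hidden))

        self.router = nn.Linear(hidden, n_experts, bias=False)
        self.experts = nn.ModuleList([make_ffn() for _ in range(n_experts)])
        self.shared_experts = nn.ModuleList([make_ffn() for _ in range(n_shared)])

    def forward(self, x):
        # x: [num_tokens, hidden]
        probs = F.softmax(self.router(x), dim=-1)
        weights, chosen = torch.topk(probs, self.top_k, dim=-1)
        weights = weights / weights.sum(dim=-1, keepdim=True)  # renormalize top-k
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = chosen[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        for shared in self.shared_experts:  # shared experts process every token
            out = out + shared(x)
        return out

In vLLM itself the routed experts would typically go through fused MoE kernels rather than a per-expert Python loop, but the sketch captures the dataflow the highlight describes.
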
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request via creating an issue comment (i.e. comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

Feature | Command | Description
Code Review | /gemini review | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state.
Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments.
Help | /gemini help | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@mergify bot added the documentation (Improvements or additions to documentation) label on Jun 29, 2025
@gemini-code-assist (Contributor, bot) left a comment


Code Review

This pull request introduces support for Ernie4.5 and Ernie4.5MoE models, including their implementations, documentation updates, and test registry entries. The code is well-structured and aligns with existing patterns in the vLLM codebase. I've provided a minor suggestion to enhance code clarity and consistency. Overall, this is a solid contribution.

@Isotr0py (Collaborator) left a comment


Thanks for adding these models! Just added some initial comments about the dense model, PTAL!

@mergify bot commented Jun 30, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @CSWYF3634076.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify bot added the needs-rebase label on Jun 30, 2025

@xjpang commented Jun 30, 2025

I merged this PR into v0.9.1 and served the ERNIE-4.5-21B-A3B-PT model. The output is garbled text, like the text below.

request:
curl -X POST "http://127.0.0.1:11522/v1/chat/completions" -H "Content-Type: application/json" -d '{ "model":"baidu/ERNIE-4.5-21B-A3B-PT", "messages": [ { "role": "user", "content": "你好" } ], "top_p": 1.0, "top_k": 40, "temperature": 1.0, "max_tokens": 128, "repetition_penalty": 1.0 }'

response:
{"id":"chatcmpl-8b62dd62ffa0400b82254ac0030c7fe5","object":"chat.completion","created":1751275598,"model":"baidu/ERNIE-4.5-21B-A3B-PT","choices":[{"index":0,"message":{"role":"assistant","reasoning_content":null,"content":"彩­ PC分 ⇒南昌 –“­**о­ ** arrows企­ shows – –­ –拖­画拖 \"拖挤 \r­丢 – –琶**________ arrows彩**丢④****** ****** **** \r**** ** ** **丢**** ** **②** ** ** ************ **潮②** ** **摇 ** **** **** ** **媒 ** ****②** \r** ** ****** ** ********① ****/ **­****,、③→、→→ \r →“ heading-->\r **→→*/\r。② />\r。","tool_calls":[]},"logprobs":null,"finish_reason":"length","stop_reason":null}],"usage":{"prompt_tokens":9,"total_tokens":137,"completion_tokens":128,"prompt_tokens_details":null},"prompt_logprobs":null,"kv_transfer_params":null}

@prnake commented Jun 30, 2025

(quoting xjpang's garbled-output report above)

Same issue, and I'm using the branch CSWYF3634076:dev, so it's very likely an implementation problem.

@CSWYF3634076 (Contributor, Author) commented:

(quoting the garbled-output report above)

I just ran the entire process once and didn't reproduce it. Could you please post your vllm serve command?

@prnake commented Jun 30, 2025

(quoting the garbled-output report and the request for the vllm serve command above)

python3 -m vllm.entrypoints.openai.api_server \
  --model /models/ERNIE-4.5-21B-A3B-PT     \
  --served-model-name ERNIE-4.5-21B-A3B-PT    \
  --tensor-parallel-size 8     \
  --dtype auto \
  --trust-remote-code
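
For anyone reproducing this, the same chat request can also be sent with the OpenAI Python client against a server started as above. This is a sketch: it assumes the default port 8000 since the command does not set --port, and it passes top_k / repetition_penalty through vLLM's extra_body extension.

# Sketch: reproduce the chat request above with the OpenAI Python client.
# Assumes the server above is reachable on the default port 8000; adjust
# base_url if --port was set.
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="ERNIE-4.5-21B-A3B-PT",
    messages=[{"role": "user", "content": "你好"}],
    temperature=1.0,
    top_p=1.0,
    max_tokens=128,
    extra_body={"top_k": 40, "repetition_penalty": 1.0},
)
print(resp.choices[0].message.content)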

@xjpang commented Jun 30, 2025

/models/ERNIE-4.5-21B-A3B-PT

['python3', '-m', 'vllm.entrypoints.openai.api_server', '--model=/models/ERNIE-4.5-21B-A3B-PT', '--host=127.0.0.1', '--port=11522', '--trust-remote-code', '--tensor-parallel-size=2', '--gpu-memory-utilization=0.95', '--max-logprobs=20', '--swap-space=10', '--max-num-seqs=32', '--no-enable-prefix-caching', '--enable-chunked-prefill', '--max-num-batched-tokens=2048']

@CSWYF3634076 (Contributor, Author) commented:

(quoting the garbled-output report, the request for the serve command, and the serve command above)

I have reproduced the issue; it appeared after I rebased onto the main branch. Please use the branch CSWYF3634076:ernie for now. I am currently investigating the cause of the issue and apologize for any inconvenience caused.

ceci3 pushed a commit to FlagOpen/FlagScale that referenced this pull request Jul 1, 2025
Adapted from this PR: vllm-project/vllm#20220

---------

Co-authored-by: Corle-hyz <heyongzhe18@mails.ucas.edu.cn>
@jeejeelee (Collaborator) left a comment


Some nit comments. Please provide feedback on https://github.com/vllm-project/vllm/pull/20220/files#r2173637449

@xjpang commented Jul 1, 2025

(quoting the earlier exchange: the garbled-output report, the serve command, and the note to use the CSWYF3634076:ernie branch)

I used the CSWYF3634076:ernie branch and asked "你能做什么?" ("What can you do?"). The output still seems to be wrong. (screenshot of the output omitted)

@CSWYF3634076 (Contributor, Author) commented:

(quoting the earlier exchange and xjpang's follow-up above)

May I ask whether the recommended hyperparameters are being used? I have created an issue specifically for collecting questions, and you can continue to ask questions there: CSWYF3634076#1

zihugithub pushed a commit to zihugithub/FlagScale that referenced this pull request Jul 1, 2025
Adapted from this PR: vllm-project/vllm#20220

---------

Co-authored-by: Corle-hyz <heyongzhe18@mails.ucas.edu.cn>
@mergify bot removed the needs-rebase label on Jul 2, 2025
@Isotr0py (Collaborator) left a comment


LGTM now! Thanks for adding this model!

@Isotr0py enabled auto-merge (squash) on July 2, 2025 02:57
@Isotr0py removed the labels performance, rocm, frontend, speculative-decoding, ci/build, v1, multi-modality, tool-calling, llama, qwen on Jul 2, 2025
@Isotr0py enabled auto-merge (squash) on July 2, 2025 06:48
@vllm-bot merged commit e303dcf into vllm-project:main on Jul 2, 2025
61 of 66 checks passed
@CSWYF3634076 (Contributor, Author) commented:

#20376

@SongDI911 commented:

When I was testing the model Ernie4_5_VLMoeForConditionalGeneration, I found that it does not seem to support the vision model:

ValueError: Ernie4_5_VLMoeForConditionalGeneration has no vLLM implementation and the Transformers implementation is not compatible with vLLM. Try setting VLLM_USE_V1=0.

@CSWYF3634076 (Contributor, Author) commented:

(quoting SongDI911's report about Ernie4_5_VLMoeForConditionalGeneration above)

vLLM support for the Ernie4.5 multimodal model is still under development and will be provided in the near future. You can use FastDeploy to deploy the Paddle version to try it first.

@SongDI911 commented:

(quoting the exchange above)

FastDeploy has other bugs, such as memory sharing...

@CSWYF3634076 (Contributor, Author) commented:

@xjpang @prnake Using the latest vllm-project/vllm main branch code no longer produces garbled output.

@CSWYF3634076 (Contributor, Author) commented:

(quoting the exchange about the Ernie4.5 multimodal model and FastDeploy above)

@SongDI911 Can you open an issue in FastDeploy describing the problem, or join our Ernie user WeChat group?

@SongDI911 commented Jul 3, 2025

(quoting the exchange above)

Here it is: PaddlePaddle/FastDeploy#2683.
I've also applied to join the WeChat group; my name is gxs.

huydhn pushed a commit to huydhn/vllm that referenced this pull request Jul 8, 2025
Signed-off-by: wangyafeng <wangyafeng@baidu.com>
Labels
documentation (Improvements or additions to documentation), new-model (Requests to new models), ready (ONLY add when PR is ready to merge/full CI is needed)
Projects
Status: Done
7 participants