[Model] Add reason parser for Hunyuan A13B Model. #20625
Summary of Changes
Hello @kzjeef, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request adds a new reasoning parser tailored for the Hunyuan A13B model. It includes logic for both non-streaming and streaming modes, using regular expressions and a token ID-based state machine, respectively, to extract reasoning and response content. Comprehensive test cases are also included to ensure the parser's reliability.
Highlights
- New Reasoning Parser: Introduces a new reasoning parser, `hunyuan_a13b`, specifically for the Hunyuan A13B model, designed to handle a special token indicating the thinking state.
- Non-stream Mode: Implements regular expressions to extract reasoning and response parts from the model output in non-streaming mode.
- Stream Mode: Utilizes a token ID-based state machine to manage transitions between thinking and response states in streaming mode.
- Test Coverage: Adds new test cases to ensure the correctness and stability of the new reasoning parser.
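To make the non-stream extraction concrete, here is a minimal sketch of a regex-based splitter. The `<think>`/`<answer>` tag strings are an assumption inferred from this PR's discussion (the `</answer>` token sequence appears later in the thread), not confirmed from the parser source:

```python
import re

# Assumed tag format for Hunyuan A13B output: <think>...</think><answer>...</answer>
REASON_RE = re.compile(
    r"<think>(?P<reasoning>.*?)</think>\s*(?:<answer>(?P<content>.*?)</answer>)?",
    re.DOTALL,
)

def split_reasoning(model_output: str):
    """Split a complete (non-streaming) output into (reasoning, content)."""
    match = REASON_RE.search(model_output)
    if match is None:
        # No thinking block: treat the whole output as content.
        return None, model_output
    return match.group("reasoning"), match.group("content")
```

A lazy `.*?` with `re.DOTALL` keeps each group from swallowing the closing tag even when the text spans multiple lines.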
Code Review
This pull request introduces a new reasoning parser, `hunyuan_a13b`, for the Hunyuan A13B model. The implementation includes regular expressions for non-streaming mode and a token-ID-based state machine for streaming. The review identified a critical issue in the streaming parser related to a single-token input assumption, as well as opportunities to improve maintainability by removing duplicated test cases and unused code.
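A minimal sketch of a token-id driven streaming state machine of the kind described. The token ids `THINK_END_ID` and `ANSWER_START_ID` are hypothetical placeholders (the real values come from the Hunyuan A13B tokenizer); checking membership in the whole delta, rather than assuming a single token, avoids the single-token assumption flagged in the review:

```python
from enum import Enum, auto

class State(Enum):
    THINKING = auto()    # inside the think block
    RESPONDING = auto()  # inside the answer block

# Hypothetical special-token ids; real ids come from the tokenizer.
THINK_END_ID = 1001
ANSWER_START_ID = 1002

class StreamingReasonParser:
    """Sketch of a token-id state machine for streaming extraction."""

    def __init__(self) -> None:
        self.state = State.THINKING

    def process(self, delta_token_ids: list[int], delta_text: str):
        """Return (reasoning_delta, content_delta) for this chunk."""
        if THINK_END_ID in delta_token_ids or ANSWER_START_ID in delta_token_ids:
            # Transition on the tag tokens; the tags themselves are swallowed.
            self.state = State.RESPONDING
            return "", ""
        if self.state is State.THINKING:
            return delta_text, ""
        return "", delta_text
```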
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default; only a subset of checks runs automatically. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging (for example, by adding 🚀).
This parser does not implement … (see vllm/vllm/entrypoints/openai/serving_chat.py, line 760 in a4c2331).
Moreover, the separator between reasoning and response (… see vllm/vllm/entrypoints/openai/serving_chat.py, lines 748 to 750 in a4c2331).
Thanks for explaining.
e596f6b → 56ce6d5 (compare)
Hi @huyz-git, I have added an is_reasoning_end in the latest MR code.
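For readers unfamiliar with the hook being discussed: vLLM's serving code asks the reasoning parser whether the think phase has finished. A minimal sketch of such a check (with `THINK_END_ID` as a hypothetical placeholder for the real think-close token id) might look like:

```python
THINK_END_ID = 1001  # hypothetical id of the token that closes the think block

def is_reasoning_end(input_ids: list[int]) -> bool:
    """Return True once the think-close token has appeared in the prompt+output ids."""
    return THINK_END_ID in input_ids
```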
- A new `hunyuan_a13b` reasoning parser was added. Because the upcoming model uses a special token for the think state, this parser is only for the A13B model.
- For non-stream mode, use a regex to extract the reasoning part and the response part.
- For stream mode, use a token-id-based state machine to control the state change.
- Add a reasoning-end function that checks the state.
- Add test cases.

Signed-off-by: Asher Zhang <asherszhang@tencent.com>
It seems that this PR still breaks the streaming tool parser. For requests with … For requests with thinking, the reasoning parser does not implement … (see vllm/vllm/entrypoints/openai/serving_chat.py, lines 749 to 754 in 6a9e6b2).
The returned current_token_ids will be used here (vllm/vllm/entrypoints/openai/serving_chat.py, line 815 in 6a9e6b2) and then here (vllm/vllm/entrypoints/openai/serving_chat.py, lines 630 to 634 in 6a9e6b2), which results in the following error: …
Moreover, even if I add the …
(source: https://github.com/Tencent-Hunyuan/Hunyuan-A13B/blob/2798f3c8b6a69e0ce93950b0d2417203cf950fa0/agent/hunyuan_tool_parser.py#L147-L148)
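Since the complaint is that reasoning tokens leak into the token ids consumed by the downstream tool parser, one plausible shape for the missing hook is a content-id extractor that drops everything up to and including the think-close token. This is an illustrative sketch with a hypothetical `THINK_END_ID`, not the actual vLLM interface implementation:

```python
THINK_END_ID = 1001  # hypothetical token id that closes the think block

def extract_content_ids(input_ids: list[int]) -> list[int]:
    """Return only the token ids after the reasoning block, so the
    downstream tool parser never sees reasoning tokens."""
    if THINK_END_ID not in input_ids:
        return []  # still thinking: no content tokens yet
    return input_ids[input_ids.index(THINK_END_ID) + 1:]
```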
Thanks, I'll add this function in a later commit. I found this issue too, and I changed the chat code to fix this error. The function-call parser in Hunyuan's GitHub repo is not working and needs an update; I have made some modifications to this tool parser and will send the PR to vLLM today. (source: https://github.com/Tencent-Hunyuan/Hunyuan-A13B/blob/2798f3c8b6a69e0ce93950b0d2417203cf950fa0/agent/hunyuan_tool_parser.py#L147-L148) Currently I have tested these cases:
The tool choice "function name" implementation may need to change, since it doesn't filter the content, so the arguments may contain `<tool_call>` / `</tool_call>` meta strings. Maybe we can discuss this in another PR.
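The filtering being suggested could be as simple as stripping the meta tags before the arguments are parsed. This is a sketch of the idea, not the actual tool-parser code; the tag names are taken from the comment above:

```python
import re

# Strip <tool_call> / </tool_call> meta strings from extracted content.
TOOL_TAG_RE = re.compile(r"</?tool_call>")

def strip_tool_tags(content: str) -> str:
    return TOOL_TAG_RE.sub("", content)
```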
@huyz-git The Hunyuan A13B tool calling code was submitted in this PR (#20820).
Signed-off-by: Asher Zhang <asherszhang@tencent.com>
@kzjeef When a response finishes with the following sequence, the last part is not filtered: [ '!\n', '</', 'answer', '>' ]
Hi @fbirlik, could you provide the input parameters for this issue?
I added a test case for !\n but could not reproduce the issue; it may be that the test case's tokenizer tokenizes !\n differently, so I need to reproduce it with your input.
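The failure mode reported above is a closing tag split across several stream deltas. One way to guard against it, sketched here in the string domain (the real parser works on token ids, so this is illustrative only), is to buffer any trailing text that could still grow into `</answer>` and only emit the rest:

```python
CLOSE_TAG = "</answer>"

def emit_safe(buffer: str):
    """Split buffer into (safe_to_emit, held_back), holding back any
    trailing substring that could still grow into CLOSE_TAG."""
    for i in range(len(buffer)):
        if CLOSE_TAG.startswith(buffer[i:]):
            return buffer[:i], buffer[i:]
    return buffer, ""

def stream_content(deltas: list[str]) -> str:
    out, held = "", ""
    for d in deltas:
        held += d
        if CLOSE_TAG in held:
            # The tag completed: emit what precedes it and stop.
            return out + held.split(CLOSE_TAG)[0]
        safe, held = emit_safe(held)
        out += safe
    # Stream ended with an incomplete candidate tag: it was real text.
    return out + held
```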
Signed-off-by: Asher Zhang <asherszhang@tencent.com>
Signed-off-by: Patrick von Platen <patrick.v.platen@gmail.com>
Purpose
This MR introduces a new reasoning parser, `hunyuan_a13b`, specifically designed for the newly released A13B model. This parser is tailored to handle a special token used to indicate the thinking state, which is exclusive to this model.

Key Changes
- Added the `hunyuan_a13b` parser to handle the reasoning and response extraction logic.

Test Plan
unit test:
server test:
reasoning without streaming:
reasoning with streaming:
Test Result
unit test: