
[Model] Add reason parser for Hunyuan A13B Model. #20625


Merged
merged 1 commit into vllm-project:main from the hy-reason-parser branch on Jul 10, 2025

Conversation

kzjeef
Contributor

@kzjeef kzjeef commented Jul 8, 2025

Purpose

This PR introduces a new reasoning parser, hunyuan_a13b, designed specifically for the newly released A13B model. The parser is tailored to handle a special token used to indicate the thinking state, which is exclusive to this model.

Key Changes

  • New Reason Parser: Added the hunyuan_a13b parser to handle the reasoning and response extraction logic.
  • Non-stream Mode: Uses a regular expression to extract both the reasoning and response parts from the model output (a sketch follows this list).
  • Stream Mode: Implements a token ID–based state machine to manage transitions between thinking and response states.
  • Test Coverage: New test cases have been added to ensure correctness and stability.
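
A minimal sketch of the non-stream extraction (illustrative only, not the merged implementation; the <think>/<answer> tag names are assumptions based on the discussion later in this thread):

import re

# Assumed output shape: <think>...</think>\n<answer>...</answer>
REASONING_PATTERN = re.compile(
    r"<think>(?P<reasoning>.*?)</think>\s*(?:<answer>)?(?P<content>.*?)(?:</answer>)?\s*$",
    re.DOTALL,
)

def extract_reasoning(model_output: str) -> tuple[str | None, str | None]:
    """Split a complete (non-streaming) completion into reasoning and content."""
    match = REASONING_PATTERN.search(model_output)
    if match is None:
        # No thinking block found: treat the whole output as regular content.
        return None, model_output
    return match.group("reasoning"), match.group("content")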

Test Plan

unit test:

pytest tests/reasoning/test_hunyuan_reasoning_parser.py

server test:

python3 -m vllm.entrypoints.openai.api_server \
    --host 0.0.0.0 \
    --reasoning-parser hunyuan_a13b \
    --enable_reasoning \
    --trust_remote_code \
    --tensor-parallel-size 2 \
    --port 8000 \
    --model tencent/Hunyuan-A13B-Instruct
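
A quick sanity check that the server is reachable before running the examples below (host and port are assumed from the command above):

import requests

# Lists the served models on the OpenAI-compatible endpoint started above.
resp = requests.get("http://localhost:8000/v1/models")
print(resp.json())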

reason without stream:

from openai import OpenAI

# Modify OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

models = client.models.list()
model = models.data[0].id

# Round 1
messages = [{"role": "user", "content": "9.11 and 9.8, which is greater?"}]
# For granite, add: `extra_body={"chat_template_kwargs": {"thinking": True}}`
# For Qwen3 series, if you want to disable thinking in reasoning mode, add:
# extra_body={"chat_template_kwargs": {"enable_thinking": False}}
response = client.chat.completions.create(model=model, messages=messages)

reasoning_content = response.choices[0].message.reasoning_content
content = response.choices[0].message.content

print("reasoning_content:", reasoning_content)
print("\n\ncontent:", content)

reason with stream:

from openai import OpenAI

# Modify OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

models = client.models.list()
model = models.data[0].id

messages = [{"role": "user", "content": "9.11 and 9.8, which is greater?"}]
stream = client.chat.completions.create(model=model,
                                        messages=messages,
                                        stream=True)
print("client: Start streaming chat completions...")
printed_reasoning_content = False
printed_content = False

final_reason_text = ""
final_content_text = ""

for chunk in stream:
    # print(f"chunk: {chunk}")
    reasoning_content = None
    content = None
    # Check the content is reasoning_content or content
    if hasattr(chunk.choices[0].delta, "reasoning_content"):
        reasoning_content = chunk.choices[0].delta.reasoning_content
        if reasoning_content:
            final_reason_text += reasoning_content

    if hasattr(chunk.choices[0].delta, "content"):
        content = chunk.choices[0].delta.content
        if content:
            final_content_text += content

    if reasoning_content is not None:
        if not printed_reasoning_content:
            printed_reasoning_content = True
            print("reasoning_content:", end="", flush=True)
        print(reasoning_content, end="", flush=True)
    if content is not None:
        if not printed_content:
            printed_content = True
            print("\ncontent:", end="", flush=True)
        # Extract and print the content
        print(content, end="", flush=True)

print(">>> reason :")
print(final_reason_text)
print(">>> response: ")
print(final_content_text)

Test Result

unit test:

= 16 passed, 1 warning in 5.00s =========

@kzjeef kzjeef requested a review from aarnphm as a code owner July 8, 2025 11:15
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Summary of Changes

Hello @kzjeef, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request adds a new reasoning parser tailored for the Hunyuan A13B model. It includes logic for both non-streaming and streaming modes, using regular expressions and a token ID-based state machine, respectively, to extract reasoning and response content. Comprehensive test cases are also included to ensure the parser's reliability.

Highlights

  • New Reasoning Parser: Introduces a new reasoning parser, hunyuan_a13b, specifically for the Hunyuan A13B model, designed to handle a special token indicating the thinking state.
  • Non-stream Mode: Implements regular expressions to extract reasoning and response parts from the model output in non-streaming mode.
  • Stream Mode: Utilizes a token ID-based state machine to manage transitions between thinking and response states in streaming mode.
  • Test Coverage: Adds new test cases to ensure the correctness and stability of the new reasoning parser.

@kzjeef kzjeef force-pushed the hy-reason-parser branch from 44ae8fc to 58485a5 Compare July 8, 2025 11:22
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a new reason parser, hunyuan_a13b, for the Hunyuan A13B model. The implementation includes regular expressions for non-streaming mode and a token-ID-based state machine for streaming. The review identified a critical issue in the streaming parser related to single-token input assumption, and opportunities to improve code maintainability by removing duplicated test cases and unused code.


github-actions bot commented Jul 8, 2025

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge.

🚀

@huyz-git

huyz-git commented Jul 8, 2025

This parser does not implement the is_reasoning_end method, so it may not work together with the tool call parser, since the current chat API server uses reasoning_parser.is_reasoning_end to determine whether it should handle tool calls or not:

# handle tool calls only after reasoning is done,

Moreover, the separator between reasoning and response (</think> or <answer>) of the Hunyuan A13B model is not a single token, so it may not be possible to use the reasoning parser and tool call parser together without refactoring the chat API server. The chat API server only uses the currently generated token id (not the whole generated ids) to check is_reasoning_end, and a single generated token id is not enough to determine the reasoning end:

if reasoning_parser.is_reasoning_end(
        list(output.token_ids)):
    reasoning_end_arr[i] = True

@kzjeef
Contributor Author

kzjeef commented Jul 8, 2025

This parser does not implement the is_reasoning_end method, so it may not work together with the tool call parser, since the current chat API server uses reasoning_parser.is_reasoning_end to determine whether it should handle tool calls or not:

# handle tool calls only after reasoning is done,

Moreover, the separator between reasoning and response (</think> or <answer>) of the Hunyuan A13B model is not a single token, so it may not be possible to use the reasoning parser and tool call parser together without refactoring the chat API server. The chat API server only uses the currently generated token id (not the whole generated ids) to check is_reasoning_end:

if reasoning_parser.is_reasoning_end(
        list(output.token_ids)):
    reasoning_end_arr[i] = True

Thanks for explaining.
Agreed, the A13B doesn't use a single token to switch out of the thinking state, unlike DeepSeek or Qwen3.
At least the reasoning parser works now, even though the functionality is not complete, similar to the granite_reasoning_parser.

@kzjeef kzjeef force-pushed the hy-reason-parser branch 2 times, most recently from e596f6b to 56ce6d5 Compare July 8, 2025 17:00
@kzjeef kzjeef force-pushed the hy-reason-parser branch from 56ce6d5 to a8acd11 Compare July 9, 2025 09:35
@kzjeef
Contributor Author

kzjeef commented Jul 9, 2025

This parser does not implement the is_reasoning_end method, so it may not work together with the tool call parser, since the current chat API server uses reasoning_parser.is_reasoning_end to determine whether it should handle tool calls or not:

# handle tool calls only after reasoning is done,

Moreover, the separator between reasoning and response (</think> or <answer>) of the Hunyuan A13B model is not a single token, so it may not be possible to use the reasoning parser and tool call parser together without refactoring the chat API server. The chat API server only uses the currently generated token id (not the whole generated ids) to check is_reasoning_end, and a single generated token id is not enough to determine the reasoning end:

if reasoning_parser.is_reasoning_end(
        list(output.token_ids)):
    reasoning_end_arr[i] = True

Hi @huyz-git ,

I have added an is_reasoning_end method in the latest PR code.
It is implemented by checking self.current_state.
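
A rough sketch of that approach (illustrative only; class, method, and state names here are assumptions, not the merged code):

from enum import Enum

class ThinkState(Enum):
    IDLE = "idle"
    THINKING = "thinking"
    RESPONSE = "response"

class HunyuanA13BReasoningParserSketch:
    """Skeleton of a state-machine-driven reasoning parser."""

    def __init__(self) -> None:
        self.current_state = ThinkState.IDLE

    def is_reasoning_end(self, input_ids: list[int]) -> bool:
        # The thinking/answer separator spans several tokens, so the decision
        # is keyed off the streaming state machine rather than a single
        # delimiter token id.
        return self.current_state == ThinkState.RESPONSE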

- A new `hunyuan_a13b` reasoning parser was added.
  Because the upcoming model uses a special token
  for the think state, this parser is only for the A13B model.

- For non-stream mode, use a regex to extract the
  reason part and the response part.
- For stream mode, use a token id based state machine
  to control the state change.
- Add reasoning-end function by checking state.
- Add test cases.

Signed-off-by: Asher Zhang <asherszhang@tencent.com>
@kzjeef kzjeef force-pushed the hy-reason-parser branch from a8acd11 to ffebf8f Compare July 9, 2025 12:29
@simon-mo simon-mo enabled auto-merge (squash) July 10, 2025 01:23
@github-actions github-actions bot added the ready ONLY add when PR is ready to merge/full CI is needed label Jul 10, 2025
@simon-mo simon-mo merged commit b140416 into vllm-project:main Jul 10, 2025
73 checks passed
@huyz-git

It seems that this PR still breaks the stream tool parser.

For requests with "chat_template_kwargs": {"enable_thinking": false} (i.e., without thinking), the reasoning_parser.current_state is always "idle", and therefore reasoning_parser.is_reasoning_end(...) is always False, and the API server will never call the tool parser.

For requests with thinking, the reasoning parser does not implement reasoning_parser.extract_content_ids, which is required here:

if reasoning_parser.is_reasoning_end(
        list(output.token_ids)):
    reasoning_end_arr[i] = True
    current_token_ids = \
        reasoning_parser.extract_content_ids(
            list(output.token_ids))

The returned current_token_ids will then be used here:
all_previous_token_ids[i] = current_token_ids

and then here:
previous_text = previous_texts[i]
previous_token_ids = all_previous_token_ids[i]
current_text = previous_text + delta_text
current_token_ids = previous_token_ids + list(
    output.token_ids)

which results in the following error:

ERROR 07-11 14:35:27 [serving_chat.py:954] Error in chat completion stream generator.
ERROR 07-11 14:35:27 [serving_chat.py:954] Traceback (most recent call last):
ERROR 07-11 14:35:27 [serving_chat.py:954]   File "/vllm/vllm/entrypoints/openai/serving_chat.py", line 633, in chat_completion_stream_generator
ERROR 07-11 14:35:27 [serving_chat.py:954]     current_token_ids = previous_token_ids + list(
ERROR 07-11 14:35:27 [serving_chat.py:954]                         ^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 07-11 14:35:27 [serving_chat.py:954] TypeError: unsupported operand type(s) for +: 'NoneType' and 'list'

Moreover, even if I add an extract_content_ids function to the reasoning parser that returns an empty list [], the tool parser still does not work. This is because a response with tool calls should have the format <answer>\n<tool_calls>[...]</tool_calls>\n</answer>, and the official tool parser uses the string "answer>\n<" to check whether the current response contains tool calls:

        # Simplify detection: if it begins with "<" treat it as a function call
        is_function_call = ("answer>\n<" in current_text.strip())

(source: https://github.com/Tencent-Hunyuan/Hunyuan-A13B/blob/2798f3c8b6a69e0ce93950b0d2417203cf950fa0/agent/hunyuan_tool_parser.py#L147-L148 )
But the reasoning parser only sets current_state to response after the complete <answer>, which makes the "answer>\n<" string disappear from the output content, so the tool parser will never detect the tool call.
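
A small illustration of that interaction (the detection line is quoted from the official tool parser above; the example strings are assumptions):

# The official tool parser keys on "answer>\n<". Once the reasoning parser
# strips the <answer> wrapper from the content it emits, that marker is gone.
raw_model_output = '<answer>\n<tool_calls>[{"name": "get_weather"}]</tool_calls>\n</answer>'
content_after_reasoning_parser = '\n<tool_calls>[{"name": "get_weather"}]</tool_calls>\n'

def is_function_call(current_text: str) -> bool:
    # Simplified detection, as in the official tool parser.
    return "answer>\n<" in current_text.strip()

print(is_function_call(raw_model_output))                # True
print(is_function_call(content_after_reasoning_parser))  # False -> tool call is missed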

@kzjeef
Contributor Author

kzjeef commented Jul 11, 2025

It seems that this PR still breaks the stream tool parser.

For requests with "chat_template_kwargs": {"enable_thinking": false} (i.e., without thinking), the reasoning_parser.current_state is always "idle", and therefore reasoning_parser.is_reasoning_end(...) is always False, and the API server will never call the tool parser.

For requests with thinking, the reasoning parser does not implement reasoning_parser.extract_content_ids, which is required here:

Thanks, I'll add this function in a later commit. I found this issue too and changed the chat code to fix this error.
I think it's better to add this function to filter out tokens, but the separator is not a single special token, it's three tokens, which makes things harder.

The function call parser in Hunyuan's GitHub repository is not working and needs to be updated.

I have made some modifications to this tool parser, and I'll send the PR to vLLM today. (source: https://github.com/Tencent-Hunyuan/Hunyuan-A13B/blob/2798f3c8b6a69e0ce93950b0d2417203cf950fa0/agent/hunyuan_tool_parser.py#L147-L148 )

Currently I have tested these cases:

  • Tool Choice Auto w/ Stream
  • Tool Choice Auto w/o Stream
  • Tool Choice Function name w/ Stream
  • Tool Choice Function name w/o Stream
  • Tool Choice required w/ Stream
  • Tool Choice required w/o Stream

The tool choice "function name" implementation may need to change, since it doesn't filter the content, so the arguments may contain <tool_call> </tool_call> meta strings.

Maybe we can discuss this in another PR.

@kzjeef
Contributor Author

kzjeef commented Jul 11, 2025


@huyz-git The Hunyuan A13B tool calling code was submitted in this PR: #20820

Chen-zexi pushed a commit to Chen-zexi/vllm that referenced this pull request Jul 13, 2025
Signed-off-by: Asher Zhang <asherszhang@tencent.com>
@fbirlik

fbirlik commented Jul 13, 2025

@kzjeef when the response finishes with the following sequence, the last part is not filtered.

[ '!\n', '</', 'answer', '>' ]

@kzjeef
Contributor Author

kzjeef commented Jul 14, 2025

@kzjeef when the response finishes with the following sequence, the last part is not filtered.

[ '!\n', '</', 'answer', '>' ]

Hi @fbirlik

Could you provide the input and the generation parameters for this issue?

  • input text:
  • generation parameters:
  • stream/non-stream:

I added a simple test case for !\n, but I cannot seem to reproduce this issue; maybe the test case's tokenizer produces different tokens for !\n.

So I need to reproduce this issue with your input.
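
For reference, a minimal sketch of the kind of suffix buffering a streaming parser needs so that a closing tag split across deltas, as in the sequence above, is still filtered (illustrative only; the closing tag is assumed to be </answer>):

CLOSE_TAG = "</answer>"

def filter_stream(deltas: list[str]) -> str:
    """Emit streamed content while withholding any suffix that may still
    grow into the closing tag; drop the tag once it is complete."""
    emitted = ""
    pending = ""
    for delta in deltas:
        pending += delta
        # Hold back the longest suffix of `pending` that is a prefix of CLOSE_TAG.
        hold = 0
        for k in range(1, min(len(CLOSE_TAG), len(pending)) + 1):
            if CLOSE_TAG.startswith(pending[-k:]):
                hold = k
        emitted += pending[:len(pending) - hold]
        pending = pending[len(pending) - hold:]
    # End of stream: drop a completed closing tag, flush a partial one.
    # (A full implementation would also handle CLOSE_TAG arriving mid-delta.)
    if pending != CLOSE_TAG:
        emitted += pending
    return emitted

print(repr(filter_stream(['!\n', '</', 'answer', '>'])))  # expected: '!\n'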

patrickvonplaten pushed a commit to patrickvonplaten/vllm that referenced this pull request Jul 15, 2025
Signed-off-by: Asher Zhang <asherszhang@tencent.com>
Signed-off-by: Patrick von Platen <patrick.v.platen@gmail.com>