[Bugfix] Mistral tool parser streaming update #19425
Conversation
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs would not trigger full CI run by default; only a small subset of checks runs automatically. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add 🚀
Summary of Changes
Hello @avigny, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request addresses an issue with streaming tool calls for Mistral models by replacing the previous `partial_json_parser`-based implementation with a more robust, custom stateful parser. The new approach aims to accurately extract tool call information, including names and arguments, as tokens are streamed, improving the reliability of tool use in streaming mode. The changes include a significant rewrite of the parsing logic and the addition of comprehensive test coverage.
Highlights
- Refactor Streaming Parser: The core logic for parsing Mistral tool calls during streaming has been completely rewritten.
- Removed Dependency: The dependency on the `partial_json_parser` library for streaming tool call extraction has been removed.
- New Parsing Mechanism: Introduced a custom stateful parsing mechanism using regex and `json.JSONDecoder.raw_decode` to incrementally extract tool call names and arguments from the raw token stream (see the sketch below).
- Comprehensive Tests: Added a new, extensive test file (`tests/tool_use/test_mistral_tool_parser.py`) with various test cases covering both streaming and non-streaming scenarios for Mistral tool calls, including single and multiple tool calls with different argument structures.
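For readers unfamiliar with `raw_decode`, here is a minimal sketch of the idea (an illustration with assumed names such as `try_parse`, not the PR's actual code): `raw_decode` parses one JSON value from a buffer and reports where it stopped, which makes it possible to wait until a tool call's arguments are complete and to ignore whatever follows them.

```python
# Minimal sketch of the raw_decode idea -- illustrative only; the PR's
# regex and state handling differ.
import json
import re

decoder = json.JSONDecoder()
name_re = re.compile(r'\s*(\w+)\s*{')  # assumed shape: a bare name before "{"

def try_parse(buffer: str):
    """Return (name, arguments, end_index) once `buffer` holds a complete
    `name{...}` call, or None while the arguments JSON is still streaming."""
    match = name_re.match(buffer)
    if match is None:
        return None
    try:
        # raw_decode parses one JSON value and tells us where it ended, so
        # trailing tokens (e.g. a second [TOOL_CALLS]) are harmless.
        args, end = decoder.raw_decode(buffer, match.end() - 1)
    except json.JSONDecodeError:
        return None  # arguments incomplete; wait for more deltas
    return match.group(1), args, end

print(try_parse('add{"a": 3, "b"'))         # None -- still incomplete
print(try_parse('add{"a": 3, "b": 4}xyz'))  # ('add', {'a': 3, 'b': 4}, 19)
```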
Code Review
This pull request refactors the streaming tool call parsing logic for Mistral models and adds a comprehensive test suite. The core change replaces `partial_json_parser` with a custom regex- and `json.JSONDecoder.raw_decode`-based approach for finer-grained control over the streaming process. The new tests cover a variety of scenarios. The review includes stylistic suggestions for the tests and points for consideration regarding complexity and state management in the new parsing logic.
```python
# select as the current tool call the one we're on the state at

current_tool_call: dict = tool_call_arr[self.current_tool_id] \
    if len(tool_call_arr) > 0 else {}

# case -- if no tokens have been streamed for the tool, e.g.
# only the array brackets, stream nothing
if len(tool_call_arr) == 0:
    return None

# case: we are starting a new tool in the array
# -> array has > 0 length AND length has moved past cursor
elif (len(tool_call_arr) > 0
      and len(tool_call_arr) > self.current_tool_id + 1):

    # if we're moving on to a new call, first make sure we
    # haven't missed anything in the previous one that was
    # auto-generated due to JSON completions, but wasn't
    # streamed to the client yet.
    if self.current_tool_id >= 0:
        diff: Union[str, None] = current_tool_call.get("arguments")

        if diff:
            diff = json.dumps(diff, ensure_ascii=False).replace(
                self.streamed_args_for_tool[self.current_tool_id],
                "")
            delta = DeltaMessage(tool_calls=[
                DeltaToolCall(index=self.current_tool_id,
                              function=DeltaFunctionCall(
                                  arguments=diff).model_dump(
                                      exclude_none=True))
            ])
            self.streamed_args_for_tool[
                self.current_tool_id] += diff
        else:
            delta = None
    else:
        delta = None
    # re-set stuff pertaining to progress in the current tool
    self.current_tool_id = len(tool_call_arr) - 1
    self.current_tool_name_sent = False
    self.streamed_args_for_tool.append("")
    logger.debug("starting on new tool %d", self.current_tool_id)
    return delta

# case: update an existing tool - this is handled below

# if the current tool name hasn't been sent, send if available
# - otherwise send nothing
if not self.current_tool_name_sent:
    function_name = current_tool_call.get("name")
    if function_name:

        delta = DeltaMessage(tool_calls=[
```

```python
    function_name, name_end_index = self._extracted_complete_name(
        raw_current_tool_call, self.current_attribute_start_index)
except IndexError:
    # name value has not started being generated
    return self._none_or_additional_content(additional_content)
if function_name == "":
    return self._none_or_additional_content(additional_content)
else:
    # set because the function name was successfully retrieved
    assert name_end_index is not None
    self.current_tool_name_finished = True
    self.current_element_streaming = None
    self.current_attribute_start_index = -1
    self.previous_attribute_end_index = name_end_index
    delta = DeltaMessage(
        content=additional_content,
        tool_calls=[
            DeltaToolCall(index=self.current_tool_id,
                          type="function",
                          id=MistralToolCall.generate_random_id(),
                          function=DeltaFunctionCall(
                              name=function_name).model_dump(
                                  exclude_none=True))
        ])
    return delta
if self.current_element_streaming == "arguments":
    try:
        diff, arguments_end_index = self._extract_argument_fragment(
            raw_current_tool_call, self.current_attribute_start_index,
            delta_text)
        self.current_tool_arguments_finished = arguments_end_index != -1
        if self.current_tool_arguments_finished:
            self.current_element_streaming = None
            self.current_attribute_start_index = -1
            self.previous_attribute_end_index = arguments_end_index
            delta = DeltaMessage(
                content=additional_content,
                tool_calls=[
                    DeltaToolCall(index=self.current_tool_id,
                                  function=DeltaFunctionCall(
                                      arguments=diff).model_dump(
                                          exclude_none=True))
                ])
            self.current_tool_name_sent = True
        else:
            delta = None
        return delta
    except IndexError:
        # arguments value has not started being generated
        return self._none_or_additional_content(additional_content)
```
Tests are similar to the ones added for Jamba models in vllm-project#9154. Signed-off-by: avigny <47987522+avigny@users.noreply.github.com>
Force-pushed from c468495 to d6d17c1
@hibukipanim I did run the test you provided in your issue description #17585 (comment) and got the following output:

```
ChoiceDeltaToolCall(index=0, id='j6OY9szTS', function=ChoiceDeltaToolCallFunction(arguments=None, name='mcp_confluence'), type='function')
ChoiceDeltaToolCall(index=0, id=None, function=ChoiceDeltaToolCallFunction(arguments='{"', name=None), type=None)
ChoiceDeltaToolCall(index=0, id=None, function=ChoiceDeltaToolCallFunction(arguments='query', name=None), type=None)
ChoiceDeltaToolCall(index=0, id=None, function=ChoiceDeltaToolCallFunction(arguments='":', name=None), type=None)
ChoiceDeltaToolCall(index=0, id=None, function=ChoiceDeltaToolCallFunction(arguments=' "', name=None), type=None)
ChoiceDeltaToolCall(index=0, id=None, function=ChoiceDeltaToolCallFunction(arguments='co', name=None), type=None)
ChoiceDeltaToolCall(index=0, id=None, function=ChoiceDeltaToolCallFunction(arguments='ffee', name=None), type=None)
ChoiceDeltaToolCall(index=0, id=None, function=ChoiceDeltaToolCallFunction(arguments='",', name=None), type=None)
ChoiceDeltaToolCall(index=0, id=None, function=ChoiceDeltaToolCallFunction(arguments=' "', name=None), type=None)
ChoiceDeltaToolCall(index=0, id=None, function=ChoiceDeltaToolCallFunction(arguments='limit', name=None), type=None)
ChoiceDeltaToolCall(index=0, id=None, function=ChoiceDeltaToolCallFunction(arguments='":', name=None), type=None)
ChoiceDeltaToolCall(index=0, id=None, function=ChoiceDeltaToolCallFunction(arguments=' ', name=None), type=None)
ChoiceDeltaToolCall(index=0, id=None, function=ChoiceDeltaToolCallFunction(arguments='1', name=None), type=None)
ChoiceDeltaToolCall(index=0, id=None, function=ChoiceDeltaToolCallFunction(arguments='}', name=None), type=None)
```

It seems to fix your issue.
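For anyone reproducing this, a hedged sketch of how a client stitches such deltas back together: the first delta for an index carries the id/type/name, and later deltas append argument fragments. The dicts below are stand-ins for the OpenAI client objects shown above.

```python
import json

# Plain-dict stand-ins for the ChoiceDeltaToolCall fields from the output above
# (argument fragments condensed here for brevity).
deltas = [
    {"index": 0, "id": "j6OY9szTS", "name": "mcp_confluence", "arguments": None},
    {"index": 0, "id": None, "name": None, "arguments": '{"'},
    {"index": 0, "id": None, "name": None, "arguments": 'query": "coffee", "limit": 1}'},
]

calls: dict = {}
for d in deltas:
    call = calls.setdefault(d["index"], {"id": None, "name": None, "arguments": ""})
    call["id"] = call["id"] or d["id"]          # id arrives once, on the first delta
    call["name"] = call["name"] or d["name"]    # so does the function name
    call["arguments"] += d["arguments"] or ""   # fragments concatenate into JSON

print(calls[0]["name"], json.loads(calls[0]["arguments"]))
# mcp_confluence {'query': 'coffee', 'limit': 1}
```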
@avigny hey! I've been trying to test your solution, but with no success. This is what I'm doing:
Where template.jinja is this one and mistral_tool_parser.py is the one that you've created. I'm using this test request:
When I set stream to false, I'm getting this response:
And this error:
When I set stream=true, I don't receive any errors, but the response does not have tool calls:
Am I doing something wrong here?
Looks like this PR unfortunately doesn't fix the issues on Mistral Small 3.2. API call:

```json
{
  "stream": false,
  "temperature": 0.15,
  "top_p": 1.0,
  "tool_choice": "auto",
  "model": "mistralai/Mistral-Small-3.2-24B-Instruct-2506",
  "messages": [
    {
      "role": "user",
      "content": "Hi ! What's the result of 95478415 / 4571 ?"
    }
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "calculator",
        "description": "Perform a basic calculation using ruby syntax for arithmetic operations.",
        "parameters": {
          "type": "object",
          "properties": {
            "calculation": {
              "type": "string",
              "description": "A basic arithmetic calculation in python language (e.g., \"2+2\", \"10*3\", \"45/9\").",
              "required": ["calculation"]
            }
          },
          "required": ["calculation"]
        }
      }
    }
  ]
}
```

Still have this error:
Here are some logs:

```
=== model_output ===
[TOOL_CALLS]calculator{"calculation": "95478415 / 4571"}
=== tool_content ===
calculator{"calculation": "95478415 / 4571"}
```

Please note that this issue is NOT happening when using
Yes, you're both right! Thanks for finding this!
Any update on getting this merged?
cc @aarnphm
So I did more complete testing and found this wasn't working that well after all -- I was getting the same errors reported above. Not sure what happened on initial testing. But I've since taken it and have a working implementation, for streaming at least, at https://github.com/sjuxax/vllm/tree/Mistral3.2-tool-call-fix. I'm going to cherry-pick it onto #20471 in a sec. Then using that branch should work with quantized HF models and tool calling.
I think replacing lines 127:139 with this below will fix it for non-streaming:

```python
# First, split on the tool call token, and discard the first item,
# because it is empty
raw_tool_calls = model_output.split(self.bot_token)[1:]
function_call_arr = []
for raw_tool_call in raw_tool_calls:
    tool_name = raw_tool_call.split("{")[0]
    tool_arguments_begin = raw_tool_call.find("{")
    tool_arguments = raw_tool_call[tool_arguments_begin:]
    function_call_arr.append({
        "name": tool_name,
        "arguments": json.loads(tool_arguments)
    })
```
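As a quick sanity check, the suggestion can be exercised standalone (with `json` imported, `self.bot_token` replaced by a local constant, and the two-call output from this thread as input):

```python
import json

bot_token = "[TOOL_CALLS]"
model_output = '[TOOL_CALLS]add{"a": 4, "b": 5}[TOOL_CALLS]multiply{"a": 6, "b": 7}'

# Same logic as the suggestion above: split on the BOT token, then treat
# everything before the first "{" as the name and the rest as JSON arguments.
raw_tool_calls = model_output.split(bot_token)[1:]
function_call_arr = []
for raw_tool_call in raw_tool_calls:
    tool_name = raw_tool_call.split("{")[0]
    tool_arguments = raw_tool_call[raw_tool_call.find("{"):]
    function_call_arr.append({
        "name": tool_name,
        "arguments": json.loads(tool_arguments)
    })

print(function_call_arr)
# [{'name': 'add', 'arguments': {'a': 4, 'b': 5}},
#  {'name': 'multiply', 'arguments': {'a': 6, 'b': 7}}]
```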
… in the model output Signed-off-by: avigny <47987522+avigny@users.noreply.github.com>
```python
assert tools is not None
assistant_msg = AssistantMessage(tool_calls=[
    ToolCall(function=FunctionCall(
        name=name,
        arguments=arg,
    )) for (name, arg) in tools
], )
request = InstructRequest(messages=[assistant_msg], )
all_token_ids = mistral_tokenizer.instruct.encode_instruct(
    request).tokens
```
I did not find another way to get a good tokenization of my model output. I need the BOT token `[TOOL_CALLS]` to be a single token, not split into multiple tokens (`[`, `TOOL`, `_`, ... for example).
```python
# (
#     '''[TOOL_CALLS]add{"a": 3.5, "b": 4}[TOOL_CALLS]multiply{"a": 3, "b": 6}''',  # noqa: E501
#     [
#         ToolCall(function=FunctionCall(name="add",
#                                        arguments=json.dumps({
#                                            "a": 3.5,
#                                            "b": 4
#                                        }))),
#         ToolCall(function=FunctionCall(name="multiply",
#                                        arguments=json.dumps({
#                                            "a": 3,
#                                            "b": 6
#                                        })))
#     ],
#     None)  # Was already broken
```
This is broken in the current released version of vLLM: Mistral tool calls with a v11 tokenizer are broken if multiple tool calls are generated in the same output.
With an `add` and a `multiply` tool, I got `mistralai/Mistral-Small-3.2-24B-Instruct-2506` to generate the following output: `[TOOL_CALLS]add{"a": 4, "b": 5}[TOOL_CALLS]multiply{"a": 6, "b": 7}` and got the following errors (the line numbers are off by 1 because I added a `print` statement):
```
ERROR 07-08 17:10:24 [mistral_tool_parser.py:190] Error in extracting tool call from response.
ERROR 07-08 17:10:24 [mistral_tool_parser.py:190] Traceback (most recent call last):
ERROR 07-08 17:10:24 [mistral_tool_parser.py:190]   File "/home/avigny/vllm-venv/lib/python3.12/site-packages/vllm/entrypoints/openai/tool_parsers/mistral_tool_parser.py", line 158, in extract_tool_calls
ERROR 07-08 17:10:24 [mistral_tool_parser.py:190]     "arguments": json.loads(args)
ERROR 07-08 17:10:24 [mistral_tool_parser.py:190]                  ^^^^^^^^^^^^^^^^
ERROR 07-08 17:10:24 [mistral_tool_parser.py:190]   File "/home/avigny/.pyenv/versions/3.12.11/lib/python3.12/json/__init__.py", line 346, in loads
ERROR 07-08 17:10:24 [mistral_tool_parser.py:190]     return _default_decoder.decode(s)
ERROR 07-08 17:10:24 [mistral_tool_parser.py:190]            ^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 07-08 17:10:24 [mistral_tool_parser.py:190]   File "/home/avigny/.pyenv/versions/3.12.11/lib/python3.12/json/decoder.py", line 341, in decode
ERROR 07-08 17:10:24 [mistral_tool_parser.py:190]     raise JSONDecodeError("Extra data", s, end)
ERROR 07-08 17:10:24 [mistral_tool_parser.py:190] json.decoder.JSONDecodeError: Extra data: line 1 column 17 (char 16)
ERROR 07-08 17:10:24 [mistral_tool_parser.py:190]
ERROR 07-08 17:10:24 [mistral_tool_parser.py:190] During handling of the above exception, another exception occurred:
ERROR 07-08 17:10:24 [mistral_tool_parser.py:190]
ERROR 07-08 17:10:24 [mistral_tool_parser.py:190] Traceback (most recent call last):
ERROR 07-08 17:10:24 [mistral_tool_parser.py:190]   File "/home/avigny/vllm-venv/lib/python3.12/site-packages/vllm/entrypoints/openai/tool_parsers/mistral_tool_parser.py", line 167, in extract_tool_calls
ERROR 07-08 17:10:24 [mistral_tool_parser.py:190]     raw_tool_call = self.tool_call_regex.findall(tool_content)[0]
ERROR 07-08 17:10:24 [mistral_tool_parser.py:190]                     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^
ERROR 07-08 17:10:24 [mistral_tool_parser.py:190] IndexError: list index out of range
```
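For context, the `Extra data` error above is simply `json.loads` refusing trailing bytes after the first JSON object. A small standalone repro of that failure mode:

```python
import json

# The first object ends at char 16, but loads() demands the whole string be
# one JSON value -- hence "Extra data: line 1 column 17 (char 16)".
payload = '{"a": 4, "b": 5}[TOOL_CALLS]multiply{"a": 6, "b": 7}'
try:
    json.loads(payload)
except json.JSONDecodeError as e:
    print(e)

# raw_decode (which the new parser builds on) instead reports where the
# first value ended, so the remainder can be parsed as the next call.
obj, end = json.JSONDecoder().raw_decode(payload)
print(obj, end)  # {'a': 4, 'b': 5} 16
```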
I've not looked into this yet
should be repaired by 138cef3
```python
    return DeltaMessage(content=delta_text)

# if the tool call token ID IS in the tokens generated so far, that
# means we're parsing as tool calls now
if _is_fn_name_regex_support(self.model_tokenizer):
```
Naming of this function could be modified to be more readable, as the streaming function does not use this regex. Basically, it is used here to decide whether the tool calls should be parsed with:

- the old parser (tool call like `[TOOL_CALLS][{"name": "add", "arguments": {"a": 3, "b": 4}}]`), or
- the new parser (tool call like `[TOOL_CALLS]add{"a": "3", "b": "4"}`)

(see the sketch after this list)
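To illustrate the two shapes, a hypothetical text-sniffing heuristic (`uses_fn_name_format` is made up for this example; the parser actually selects the strategy from the tokenizer version, not from the output text):

```python
def uses_fn_name_format(model_output: str, bot_token: str = "[TOOL_CALLS]") -> bool:
    """Old format: a JSON array right after the BOT token.
    New (v11) format: a bare function name right after the BOT token."""
    content = model_output.split(bot_token, 1)[-1].lstrip()
    return not content.startswith("[")

old_style = '[TOOL_CALLS][{"name": "add", "arguments": {"a": 3, "b": 4}}]'
new_style = '[TOOL_CALLS]add{"a": "3", "b": "4"}'
print(uses_fn_name_format(old_style), uses_fn_name_format(new_style))  # False True
```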
Purpose
Fixes #13622
Fixes #17585
Probably fixes #20028
This PR is similar to #16096 (hermes tool parser)
Future improvements
- Use `ijson` (used in [Frontend] [Bugfix] Refactor tool parsers and simplify the tool parsing interface. #16096) so that we don't try to parse from scratch at every new delta received.
- Split the `extract_tool_calls_streaming` function into smaller pieces.
Test Plan
I've added a test file `tests/tool_use/test_mistral_tool_parser.py` for easy and fast testing. This file works like the existing `tests/tool_use/test_jamba_tool_parser.py`.
Use `pytest tests/tool_use/test_mistral_tool_parser.py` to run this test file.
Test Result
The following results are from the added tests in `tests/tool_use/test_mistral_tool_parser.py`.
Before the fix:
After the fix
(Optional) Documentation Update
I believe no documentation update is needed.