[Feature] Add command tool parser for Command-A model #20800


Open · wants to merge 3 commits into main

Conversation

@gjgjos commented Jul 11, 2025

Purpose

This PR adds a new tool parser module named CommandToolParser to support the command tool calling format used by the [CohereLabs/c4ai-command-a-03-2025](https://huggingface.co/CohereLabs/c4ai-command-a-03-2025) model.

The parser is designed to extract tool call information from model outputs that follow the <|START_ACTION|> ... <|END_ACTION|> format, parsing both synchronous and streaming responses. It leverages partial_json_parser to handle incomplete or malformed JSON in streaming scenarios gracefully.
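
For reference, a raw Command-A tool call looks roughly like the following. This is an illustrative example; the field names (tool_call_id, tool_name, parameters) follow Cohere's chat template for the model:

<|START_ACTION|>[
    {
        "tool_call_id": "0",
        "tool_name": "get_current_weather",
        "parameters": {"city": "Dallas", "state": "TX", "unit": "fahrenheit"}
    }
]<|END_ACTION|>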

Test Plan

  1. Serve the model using vLLM with the following configuration:

    vllm serve CohereLabs/c4ai-command-a-03-2025 \
        --enable-auto-tool-choice \
        --tool-call-parser command
  2. Use the OpenAI-compatible API interface to send tool-calling requests:

    from openai import OpenAI
    
    client = OpenAI(
        api_key="EMPTY",
        base_url="http://localhost:8000/v1",
    )
    
    model = client.models.list().data[0].id
    
    tools = [{
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "e.g. 'San Francisco'"},
                    "state": {"type": "string", "description": "e.g. 'CA'"},
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
                },
                "required": ["city", "state", "unit"]
            }
        }
    }]
    
    messages = [
        {"role": "user", "content": "Can you tell me what the temperature will be in Dallas, in fahrenheit?"}
    ]
    
    response = client.chat.completions.create(
        model=model,
        messages=messages,
        tools=tools,
        temperature=0.3,
        stream=False
    )
    
    print(response)
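
The parser also supports streaming extraction. A quick way to exercise that path is to re-send the same request with stream=True and watch the incremental tool-call deltas (a minimal sketch reusing the client, tools, and messages defined above):

    stream = client.chat.completions.create(
        model=model,
        messages=messages,
        tools=tools,
        temperature=0.3,
        stream=True,
    )

    for chunk in stream:
        delta = chunk.choices[0].delta
        if delta.tool_calls:
            # Each delta carries a fragment of the call (the name first,
            # then argument chunks), keyed by the tool call's index.
            print(delta.tool_calls)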

Test Result

The model successfully produced a tool call:

{
  "tool_calls": [
    {
      "id": "chatcmpl-tool-...",
      "type": "function",
      "function": {
        "name": "get_current_weather",
        "arguments": "{\"city\": \"Dallas\", \"state\": \"TX\", \"unit\": \"fahrenheit\"}"
      }
    }
  ]
}

Accompanied by a reasoning message:

<|START_THINKING|>I will use the 'get_current_weather' tool to find out the temperature in Dallas, Texas.<|END_THINKING|>

This confirms that the command parser correctly handles tool extraction and reasoning content from the model output.


(Optional) Documentation Update

  • Added inline documentation within the CommandToolParser class explaining the parsing logic.
  • If needed, future docs can describe --tool-call-parser command as a valid option for serving command family models.

gjgjos and others added 2 commits July 11, 2025 06:14
Co-authored-by: 김종곤 <rlawhdrhs27@gmail.com>
Signed-off-by: Doil Kim <gjgjos@naver.com>
@gjgjos gjgjos requested a review from aarnphm as a code owner July 11, 2025 06:19

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small and essential subset of tests to catch errors quickly. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀


@gemini-code-assist bot left a comment


Summary of Changes

Hello @gjgjos, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances vLLM's tool-calling capabilities by adding a dedicated parser for the command model family. This allows vLLM to correctly interpret and extract function calls from models that use Cohere's specific tool-calling format, ensuring seamless integration and functionality for both immediate and streaming model outputs.

Highlights

  • New Tool Parser: Introduces a new CommandToolParser specifically designed to handle the command tool calling format, as used by models like CohereLabs/c4ai-command-a-03-2025.
  • Tool Call Extraction Logic: Implements robust logic to extract tool call information from model outputs that adhere to the <|START_ACTION|> ... <|END_ACTION|> delimiters, parsing the JSON payload within.
  • Synchronous and Streaming Support: The parser supports both synchronous (batch) and streaming responses, utilizing partial_json_parser to gracefully handle incomplete or malformed JSON during real-time output generation.
  • Integration and Discoverability: The CommandToolParser is registered with the ToolParserManager, making it available for use via the --tool-call-parser command flag when serving models with vLLM.



@gemini-code-assist bot left a comment


Code Review

This pull request introduces a CommandToolParser for Cohere's command model tool-calling format, including logic for both synchronous and streaming responses. The review focuses on the correctness, robustness, and performance of the new parser, particularly logic bugs, performance bottlenecks, and maintainability issues in the streaming implementation.

Comment on lines +98 to +100
prev_start = previous_token_ids.count(self.tool_call_start_token_id)
cur_start = current_token_ids.count(self.tool_call_start_token_id)
cur_end = current_token_ids.count(self.tool_call_end_token_id)

high

The count() method is called on previous_token_ids and current_token_ids in every invocation of this streaming method. Since these lists can grow very large for long conversations, this O(N) operation on each streaming chunk can become a performance bottleneck. Consider maintaining the counts as part of the parser's state and updating them incrementally with each new delta_token_ids.
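
A minimal sketch of that suggestion (the _start_count/_end_count attributes are hypothetical new parser state, initialized to zero for each request):

    # Scan only the new tokens in this chunk: O(len(delta)) per step
    # instead of O(N) over the whole token sequence.
    prev_start = self._start_count
    self._start_count += delta_token_ids.count(self.tool_call_start_token_id)
    self._end_count += delta_token_ids.count(self.tool_call_end_token_id)
    cur_start = self._start_count
    cur_end = self._end_count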

Comment on lines +102 to +116
# Case 1: Block not started → Text as is
if cur_start == 0:
    return DeltaMessage(content=delta_text)

# Case 2: Starting a new block
if cur_start > prev_start:
    self.current_tool_id += 1
    return None

# Case 3: Inside block, not closed → ignored
if cur_start > cur_end:
    return None

# Case 4: Block End Point
if cur_start == cur_end and self.tool_call_end_token in delta_text:

high

The streaming logic has a flaw: when a single delta_text contains both the end of one action block and the start of another (e.g., ...<|END_ACTION|><|START_ACTION|>...), cur_start will be greater than cur_end, causing the check for a completed block at line 116 to be skipped. The if cur_start > prev_start: check at line 107 will be true, and the method will return None, effectively ignoring the completed block that just ended. The logic should be structured to handle a block ending before checking for a new block starting within the same chunk.
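
One possible restructure, sketched here with a hypothetical prev_end count and _emit_completed_block helper (neither exists in the current diff):

    # Emit the block that just closed before tracking the newly opened
    # one, so "...<|END_ACTION|><|START_ACTION|>..." is not dropped.
    delta = None
    if cur_end > prev_end:
        delta = self._emit_completed_block(current_text)
    if cur_start > prev_start:
        self.current_tool_id += 1
    return delta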


# Case 2: Starting a new block
if cur_start > prev_start:
    self.current_tool_id += 1

high

This increment, combined with the one in the parsing loop (line 147), causes non-contiguous tool call indices (e.g., 0, 1, 3, ...) when the model output contains multiple action blocks. Tool call indices for a single response must be contiguous, starting from 0. To fix this, current_tool_id should only be incremented inside the parsing loop (lines 132-148) for each parsed tool call.
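
A minimal sketch of that fix (payload and the call field names are illustrative, mirroring the action format rather than the exact diff):

    # Increment only as each call is parsed, yielding contiguous
    # indices 0, 1, 2, ... even across multiple action blocks.
    for call in json.loads(payload):
        self.current_tool_id += 1
        tool_calls.append(DeltaToolCall(
            index=self.current_tool_id,
            function=DeltaFunctionCall(
                name=call["tool_name"],
                arguments=json.dumps(call["parameters"]))))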


# Case 4: Block End Point
if cur_start == cur_end and self.tool_call_end_token in delta_text:
    full = current_text + delta_text

high

The current_text parameter should already contain the full accumulated text from the stream up to this point. Concatenating delta_text to it is redundant and will result in duplicated content, which will likely cause the payload extraction to fail. full should just be assigned the value of current_text.

Suggested change
-    full = current_text + delta_text
+    full = current_text

Comment on lines +32 to +35
self.prev_tool_call_arr: list[dict] = []
self.streamed_args_for_tool: list[str] = []
self.current_tool_id: int = -1
self.current_tool_name_sent: bool = False

medium

The state variables prev_tool_call_arr, streamed_args_for_tool, and current_tool_name_sent are re-declared here, but they are already inherited from the ToolParser base class and initialized in super().__init__(). These variables are also unused within this class. Remove these lines to avoid shadowing the parent class's attributes and improve code clarity.

        return ExtractedToolCallInformation(tools_called=True,
                                            tool_calls=tool_calls,
                                            content=prefix or None)
    except Exception:

medium

Catching a broad Exception can hide specific issues and make debugging harder. It's better to catch more specific exceptions that you expect to handle, such as json.JSONDecodeError or ValueError. This provides better error context and avoids accidentally catching unrelated exceptions.

Suggested change
-    except Exception:
+    except (json.JSONDecodeError, ValueError):


@gemini-code-assist bot left a comment


Code Review

This pull request introduces CommandToolParser to support Cohere's command model tool-calling format, covering both synchronous and streaming responses. One additional robustness issue with token parsing in the streaming path is noted below.

    return None

# Case 4: Block End Point
if cur_start == cur_end and self.tool_call_end_token in delta_text:

high

Checking for the end token as a substring (self.tool_call_end_token in delta_text) is not robust. If the tokenizer splits the end token string into multiple tokens, this check will fail, and the tool call block will not be processed correctly. A more reliable approach is to check for the presence of the end token's ID in the delta_token_ids list.

Suggested change
-    if cur_start == cur_end and self.tool_call_end_token in delta_text:
+    if cur_start == cur_end and self.tool_call_end_token_id in delta_token_ids:



@gjgjos (Author) commented Jul 14, 2025

@aarnphm
I’ve submitted the PR and would appreciate it if you could take a look when you have time.
Please let me know if there’s anything that needs to be adjusted. Thank you in advance!
