Function calling support for Kimi-K2 #628

Merged
merged 15 commits into ikawrakow:main on Jul 23, 2025

Conversation

iSevenDays
Contributor

@iSevenDays iSevenDays commented Jul 18, 2025


The implementation adds support for tool calls.

The reason I think this feature is important is that it allows users of ik_llama.cpp to use this backend with apps like Claude Code that require tool calls.

By using a simple proxy like https://github.com/1rgs/claude-code-proxy (I just found it on GitHub), I could connect Claude Code to ik_llama.cpp using the Kimi-K2 Q2 LLM provided by ubergarm.
In claude-code-proxy you just have to set OPENAI_API_BASE="http://192.168.0.24:8080/v1" in .env.


I had to port the llama.cpp function/tool call support. The most difficult parts were porting the streaming and the JSON healing.
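
To give a sense of what the JSON healing deals with, here is a rough standalone sketch (my own illustrative helper, not the ported llama.cpp code): a streamed tool call often arrives as a truncated JSON fragment, which has to be closed up before it can be parsed.

```cpp
// Illustrative sketch only: "heal" a truncated JSON fragment from a streamed
// tool call by closing any open string, object, or array so it parses.
#include <iostream>
#include <string>
#include <vector>

static std::string heal_truncated_json(const std::string & partial) {
    std::vector<char> closers;   // stack of the closing brackets we still owe
    bool in_string = false;
    bool escaped   = false;
    for (const char c : partial) {
        if (in_string) {
            if (escaped)        { escaped = false; }
            else if (c == '\\') { escaped = true; }
            else if (c == '"')  { in_string = false; }
            continue;
        }
        if      (c == '"') { in_string = true; }
        else if (c == '{') { closers.push_back('}'); }
        else if (c == '[') { closers.push_back(']'); }
        else if (c == '}' || c == ']') { if (!closers.empty()) closers.pop_back(); }
    }
    std::string healed = partial;
    if (in_string) { healed += '"'; }   // close an unterminated string
    while (!closers.empty()) {          // close unbalanced objects/arrays
        healed += closers.back();
        closers.pop_back();
    }
    return healed;
}

int main() {
    // A tool-call payload cut off mid-stream:
    std::cout << heal_truncated_json(R"({"name": "list_files", "arguments": {"path": "/tmp)") << "\n";
    // -> {"name": "list_files", "arguments": {"path": "/tmp"}}
}
```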


Owner

@ikawrakow ikawrakow left a comment


Thank you for this! People have been asking for function calling support, but that is not something I'm very familiar with.

LGTM, but I would appreciate at least one other person testing.

I see your location is Leipzig. Have fond memories of this place, having spent 11 years there studying physics, doing a PhD, and staying for my first postdoc position.

@iSevenDays
Contributor Author

> LGTM, but I would appreciate at least one other person testing.

Thanks! I've done the basic tests, but the model loads too slowly from my HDD, so I will test different use cases over the weekend.
I could make it work for the first request, but it seems that multiple requests don't currently work, or Kimi-K2 requires different prompting. I'll debug this more over the weekend and update the PR.

> I see your location is Leipzig. Have fond memories of this place, having spent 11 years there studying physics, doing a PhD, and staying for my first postdoc position.

I live in a beautiful city, thanks! I've been living here for 3 years and have absolutely no regrets!

@iSevenDays iSevenDays changed the title Function calling support for Kimi-K2 [Draft] Function calling support for Kimi-K2 Jul 18, 2025
@ubergarm
Contributor

> I could make it work for the first request, but it seems that multiple requests don't currently work, or Kimi-K2 requires different prompting. I'll debug this more over the weekend and update the PR.

Oh hej, this is exciting! I believe we have an open PR for this, #407 (comment), where some folks were trying to use a reverse proxy / wrapper to handle it, similar to claude-code-proxy perhaps.

I don't use tool calling myself, but I did notice when adding the Kimi-K2-Instruct PR that I left out one section of the chat template for the "role": "tool" on the chat endpoint: ggml-org/llama.cpp#14654 (comment)

So if the client expects llama-server to apply the template internally, that "role": "tool" section might not be applied. But if you're using the text completions endpoint and applying your own template, it might not matter.

@sousekd

sousekd commented Jul 18, 2025

@iSevenDays This seems relevant:

> We've just fixed 2 bugs in Kimi-K2-Instruct huggingface repo. Please update the following files to apply the fix:
>
>   • tokenizer_config.json: update chat-template so that it works for multi-turn tool calls.
>   • tokenization_kimi.py: update encode method to enable encoding special tokens.

https://x.com/Kimi_Moonshot/status/1945050874067476962

@mtcl

mtcl commented Jul 19, 2025

This is very exciting! I would much rather use native function calling!

@iSevenDays
Contributor Author

iSevenDays commented Jul 19, 2025

I took a look at how llama.cpp implements tool calling support, and the task is much more complicated than I thought, especially the streaming part.
I'll keep you updated.

@mtcl

mtcl commented Jul 19, 2025

> I took a look at how llama.cpp implements tool calling support, and the task is much more complicated than I thought, especially the streaming part.
> I'll keep you updated.

That would be really amazing! ik_llama + tool calling will be a dream come true for me!

- Add new chat.h/chat.cpp and chat-parser.h/chat-parser.cpp for better chat handling
- Improve function calls parsing with fallback to llama.cpp builder pattern
- Add string utility functions (starts_with, ends_with, find_partial_stop)
- Update README with function calls testing instructions
- Enhance Kimi K2 parser and function calls documentation
- Add comprehensive test suite for function calls
- Update CMakeLists.txt and Makefile for new components
- Fix streaming content cleanup to prevent function syntax in output
- Unify content extraction patterns with llama.cpp approach
- Improve Kimi K2 parser robustness and partial content handling
- Add comprehensive test coverage for function call scenarios
- Optimize chat message parsing and diff computation
- Add compile-time constants for all token format markers
- Add compile-time constants for XML format markers
- Add compile-time constants for simple format patterns
- Replace all hardcoded string literals with named constants
- Use compile-time length calculation to avoid manual counting (see the marker-constant sketch after this list)
- Improve maintainability and reduce magic numbers throughout parser
- Remove duplicate implementation from chat-parser.cpp
- Keep single implementation in chat.cpp following llama.cpp patterns
- Resolves linker error: multiple definition of common_chat_parse
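
To illustrate the compile-time format-marker constants mentioned above (the marker names below are placeholders, not necessarily the ones used by the Kimi K2 parser):

```cpp
// Illustrative sketch only: named, compile-time marker constants whose lengths
// are computed at compile time instead of being hand-counted magic numbers.
#include <cstddef>
#include <iostream>
#include <string_view>

namespace markers {
    // placeholder marker names for illustration
    inline constexpr std::string_view TOOL_CALL_OPEN  = "<tool_call>";
    inline constexpr std::string_view TOOL_CALL_CLOSE = "</tool_call>";

    // string_view::size() is constexpr, so no manual length counting
    inline constexpr std::size_t TOOL_CALL_OPEN_LEN  = TOOL_CALL_OPEN.size();
    inline constexpr std::size_t TOOL_CALL_CLOSE_LEN = TOOL_CALL_CLOSE.size();
}

// Example use: does the generated text start with the opening marker?
static bool starts_with_tool_call(std::string_view text) {
    return text.substr(0, markers::TOOL_CALL_OPEN_LEN) == markers::TOOL_CALL_OPEN;
}

int main() {
    std::cout << starts_with_tool_call(R"(<tool_call>{"name": "ls"}</tool_call>)") << "\n"; // 1
}
```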
@iSevenDays
Contributor Author

> I had to port the llama.cpp function/tool call support.

Here is a branch of Claude Proxy that you can use with ik_llama.cpp and Claude Code.

Steps to test this PR:

1. Clone https://github.com/iSevenDays/claude-code-proxy
2. Run the proxy:
   uv run uvicorn server:app --host 0.0.0.0 --port 8082
3. Open .env inside claude-code-proxy and set:
   OPENAI_API_BASE="http://192.168.0.24:8080/v1"
   PREFERRED_PROVIDER="openai"
   BIG_MODEL="Kimi-K2"
   SMALL_MODEL="Kimi-K2"
4. The model name is important, so set it to kimi-k2 to enable tool parsing from ik_llama.cpp.
5. Test with Claude Code:
   ANTHROPIC_BASE_URL=http://localhost:8082 claude "list files"

I'm doing more tests in the meantime.

- Add proper validation that 'function' field is an object before accessing nested keys
- Handle missing 'arguments' field gracefully with default "{}"
- Prevents crash when parsing malformed tool call JSON structures
- Implement Qwen3 XML parser with <tool_call>{"name": "func", "arguments": {...}}</tool_call> format
- Add model detection and routing for Qwen3 vs Kimi-K2 formats
- Create 8 comprehensive unit tests covering parsing, streaming, error handling
- Fix token format cleaning bug in kimi_k2_parser.hpp processing order
- Remove progressive parsing code and related utilities
- Add tool injection support for Qwen3 format in server utils
@iSevenDays
Contributor Author

I added Qwen3 tool calling support.
From my tests, Kimi-K2 uses tools better, while Qwen3 fails to use tools with Claude Code.
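
For reference, the Qwen3 format wraps a JSON object in <tool_call>...</tool_call> tags. A minimal extraction sketch (a hypothetical helper for illustration; the actual parser in this PR also handles streaming and malformed JSON) could look like this:

```cpp
// Illustrative sketch only: pull every JSON payload out of
// <tool_call>...</tool_call> blocks in the generated text.
#include <iostream>
#include <string>
#include <vector>

static std::vector<std::string> extract_qwen3_tool_calls(const std::string & text) {
    static const std::string open_tag  = "<tool_call>";
    static const std::string close_tag = "</tool_call>";
    std::vector<std::string> calls;
    size_t pos = 0;
    while ((pos = text.find(open_tag, pos)) != std::string::npos) {
        const size_t start = pos + open_tag.size();
        const size_t end   = text.find(close_tag, start);
        if (end == std::string::npos) { break; }   // unterminated block (still streaming)
        calls.push_back(text.substr(start, end - start));
        pos = end + close_tag.size();
    }
    return calls;
}

int main() {
    const std::string out =
        "Let me check.\n<tool_call>{\"name\": \"list_files\", \"arguments\": {\"path\": \".\"}}</tool_call>";
    for (const auto & call : extract_qwen3_tool_calls(out)) {
        std::cout << call << "\n";   // each entry is a JSON object with "name" and "arguments"
    }
}
```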

@iSevenDays
Contributor Author

@ikawrakow I have backported tool calling support. I'm not sure I can make the PR smaller, because the feature in llama.cpp is quite complicated.
I'd be glad if somebody could also do real-world tests.

I suggest using the Kimi-K2 model with Claude Code, following these steps: #628 (comment)

It seems to work fine; at least it can call tools when I explicitly ask for it.

@ikawrakow
Owner

I think there was a lot of interest in this, so hopefully we will have a few people testing the PR. Hopefully today, so I can merge it before going on vacation tomorrow.

@iSevenDays iSevenDays changed the title [Draft] Function calling support for Kimi-K2 Function calling support for Kimi-K2 Jul 23, 2025
@iSevenDays
Contributor Author

@ikawrakow I'll be happy to work on your requests for this PR to get it merged.
I followed the strategy of porting llama.cpp as closely as possible.

@xldistance

Looking forward to Qwen3's tool calling.

- Implement complete DeepSeek R1 tool call parsing in common_chat_parser.cpp
- Add DeepSeek R1 model detection and tool injection in deepseek_r1_tools.hpp
- Update function_calls.hpp with DeepSeek R1 integration and content extraction
- Update documentation to reflect support for Kimi-K2, Qwen3, and DeepSeek R1 models
- Add comprehensive unit tests for DeepSeek R1 reasoning, tool calls, and integration
- Port exact implementation patterns from original llama.cpp for compatibility

Key features:
- Native DeepSeek R1 format: <|tool▁calls▁begin|>function<|tool▁sep|>name```json{}```<|tool▁call▁end|><|tool▁calls▁end|>
- Reasoning content extraction from <think>...</think> tags (see the sketch after this list)
- Multiple tool calls support with separate call blocks
- Model detection for deepseek-r1, deepseek_r1 naming patterns
- Integration with incremental parsing and streaming support
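
A minimal sketch of the <think>...</think> reasoning split listed above (a hypothetical helper for illustration; the PR's implementation also handles the DeepSeek R1 tool-call tokens and incremental streaming):

```cpp
// Illustrative sketch only: split a completion into reasoning (from a
// <think>...</think> block) and regular content.
#include <iostream>
#include <string>

struct parsed_output {
    std::string reasoning;
    std::string content;
};

static parsed_output split_reasoning(const std::string & text) {
    static const std::string open_tag  = "<think>";
    static const std::string close_tag = "</think>";
    parsed_output out;
    const size_t start = text.find(open_tag);
    const size_t end   = text.find(close_tag);
    if (start == std::string::npos || end == std::string::npos || end < start) {
        out.content = text;   // no complete reasoning block, leave the text as content
        return out;
    }
    const size_t body = start + open_tag.size();
    out.reasoning = text.substr(body, end - body);
    out.content   = text.substr(0, start) + text.substr(end + close_tag.size());
    return out;
}

int main() {
    const auto r = split_reasoning("<think>Check the directory first.</think>I'll list the files.");
    std::cout << "reasoning: " << r.reasoning << "\n";  // Check the directory first.
    std::cout << "content:   " << r.content   << "\n";  // I'll list the files.
}
```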
@iSevenDays
Contributor Author

I have added DeepSeek-R1 tool calling support.
The following LLM works just fine. It often takes 2 iterations to do the tool call, but Claude Code handles that automatically.

numactl --interleave=all ./build/bin/llama-server \
                         --alias DeepSeek-R1T2 \
                         --model /root/models/DeepSeek-TNG-R1T2-Chimera-GGUF/IQ3_KS/IQ3_KS/DeepSeek-TNG-R1T2-Chimera-IQ3_KS-00001-of-00007.gguf \
                         -rtr \
                         --ctx-size 102400 \
                         -ctk q8_0 \
                         -mla 3 -fa \
                         -amb 512 \
                         -fmoe \
                         --temp 0.6 \
                         --top_p 0.95 \
                         --n-gpu-layers 63 \
                         --override-tensor "blk\.([0-5])\.ffn_.*=CUDA0,exps=CPU" \
                         --parallel 1 \
                         --threads 16 \
                         --host 0.0.0.0 \
                         --port 8080 \
                         --min_p 0.01 \
                         --numa distribute \
                         --threads-batch 32 \
                         --no-mmap \
                         -b 8192 -ub 8192

@xldistance

@iSevenDays The files json-partial.h, json-partial.cpp, regex-partial.h, and regex-partial.cpp are missing.

- json-partial.h/cpp: JSON partial parsing functionality
- regex-partial.h/cpp: Regex partial parsing functionality
@iSevenDays
Contributor Author

@xldistance Thanks for the feedback, the files are there and compile successfully.

For those who are testing with Claude Code, here are my suggestions:
- Kimi-K2 works best: it is very fast and uses tools.
- DeepSeek-TNG-R1T2-Chimera works, but it times out too often on my Dell R740 with a 48GB 4090D.
- Qwen3-235B-A22B-Instruct-2507-GGUF (pure-IQ4_KS from ubergarm) doesn't want to use tools.

@xldistance

@iSevenDays I use qwen3-coder-480b on top of ccr code

@iSevenDays
Contributor Author

@xldistance just make sure to set the correct LLM name in the .env and in llama-server.
I enabled name matching, e.g. the following names trigger additional tool-calling instructions in the system prompt so the model knows how to use tools properly. I ported the behavior from llama.cpp (llama.cpp uses a more complex system, btw).
The following names would work (a rough matching sketch follows the list):
Qwen3-235b
DeepSeek-R1
Kimi-K2
Kimi_K2
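
This is a rough sketch of that matching (assumed helper and enum names, not the exact code in this PR):

```cpp
// Illustrative sketch only: case-insensitive substring checks on the served
// model name decide which tool-call format to inject and parse.
#include <algorithm>
#include <cctype>
#include <iostream>
#include <string>

enum class tool_format { NONE, KIMI_K2, QWEN3, DEEPSEEK_R1 };

static std::string lowercase(std::string s) {
    std::transform(s.begin(), s.end(), s.begin(),
                   [](unsigned char c) { return static_cast<char>(std::tolower(c)); });
    return s;
}

static tool_format detect_tool_format(const std::string & model_name) {
    const std::string name = lowercase(model_name);
    if (name.find("kimi-k2") != std::string::npos ||
        name.find("kimi_k2") != std::string::npos)     { return tool_format::KIMI_K2; }
    if (name.find("qwen3") != std::string::npos)        { return tool_format::QWEN3; }
    if (name.find("deepseek-r1") != std::string::npos ||
        name.find("deepseek_r1") != std::string::npos)  { return tool_format::DEEPSEEK_R1; }
    return tool_format::NONE;
}

int main() {
    std::cout << (detect_tool_format("Qwen3-235b")  == tool_format::QWEN3)       << "\n"; // 1
    std::cout << (detect_tool_format("Kimi_K2")     == tool_format::KIMI_K2)     << "\n"; // 1
    std::cout << (detect_tool_format("DeepSeek-R1") == tool_format::DEEPSEEK_R1) << "\n"; // 1
}
```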

I'll check qwen3-coder-480b that was recently uploaded https://huggingface.co/ubergarm/Qwen3-Coder-480B-A35B-Instruct-GGUF/tree/main/IQ2_KS

- Add test_qwen3_format_chat_integration() to validate tool injection pipeline
- Test tool injection conditions and system message enhancement
- Verify JSON formatting and anti-preamble instructions
- Add comprehensive test documentation

Tests confirm tool injection works correctly - conversational preamble
issue is not in ik_llama.cpp but likely in UI configuration.

Server was not passing model name to parse_chat_message_incremental(),
causing Qwen3 to fall back to Kimi-K2 parser and return tool calls
as content instead of proper tool_calls array.

Non-streaming responses were hardcoded to use Kimi-K2 format,
causing Qwen3 XML tool calls to be returned as content instead
of proper tool_calls array. Now uses same model detection as
streaming path for consistency.
@ikawrakow
Owner

Well, I'll just merge it then.

@ikawrakow ikawrakow merged commit 3701fb1 into ikawrakow:main Jul 23, 2025