Sampling special tokens, not just <|im_end|>?
#9886
Replies: 1 comment
- Both llama-cli and llama-server can sample special tokens.
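For reference, a minimal sketch (the model path and prompt are placeholders, not from this thread): llama-cli has a `-sp` / `--special` flag that enables printing of special tokens in its output, and as far as I can tell the same flag is accepted by llama-server.

```sh
# Print special tokens such as <|im_end|> and <tool_call> in llama-cli output.
# The model file name is a placeholder; substitute your own GGUF file.
./llama-cli -m qwen2.5-7b-instruct-q4_k_m.gguf --special \
    -p "You have a weather tool available. Use it."

# The same flag when starting the server (assuming it applies there as well).
./llama-server -m qwen2.5-7b-instruct-q4_k_m.gguf --special --port 8080
```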
- Qwen 2.5 has the special tokens <tool_call> and </tool_call>, but I do not know whether the model simply isn't trained to generate these tokens, or whether llama.cpp refuses to generate any special token other than <|im_end|>. How can I allow llama.cpp to output the <tool_call> token, assuming the model is trained to emit it? I am mostly interested in the server part.
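A hedged sketch of how one might check this on the server side (the port, prompt, and ChatML framing are assumptions; /completion is the server's native endpoint, which passes the prompt through without applying a chat template):

```sh
# Query a running llama-server instance via its native /completion endpoint
# and check whether <tool_call> appears in the generated text.
# Port 8080 and the prompt are placeholders.
curl -s http://localhost:8080/completion \
    -H "Content-Type: application/json" \
    -d '{
          "prompt": "<|im_start|>user\nWhat is the weather in Paris?<|im_end|>\n<|im_start|>assistant\n",
          "n_predict": 128
        }' | grep -o '<tool_call>'
```

If the model does sample the token but the string never shows up, the issue is likely rendering of special tokens during detokenization rather than sampling itself.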