Support vLLM reasoning API #726
-
I presume you mean …
This is expected: gptel doesn't know a priori whether a model supports reasoning, so it always checks.
Could you share the complete response?
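(For reference: what gptel does with reasoning blocks once it finds them is controlled by a user option. A minimal sketch, assuming a recent gptel with reasoning support:)

```emacs-lisp
;; Sketch, assuming a recent gptel with reasoning support:
;; `gptel-include-reasoning' controls what happens to reasoning blocks.
(setq gptel-include-reasoning t)          ; include reasoning in the response
;; (setq gptel-include-reasoning nil)     ; drop reasoning blocks entirely
;; (setq gptel-include-reasoning 'ignore) ; parse them but don't insert them
;; (setq gptel-include-reasoning "*reasoning*") ; redirect to this buffer
```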
-
Sorry, I forgot what the problem was here. IIUC, the sample output looks fine for DeepSeek, but there's no actual response content? I only see `reasoning_content` blocks.
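(For context, in the OpenAI-compatible streaming format that DeepSeek-style models use, a reasoning-only chunk carries a `reasoning_content` field in the delta while `content` stays null. A hypothetical helper, not gptel's actual parser, to tell the two apart once a delta has been JSON-decoded into a plist:)

```emacs-lisp
;; Hypothetical helper (not gptel's parser): classify a decoded delta.
(defun my/delta-kind (delta)
  "Return `reasoning', `content', or nil for a decoded DELTA plist."
  (cond ((plist-get delta :reasoning_content) 'reasoning)
        ((plist-get delta :content) 'content)))

(my/delta-kind '(:reasoning_content "Let me think..." :content nil))
;; => reasoning  -- every chunk in the sample looks like this
(my/delta-kind '(:content "Hello!"))
;; => content    -- none of these appear, hence no visible response
```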
-
Cannot recall, and now I am basically on … Thanks @karthink
-
Hello @karthink - picking up from #613 (comment).
I tried the `gptel-make-openai` backend and I still do not see the reasoning content. Didn't debug very thoroughly, but when I receive a response I dump the `info` structure here and see the following in all responses (note that `stream` is on):

The `:reasoning-done` is something I did not see in the `vllm` responses: this is a sample I got with `curl` (just kept the last few):

Where can I go from here?
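(In case it helps anyone reading along, a minimal sketch of a vLLM backend definition via `gptel-make-openai`; the host, port, key, and model name below are placeholders, not taken from this thread:)

```emacs-lisp
;; Minimal sketch: an OpenAI-compatible gptel backend pointed at a
;; local vLLM server.  Host, port, key, and model are placeholders.
(gptel-make-openai "vLLM"
  :host "localhost:8000"
  :protocol "http"
  :endpoint "/v1/chat/completions"
  :stream t
  :key "EMPTY"               ; vLLM ignores the key by default
  :models '(deepseek-r1))
```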