llama-server why does the legacy completion response use .content vs .choices[0].text #9219
Unanswered · codefromthecrypt asked this question in Q&A · Replies: 0 comments
Hi, I am trying to understand which version of the /v1/completions endpoint llama-server is emulating, and I can't seem to find it. When I make a completion request, the content is returned in a top-level field `.content`, whereas, as far as I can tell, OpenAI wants it as `.choices[0].text`. Any insight on this?

from llama-server

from ollama
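To illustrate the difference being asked about, here is a minimal sketch of the two response shapes. The payloads below are invented for illustration (not captured from a real server), and the helper `extract_text` is a hypothetical name, assuming the two shapes are exactly as described in the question: a top-level `.content` field versus the OpenAI-style `.choices[0].text`.

```python
# Sketch: normalize the two completion-response shapes described above.
# Example payloads are illustrative only, not real server output.

def extract_text(resp: dict) -> str:
    """Return the completion text from either response shape."""
    # llama-server style described in the question: top-level .content
    if "content" in resp:
        return resp["content"]
    # OpenAI-style: .choices[0].text
    return resp["choices"][0]["text"]

llama_style = {"content": "Hello from llama-server"}
openai_style = {"choices": [{"text": "Hello from an OpenAI-style API"}]}

print(extract_text(llama_style))   # Hello from llama-server
print(extract_text(openai_style))  # Hello from an OpenAI-style API
```

A client that must talk to both servers could use a shim like this until the discrepancy is explained or resolved upstream.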