Providing message role with llama.cpp #9986
Musa-Azeem asked this question in Q&A · Unanswered · 0 replies
Is there any way to provide the role of a message when using llama.cpp's llama-cli? I have lists of messages in OpenAI chat-completion style (a list of {role: '', content: ''} objects). I can generate a response by running llama-server and sending a request to the v1/chat/completions route, but I want to know whether this is possible without keeping a server running and holding GPU memory. I will have multiple users, each with their own list of prompts.
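
For context, here is a minimal sketch of the kind of workaround I'm imagining: apply a chat template to the message list myself and hand the flattened prompt to llama-cli with -p. It assumes a ChatML-style model and a placeholder model path; flag names and the correct template depend on the model and the llama.cpp version, so please treat it as an illustration rather than a known-good recipe.

```python
# Sketch only: flatten OpenAI-style messages into a ChatML prompt and pass
# it to llama-cli via -p. The template must match the one the model was
# trained with; ChatML is just an example here.
import subprocess

def to_chatml(messages):
    """Render [{'role': ..., 'content': ...}, ...] as a ChatML prompt."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages]
    parts.append("<|im_start|>assistant\n")  # cue the model to respond
    return "".join(parts)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize llama.cpp in one sentence."},
]

# -m, -p, and -n are standard llama-cli options; the model path below is a
# placeholder. Depending on the llama.cpp version, you may also need to
# disable conversation/interactive mode so the raw prompt is used as-is.
subprocess.run([
    "./llama-cli",
    "-m", "models/your-model.gguf",  # placeholder path
    "-p", to_chatml(messages),
    "-n", "256",
])
```

The intent is just to reproduce what the server's v1/chat/completions endpoint does (apply the model's chat template to the role/content pairs) before running the CLI, but I'd like to know if there is a supported way to do this directly.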