-
I think I figured it out; maybe someone who knows could confirm that this is actually a correct solution:

```elisp
(gptel-make-ollama "Ollama"      ; any name of your choosing
  :host "localhost:11434"        ; where it's running
  :stream t                      ; stream responses
  :models '((qwen3:30b
             :request-params (:options (:temperature 0.6
                                        :top_p 0.95
                                        :top_k 20
                                        :min_p 0
                                        :num_ctx 131072)))))
```

This makes the JSON request look correct:

```json
{
  "model": "qwen3:30b",
  "messages": [
    ...
  ],
  "stream": true,
  "options": {
    "temperature": 0.6,
    "top_p": 0.95,
    "top_k": 20,
    "min_p": 0,
    "num_ctx": 131072
  }
}
```
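If you want to sanity-check the payload outside Emacs, here is a minimal Python sketch (my own illustration, not part of gptel) that builds the same request body. Ollama reads sampling and context settings from the top-level `"options"` object, which is what the `:request-params (:options ...)` plist above maps onto:

```python
import json

# Mirrors the :options plist from the gptel-make-ollama config above.
options = {
    "temperature": 0.6,
    "top_p": 0.95,
    "top_k": 20,
    "min_p": 0,
    "num_ctx": 131072,  # context length
}

# Shape of the body POSTed to Ollama's /api/chat endpoint.
payload = {
    "model": "qwen3:30b",
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": True,
    "options": options,
}

print(json.dumps(payload, indent=2))
```

You could POST this to `http://localhost:11434/api/chat` with any HTTP client to confirm the server accepts the options.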
-
How can I set model options for Ollama, especially things like context length?