
Commit ca7e3e4

Clarify the Ollama options docs
1 parent 41a459a commit ca7e3e4

1 file changed: 25 additions, 26 deletions

spring-ai-docs/src/main/antora/modules/ROOT/pages/api/clients/ollama-chat.adoc

@@ -143,13 +143,10 @@ The prefix `spring.ai.ollama` is the property prefix to configure the connection
 | spring.ai.ollama.base-url | Base URL where Ollama API server is running. | `http://localhost:11434`
 |====
 
-NOTE: The list of options for chat is to be reviewed. This https://github.com/spring-projects/spring-ai/issues/230[issue] will track progress.
-
-NOTE: The `spring.ai.ollama.chat.options.*` properties are based on the https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values[Ollama Valid Parameters and Values] and https://github.com/jmorganca/ollama/blob/main/api/types.go[Ollama Types]
-
-
 The prefix `spring.ai.ollama.chat.options` is the property prefix that configures the `ChatClient` implementation for Ollama.
 
+NOTE: The listed properties are based on the https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values[Ollama Valid Parameters and Values] and https://github.com/jmorganca/ollama/blob/main/api/types.go[Ollama Types]. The default values are based on the https://github.com/ollama/ollama/blob/b538dc3858014f94b099730a592751a5454cab0a/api/types.go#L364[Ollama type defaults].
+
 [cols="3,6,1"]
 |====
 | Property | Description | Default
@@ -158,37 +155,39 @@ The prefix `spring.ai.ollama.chat.options` is the property prefix that configure
 | spring.ai.ollama.chat.options.model | The name of the https://github.com/ollama/ollama?tab=readme-ov-file#model-library[supported models] to use. | mistral
 | spring.ai.ollama.chat.options.numa | Whether to use NUMA. | false
 | spring.ai.ollama.chat.options.num-ctx | Sets the size of the context window used to generate the next token. | 2048
-| spring.ai.ollama.chat.options.num-batch | ??? | -
-| spring.ai.ollama.chat.options.num-gqa | The number of GQA groups in the transformer layer. Required for some models, for example, it is 8 for llama2:70b. | -
-| spring.ai.ollama.chat.options.num-gpu | The number of layers to send to the GPU(s). On macOS it defaults to 1 to enable metal support, 0 to disable. | -
+| spring.ai.ollama.chat.options.num-batch | ??? | 512
+| spring.ai.ollama.chat.options.num-gqa | The number of GQA groups in the transformer layer. Required for some models, for example, it is 8 for llama2:70b. | 1
+| spring.ai.ollama.chat.options.num-gpu | The number of layers to send to the GPU(s). On macOS it defaults to 1 to enable metal support, 0 to disable. -1 here indicates that NumGPU should be set dynamically. | -1
 | spring.ai.ollama.chat.options.main-gpu | ??? | -
-| spring.ai.ollama.chat.options.low-vram | ??? | -
-| spring.ai.ollama.chat.options.f16-kv | ??? | -
+| spring.ai.ollama.chat.options.low-vram | ??? | false
+| spring.ai.ollama.chat.options.f16-kv | ??? | true
 | spring.ai.ollama.chat.options.logits-all | ??? | -
 | spring.ai.ollama.chat.options.vocab-only | ??? | -
-| spring.ai.ollama.chat.options.use-mmap | ??? | -
-| spring.ai.ollama.chat.options.use-mlock | ??? | -
-| spring.ai.ollama.chat.options.embedding-only | ??? | -
-| spring.ai.ollama.chat.options.rope-frequency-base | ??? | -
-| spring.ai.ollama.chat.options.rope-frequency-scale | ??? | -
-| spring.ai.ollama.chat.options.num-thread | Sets the number of threads to use during computation. By default, Ollama will detect this for optimal performance. It is recommended to set this value to the number of physical CPU cores your system has (as opposed to the logical number of cores). | -
-| spring.ai.ollama.chat.options.num-keep | ??? | -
-| spring.ai.ollama.chat.options.seed | Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. | 0
-| spring.ai.ollama.chat.options.num-predict | Maximum number of tokens to predict when generating text. (Default: 128, -1 = infinite generation, -2 = fill context) | 128
+| spring.ai.ollama.chat.options.use-mmap | ??? | true
+| spring.ai.ollama.chat.options.use-mlock | ??? | false
+| spring.ai.ollama.chat.options.embedding-only | ??? | false
+| spring.ai.ollama.chat.options.rope-frequency-base | ??? | 10000.0
+| spring.ai.ollama.chat.options.rope-frequency-scale | ??? | 1.0
+| spring.ai.ollama.chat.options.num-thread | Sets the number of threads to use during computation. By default, Ollama will detect this for optimal performance. It is recommended to set this value to the number of physical CPU cores your system has (as opposed to the logical number of cores). 0 = let the runtime decide. | 0
+| spring.ai.ollama.chat.options.num-keep | ??? | 0
+| spring.ai.ollama.chat.options.seed | Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. | -1
+
+| spring.ai.ollama.chat.options.num-predict | Maximum number of tokens to predict when generating text. (-1 = infinite generation, -2 = fill context) | -1
 | spring.ai.ollama.chat.options.top-k | Reduces the probability of generating nonsense. A higher value (e.g., 100) will give more diverse answers, while a lower value (e.g., 10) will be more conservative. | 40
 | spring.ai.ollama.chat.options.top-p | Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. | 0.9
-| spring.ai.ollama.chat.options.tfs-z | Tail-free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. | 1
-| spring.ai.ollama.chat.options.typical-p | ??? | -
+| spring.ai.ollama.chat.options.tfs-z | Tail-free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. | 1.0
+| spring.ai.ollama.chat.options.typical-p | ??? | 1.0
 | spring.ai.ollama.chat.options.repeat-last-n | Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx) | 64
 | spring.ai.ollama.chat.options.temperature | The temperature of the model. Increasing the temperature will make the model answer more creatively. | 0.8
 | spring.ai.ollama.chat.options.repeat-penalty | Sets how strongly to penalize repetitions. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. | 1.1
-| spring.ai.ollama.chat.options.presence-penalty | ??? | -
-| spring.ai.ollama.chat.options.frequency-penalty | ??? | -
+| spring.ai.ollama.chat.options.presence-penalty | ??? | 0.0
+| spring.ai.ollama.chat.options.frequency-penalty | ??? | 0.0
 | spring.ai.ollama.chat.options.mirostat | Enable Mirostat sampling for controlling perplexity. (default: 0, 0 = disabled, 1 = Mirostat, 2 = Mirostat 2.0) | 0
-| spring.ai.ollama.chat.options.mirostat-tau | Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. | 0.1
-| spring.ai.ollama.chat.options.mirostat-eta | Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. | 5.0
-| spring.ai.ollama.chat.options.penalize-newline | ??? | -
+| spring.ai.ollama.chat.options.mirostat-tau | Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. | 5.0
+| spring.ai.ollama.chat.options.mirostat-eta | Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. | 0.1
+| spring.ai.ollama.chat.options.penalize-newline | ??? | true
 | spring.ai.ollama.chat.options.stop | Sets the stop sequences to use. When this pattern is encountered the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile. | -
 |====
 
+NOTE: The list of options for chat is to be reviewed. This https://github.com/spring-projects/spring-ai/issues/230[issue] will track progress.
 
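To make the changed table concrete, here is a minimal `application.properties` sketch using only property names that appear in the diff above; the values shown are simply the documented defaults, repeated for illustration.

[source,properties]
----
# Connection to the Ollama API server (documented default shown)
spring.ai.ollama.base-url=http://localhost:11434

# Chat options under the spring.ai.ollama.chat.options prefix
spring.ai.ollama.chat.options.model=mistral
spring.ai.ollama.chat.options.temperature=0.8
spring.ai.ollama.chat.options.top-k=40
spring.ai.ollama.chat.options.top-p=0.9
spring.ai.ollama.chat.options.num-ctx=2048
----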
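The same options can also be overridden per request. A rough Java sketch follows; it assumes the `OllamaOptions` builder (`create()`, `withModel`, `withTemperature`) and the `Prompt(String, ChatOptions)` constructor from the spring-ai-ollama module, so treat the exact method names as assumptions rather than confirmed API.

[source,java]
----
import org.springframework.ai.chat.prompt.Prompt;
import org.springframework.ai.ollama.api.OllamaOptions;

// Sketch: options passed with the prompt override the
// spring.ai.ollama.chat.options.* defaults for this one call.
Prompt prompt = new Prompt(
        "Tell me a joke",
        OllamaOptions.create()
                .withModel("mistral")     // overrides ...options.model
                .withTemperature(0.8f));  // overrides ...options.temperature
----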