spring-ai-docs/src/main/antora/modules/ROOT/pages/api/clients/ollama-chat.adoc

The prefix `spring.ai.ollama` is the property prefix to configure the connection to Ollama.

[cols="3,6,1"]
|====
| Property | Description | Default

| spring.ai.ollama.base-url | Base URL where Ollama API server is running. | `http://localhost:11434`
|====
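
For example, assuming the Ollama server runs locally on its default port, the connection can be configured in `application.properties` like this (the value shown is simply the documented default):

[source,properties]
----
spring.ai.ollama.base-url=http://localhost:11434
----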
The prefix `spring.ai.ollama.chat.options` is the property prefix that configures the `ChatClient` implementation for Ollama.
NOTE: The listed properties are based on the https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values[Ollama Valid Parameters and Values] and https://github.com/jmorganca/ollama/blob/main/api/types.go[Ollama Types]. The default values are based on the https://github.com/ollama/ollama/blob/b538dc3858014f94b099730a592751a5454cab0a/api/types.go#L364[Ollama type defaults].
[cols="3,6,1"]
|====
| Property | Description | Default
| spring.ai.ollama.chat.options.model | The name of the https://github.com/ollama/ollama?tab=readme-ov-file#model-library[supported models] to use. | mistral
| spring.ai.ollama.chat.options.numa | Whether to use NUMA. | false
| spring.ai.ollama.chat.options.num-ctx | Sets the size of the context window used to generate the next token. | 2048
| spring.ai.ollama.chat.options.num-gqa | The number of GQA groups in the transformer layer. Required for some models, for example, it is 8 for llama2:70b. | 1
| spring.ai.ollama.chat.options.num-gpu | The number of layers to send to the GPU(s). On macOS it defaults to 1 to enable Metal support, 0 to disable. -1 here indicates that NumGPU should be set dynamically. | -1
| spring.ai.ollama.chat.options.num-thread | Sets the number of threads to use during computation. By default, Ollama will detect this for optimal performance. It is recommended to set this value to the number of physical CPU cores your system has (as opposed to the logical number of cores). (0 = let the runtime decide) | 0
| spring.ai.ollama.chat.options.seed | Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. | -1
| spring.ai.ollama.chat.options.num-predict | Maximum number of tokens to predict when generating text. (-1 = infinite generation, -2 = fill context) | -1
| spring.ai.ollama.chat.options.top-k | Reduces the probability of generating nonsense. A higher value (e.g., 100) will give more diverse answers, while a lower value (e.g., 10) will be more conservative. | 40
| spring.ai.ollama.chat.options.top-p | Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. | 0.9
| spring.ai.ollama.chat.options.tfs-z | Tail-free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. | 1.0
| spring.ai.ollama.chat.options.repeat-last-n | Sets how far back the model looks to prevent repetition. (0 = disabled, -1 = num_ctx) | 64
| spring.ai.ollama.chat.options.temperature | The temperature of the model. Increasing the temperature will make the model answer more creatively. | 0.8
| spring.ai.ollama.chat.options.repeat-penalty | Sets how strongly to penalize repetitions. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. | 1.1
| spring.ai.ollama.chat.options.mirostat-tau | Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. | 5.0
| spring.ai.ollama.chat.options.mirostat-eta | Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. | 0.1
| spring.ai.ollama.chat.options.stop | Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile. | -
|====
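
For example, a minimal `application.properties` sketch that overrides a few of the options above (the values shown are illustrative, not recommendations):

[source,properties]
----
spring.ai.ollama.chat.options.model=llama2
spring.ai.ollama.chat.options.temperature=0.4
spring.ai.ollama.chat.options.top-k=30
----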
NOTE: The list of options for chat is to be reviewed. This https://github.com/spring-projects/spring-ai/issues/230[issue] will track progress.
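
These options can also be overridden at runtime, on a per-request basis. Below is a minimal Java sketch of that idea; it assumes the `OllamaOptions` builder and the `ChatClient#call(Prompt)` API, and exact package and method names may differ between Spring AI milestone releases:

[source,java]
----
// Assumed imports; package locations may vary by release.
import org.springframework.ai.chat.ChatResponse;
import org.springframework.ai.chat.prompt.Prompt;
import org.springframework.ai.ollama.api.OllamaOptions;

// Override the startup defaults for a single request only: options
// passed with the Prompt take precedence over the chat.options properties.
ChatResponse response = chatClient.call(
    new Prompt(
        "Generate the names of 5 famous pirates.",
        OllamaOptions.create()
            .withModel("llama2")
            .withTemperature(0.4f)));
----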