spring-ai-docs/src/main/antora/modules/ROOT/pages/api/clients/bedrock/bedrock-titan.adoc (1 addition & 1 deletion)
@@ -1,6 +1,6 @@
= Titan Chat

-link:https://aws.amazon.com/bedrock/titan/[Amazon Titan] foundation models (FMs) provide customers with a breadth of high-performing image, multimodal, and text model choices, via a fully managed API.
+link:https://aws.amazon.com/bedrock/titan/[Amazon Titan] foundation models (FMs) provide customers with a breadth of high-performing image, multimodal embeddings, and text model choices, via a fully managed API.
Amazon Titan models are created by AWS and pretrained on large datasets, making them powerful, general-purpose models built to support a variety of use cases, while also supporting the responsible use of AI.
Use them as is or privately customize them with your own data.
+| spring.ai.openai.image.base-url | Optional overrides the spring.ai.openai.base-url to provide chat specific url | -
+| spring.ai.openai.image.api-key | Optional overrides the spring.ai.openai.api-key to provide chat specific api-key | -
+| spring.ai.openai.image.options.n | The number of images to generate. Must be between 1 and 10. For dall-e-3, only n=1 is supported. | -
+| spring.ai.openai.image.options.model | The model to use for image generation. | OpenAiImageApi.DEFAULT_IMAGE_MODEL
+| spring.ai.openai.image.options.quality | The quality of the image that will be generated. HD creates images with finer details and greater consistency across the image. This parameter is only supported for dall-e-3. | -
+| spring.ai.openai.image.options.response_format | The format in which the generated images are returned. Must be one of URL or b64_json. | -
+| `spring.ai.openai.image.options.size` | The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024 for dall-e-2. Must be one of 1024x1024, 1792x1024, or 1024x1792 for dall-e-3 models. | -
+| `spring.ai.openai.image.options.size_width` | The width of the generated images. Must be one of 256, 512, or 1024 for dall-e-2. | -
+| `spring.ai.openai.image.options.size_height`| The height of the generated images. Must be one of 256, 512, or 1024 for dall-e-2. | -
+| `spring.ai.openai.image.options.style` | The style of the generated images. Must be one of vivid or natural. Vivid causes the model to lean towards generating hyper-real and dramatic images. Natural causes the model to produce more natural, less hyper-real looking images. This parameter is only supported for dall-e-3. | -
+| `spring.ai.openai.image.options.user` | A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. | -
|====

==== Connection Properties
@@ -70,32 +74,31 @@ The prefix `spring.ai.openai` is used as the property prefix that lets you conne

==== Configuration Properties

-The prefix `spring.ai.openai.image` is the property prefix that lets you configure the `ImageClient` implementation for OpenAI.
+
+==== Retry Properties
+
+The prefix `spring.ai.retry` is used as the property prefix that lets you configure the retry mechanism for the OpenAI Image client.
-| spring.ai.openai.image.base-url | Optional overrides the spring.ai.openai.base-url to provide chat specific url | -
-| spring.ai.openai.image.api-key | Optional overrides the spring.ai.openai.api-key to provide chat specific api-key | -
-| spring.ai.openai.image.options.n | The number of images to generate. Must be between 1 and 10. For dall-e-3, only n=1 is supported. | -
-| spring.ai.openai.image.options.model | The model to use for image generation. | OpenAiImageApi.DEFAULT_IMAGE_MODEL
-| spring.ai.openai.image.options.quality | The quality of the image that will be generated. HD creates images with finer details and greater consistency across the image. This parameter is only supported for dall-e-3. | -
-| spring.ai.openai.image.options.response_format | The format in which the generated images are returned. Must be one of URL or b64_json. | -
-| `spring.ai.openai.image.options.size` | The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024 for dall-e-2. Must be one of 1024x1024, 1792x1024, or 1024x1792 for dall-e-3 models. | -
-| `spring.ai.openai.image.options.size_width` | The width of the generated images. Must be one of 256, 512, or 1024 for dall-e-2. | -
-| `spring.ai.openai.image.options.size_height`| The height of the generated images. Must be one of 256, 512, or 1024 for dall-e-2. | -
-| `spring.ai.openai.image.options.style` | The style of the generated images. Must be one of vivid or natural. Vivid causes the model to lean towards generating hyper-real and dramatic images. Natural causes the model to produce more natural, less hyper-real looking images. This parameter is only supported for dall-e-3. | -
-| `spring.ai.openai.image.options.user` | A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. | -
+
+| spring.ai.retry.max-attempts | Maximum number of retry attempts. | 10
+| spring.ai.retry.backoff.initial-interval | Initial sleep duration for the exponential backoff policy. | 2 sec.
+| spring.ai.retry.backoff.max-interval | Maximum backoff duration. | 3 min.
+| spring.ai.retry.on-client-errors | If false, throw a NonTransientAiException, and do not attempt retry for `4xx` client error codes | false
+| spring.ai.retry.exclude-on-http-codes | List of HTTP status codes that should not trigger a retry (e.g. to throw NonTransientAiException). | empty
|====

-=== Image Options [[image-options]]
+
+== Runtime Options [[image-options]]

The https://github.com/spring-projects/spring-ai/blob/main/models/spring-ai-openai/src/main/java/org/springframework/ai/openai/OpenAiImageOptions.java[OpenAiImageOptions.java] provides model configurations, such as the model to use, the quality, the size, etc.

On start-up, the default options can be configured with the `OpenAiImageClient(OpenAiImageApi openAiImageApi)` constructor and the `withDefaultOptions(OpenAiImageOptions defaultOptions)` method. Alternatively, use the `spring.ai.openai.image.options.*` properties described previously.

-At run-time you can override the default options by adding new, request specific, options to the `ImagePrompt` call.
+At runtime you can override the default options by adding new, request specific, options to the `ImagePrompt` call.

For example to override the OpenAI specific options such as quality and the number of images to create, use the following code example:
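The snippet itself is not included in this diff excerpt. As an illustrative sketch only, assuming an `openAiImageClient` bean and the `OpenAiImageOptions` builder (the prompt text and option values below are placeholders, not part of the change), such a runtime override might look like:

[source,java]
----
// Override quality, image count and size for this single request only.
ImageResponse response = openAiImageClient.call(
        new ImagePrompt("A light cream colored mini golden doodle",
                OpenAiImageOptions.builder()
                        .withQuality("hd")
                        .withN(4)
                        .withHeight(1024)
                        .withWidth(1024)
                        .build()));
----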
spring-ai-docs/src/main/antora/modules/ROOT/pages/api/clients/image/stabilityai-image.adoc (1 addition & 1 deletion)
@@ -80,7 +80,7 @@ The https://github.com/spring-projects/spring-ai/blob/main/models/spring-ai-stab

On start-up, the default options can be configured with the `StabilityAiImageClient(StabilityAiApi stabilityAiApi, StabilityAiImageOptions options)` constructor. Alternatively, use the `spring.ai.openai.image.options.*` properties described previously.

-At run-time you can override the default options by adding new, request specific, options to the `ImagePrompt` call.
+At runtime, you can override the default options by adding new, request specific, options to the `ImagePrompt` call.

For example to override the Stability AI specific options such as quality and the number of images to create, use the following code example:
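The referenced example is likewise not part of this diff. A minimal sketch, assuming a `stabilityAiImageClient` bean and the `StabilityAiImageOptions` builder methods named below (all assumptions, not taken from the change), could be:

[source,java]
----
// Request four 1024x1024 images with a style preset, for this call only.
ImageResponse response = stabilityAiImageClient.call(
        new ImagePrompt("A light cream colored mini golden doodle",
                StabilityAiImageOptions.builder()
                        .withStylePreset("photographic")
                        .withN(4)
                        .withHeight(1024)
                        .withWidth(1024)
                        .build()));
----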
Follow the https://github.com/spring-projects/spring-ai/blob/main/models/spring-ai-openai/src/main/java/org/springframework/ai/openai/api/OpenAiApi.java[OpenAiApi.java]'s JavaDoc for further information.

-==== OpenAiApi Samples
+== Example Code

* The link:https://github.com/spring-projects/spring-ai/blob/main/models/spring-ai-openai/src/test/java/org/springframework/ai/openai/chat/api/OpenAiApiIT.java[OpenAiApiIT.java] test provides some general examples how to use the lightweight library.

* The link:https://github.com/spring-projects/spring-ai/blob/main/models/spring-ai-openai/src/test/java/org/springframework/ai/openai/chat/api/tool/OpenAiApiToolFunctionCallIT.java[OpenAiApiToolFunctionCallIT.java] test shows how to use the low-level API to call tool functions.
spring-ai-docs/src/main/antora/modules/ROOT/pages/api/clients/vertexai-gemini-chat.adoc (7 additions & 9 deletions)
@@ -1,7 +1,5 @@
= VertexAI Gemini Chat

-
-
The https://cloud.google.com/vertex-ai/docs/generative-ai/multimodal/overview[Vertex AI Gemini API] allows developers to build generative AI applications using the Gemini model.
The Vertex AI Gemini API supports multimodal prompts as input and output text or code.
A multimodal model is a model that is capable of processing information from multiple modalities, including images, videos, and text. For example, you can send the model a photo of a plate of cookies and ask it to give you a recipe for those cookies.
@@ -79,13 +77,13 @@ The prefix `spring.ai.vertex.ai.gemini.chat` is the property prefix that lets yo

TIP: All properties prefixed with `spring.ai.vertex.ai.gemini.chat.options` can be overridden at runtime by adding a request specific <<chat-options>> to the `Prompt` call.

-=== Chat Options [[chat-options]]
+== Runtime options [[chat-options]]

The https://github.com/spring-projects/spring-ai/blob/main/models/spring-ai-vertex-ai-gemini/src/main/java/org/springframework/ai/vertexai/gemini/VertexAiGeminiChatOptions.java[VertexAiGeminiChatOptions.java] provides model configurations, such as the temperature, the topK, etc.

On start-up, the default options can be configured with the `VertexAiGeminiChatClient(api, options)` constructor or the `spring.ai.vertex.ai.chat.options.*` properties.

-At run-time you can override the default options by adding new, request specific, options to the `Prompt` call.
+At runtime you can override the default options by adding new, request specific, options to the `Prompt` call.

For example to override the default temperature for a specific request:
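The code snippet that follows this sentence in the source page is not shown in the diff. A minimal sketch of such an override, assuming a `chatClient` bean (the prompt text and temperature value are placeholders), could be:

[source,java]
----
// Lower the temperature for this one request; defaults apply to all other calls.
ChatResponse response = chatClient.call(
        new Prompt(
                "Generate the names of 5 famous pirates.",
                VertexAiGeminiChatOptions.builder()
                        .withTemperature(0.4f)
                        .build()));
----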
TIP: In addition to the model specific `VertexAiChatPaLm2Options` you can use a portable https://github.com/spring-projects/spring-ai/blob/main/spring-ai-core/src/main/java/org/springframework/ai/chat/ChatOptions.java[ChatOptions] instance, created with the https://github.com/spring-projects/spring-ai/blob/main/spring-ai-core/src/main/java/org/springframework/ai/chat/ChatOptionsBuilder.java[ChatOptionsBuilder#builder()].
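As a sketch of the portable variant mentioned in the TIP above, assuming the `ChatOptionsBuilder` fluent API (this snippet is illustrative and not part of the change), the same override could be expressed with model-agnostic options:

[source,java]
----
// Portable options work across chat clients; only common settings are available.
ChatOptions portableOptions = ChatOptionsBuilder.builder()
        .withTemperature(0.4f)
        .build();

ChatResponse response = chatClient.call(
        new Prompt("Generate the names of 5 famous pirates.", portableOptions));
----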

-=== Function Calling
+== Function Calling

You can register custom Java functions with the VertexAiGeminiChatClient and have the Gemini Pro model intelligently choose to output a JSON object containing arguments to call one or many of the registered functions.
This is a powerful technique to connect the LLM capabilities with external tools and APIs.
Read more about xref:api/clients/functions/vertexai-gemini-chat-functions.adoc[Vertex AI Gemini Function Calling].
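No registration code appears in this diff. As a rough, illustrative sketch only (the `MockWeatherService` class, bean name, and description text are hypothetical), a function is typically registered as a plain `java.util.function.Function` bean that the chat model can then invoke by name:

[source,java]
----
@Configuration
static class Config {

    // The bean name ("currentWeather") acts as the function name; @Description documents it for the model.
    @Bean
    @Description("Get the weather in location")
    public Function<MockWeatherService.Request, MockWeatherService.Response> currentWeather() {
        return new MockWeatherService();
    }
}
----

The function is then enabled for a given request through the chat options; see the function-calling page referenced above for the exact API.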

-=== Multimodal Example
+== Multimodal

Multimodality refers to a model's ability to simultaneously understand and process information from various sources, including text, images, audio, and other data formats. This paradigm represents a significant advancement in AI models.

-Google's Gemini AI models support this capability by comprehending and integrating text, code, audio, images, and video. For more details, refer to the blog post [Introducing Gemini](https://blog.google/technology/ai/google-gemini-ai/#introducing-gemini).
+Google's Gemini AI models support this capability by comprehending and integrating text, code, audio, images, and video. For more details, refer to the blog post https://blog.google/technology/ai/google-gemini-ai/#introducing-gemini[Introducing Gemini].

Spring AI's `Message` interface supports multimodal AI models by introducing the Media type.
This type contains data and information about media attachments in messages, using Spring's `org.springframework.util.MimeType` and a `java.lang.Object` for the raw media data.

-Below is a simple code example extracted from [VertexAiGeminiChatClientIT.java](https://github.com/spring-projects/spring-ai/blob/main/models/spring-ai-vertex-ai-gemini/src/test/java/org/springframework/ai/vertexai/gemini/VertexAiGeminiChatClientIT.java), demonstrating the combination of user text with an image.
+Below is a simple code example extracted from https://github.com/spring-projects/spring-ai/blob/main/models/spring-ai-vertex-ai-gemini/src/test/java/org/springframework/ai/vertexai/gemini/VertexAiGeminiChatClientIT.java[VertexAiGeminiChatClientIT.java], demonstrating the combination of user text with an image.

[source,java]
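----
// Not part of this diff: an illustrative reconstruction of the referenced snippet.
// The classpath image name and the chatClient variable are assumptions and may differ from the linked IT test.
byte[] imageData = new ClassPathResource("/vertex.test.png").getContentAsByteArray();

// Combine user text with image media in a single message.
var userMessage = new UserMessage("Explain what do you see o this picture?",
        List.of(new Media(MimeTypeUtils.IMAGE_PNG, imageData)));

ChatResponse response = chatClient.call(new Prompt(List.of(userMessage)));
----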
@@ -128,7 +126,7 @@ var userMessage = new UserMessage("Explain what do you see o this picture?",
https://start.spring.io/[Create] a new Spring Boot project and add the `spring-ai-vertex-ai-palm2-spring-boot-starter` to your pom (or gradle) dependencies.
spring-ai-docs/src/main/antora/modules/ROOT/pages/api/embeddings/bedrock-titan-embedding.adoc (1 addition & 1 deletion)
@@ -1,7 +1,7 @@
= Titan Embeddings

Provides Bedrock Titan Embedding client.
-link:https://aws.amazon.com/bedrock/titan/[Amazon Titan] foundation models (FMs) provide customers with a breadth of high-performing image, multimodal, and text model choices, via a fully managed API.
+link:https://aws.amazon.com/bedrock/titan/[Amazon Titan] foundation models (FMs) provide customers with a breadth of high-performing image, multimodal embeddings, and text model choices, via a fully managed API.
Amazon Titan models are created by AWS and pretrained on large datasets, making them powerful, general-purpose models built to support a variety of use cases, while also supporting the responsible use of AI.
Use them as is or privately customize them with your own data.