spring-ai-docs/src/main/antora/modules/ROOT/pages/api/chat/ollama-chat.adoc
1 addition & 1 deletion
@@ -252,7 +252,7 @@ TIP: You need Ollama 0.2.8 or newer to use the functional calling capabilities a
Multimodality refers to a model's ability to simultaneously understand and process information from various sources, including text, images, audio, and other data formats.
- Some of the models available in Ollama with multimodality support are https://ollama.com/library/llava[LLaVa] and https://ollama.com/library/bakllava[bakllava] (see the link:https://ollama.com/search?c=vision[full list]).
+ Some of the models available in Ollama with multimodality support are https://ollama.com/library/llava[LLaVA] and https://ollama.com/library/bakllava[BakLLaVA] (see the link:https://ollama.com/search?c=vision[full list]).
For further details, refer to the link:https://llava-vl.github.io/[LLaVA: Large Language and Vision Assistant].
The Ollama link:https://github.com/ollama/ollama/blob/main/docs/api.md#parameters-1[Message API] provides an "images" parameter to incorporate a list of base64-encoded images with the message.
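As a rough, self-contained illustration of that `"images"` parameter (not taken from the diff above), the following sketch posts a base64-encoded image to a locally running Ollama instance using only JDK classes. The `llava` model name, the `multimodal.test.png` file, and the default `localhost:11434` endpoint are assumptions for the example.

[source,java]
----
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Base64;

public class OllamaImageChat {

    public static void main(String[] args) throws Exception {
        // The "images" parameter expects base64-encoded image data.
        String imageB64 = Base64.getEncoder()
                .encodeToString(Files.readAllBytes(Path.of("multimodal.test.png")));

        // Minimal /api/chat request asking a vision model (llava here, assumed to be pulled)
        // to describe the attached image.
        String body = """
                {
                  "model": "llava",
                  "stream": false,
                  "messages": [
                    { "role": "user",
                      "content": "What is in this picture?",
                      "images": ["%s"] }
                  ]
                }
                """.formatted(imageB64);

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:11434/api/chat"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The JSON response contains the model's text description of the image.
        System.out.println(response.body());
    }
}
----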
spring-ai-docs/src/main/antora/modules/ROOT/pages/api/multimodality.adoc
2 additions & 2 deletions
@@ -13,7 +13,7 @@ Contrary to those principles, the Machine Learning was often focused on speciali
For instance, we developed audio models for tasks like text-to-speech or speech-to-text, and computer vision models for tasks such as object detection and classification.
However, a new wave of multimodal large language models is starting to emerge.
- Examples include OpenAI's GPT-4o , Google's Vertex AI Gemini 1.5, Anthropic's Claude3, and open source offerings Llama3.2, LLaVA and Balklava are able to accept multiple inputs, including text images, audio and video and generate text responses by integrating these inputs.
+ Examples include OpenAI's GPT-4o, Google's Vertex AI Gemini 1.5, Anthropic's Claude 3, and open source offerings such as Llama 3.2, LLaVA, and BakLLaVA, which can accept multiple inputs, including text, images, audio, and video, and generate text responses by integrating these inputs.
NOTE: The multimodal large language model (LLM) features enable the models to process and generate text in conjunction with other modalities such as images, audio, or video.
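As a hedged sketch of what this looks like through Spring AI's Message API (not part of this diff): a `UserMessage` can carry `Media` alongside its text, and the chat model integrates both when generating the reply. Exact package names, constructors, and accessor names have shifted between Spring AI milestones, and the injected `chatModel` and classpath image below are assumptions.

[source,java]
----
import org.springframework.ai.chat.messages.UserMessage;
import org.springframework.ai.chat.model.ChatModel;
import org.springframework.ai.chat.model.ChatResponse;
import org.springframework.ai.chat.prompt.Prompt;
import org.springframework.ai.model.Media; // location of Media may differ by Spring AI version
import org.springframework.core.io.ClassPathResource;
import org.springframework.util.MimeTypeUtils;

public class ImageDescriber {

    private final ChatModel chatModel; // any multimodal-capable ChatModel bean, e.g. Ollama with llava

    public ImageDescriber(ChatModel chatModel) {
        this.chatModel = chatModel;
    }

    public String describe() {
        // Attach an image from the classpath to the user message as Media.
        var userMessage = new UserMessage(
                "Explain what you see in this picture.",
                new Media(MimeTypeUtils.IMAGE_PNG, new ClassPathResource("/multimodal.test.png")));

        // The model processes text and image together and returns a text response.
        ChatResponse response = this.chatModel.call(new Prompt(userMessage));
        return response.getResult().getOutput().getContent();
    }
}
----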
@@ -69,6 +69,6 @@ Spring AI provides multimodal support for the following chat models: