
Commit 152420f

Fix Typos and Grammatical Errors
* Fix Typo: Duplicate 'for' in documentation text
* Fix Typo: Duplicate 'to' in documentation text
* Fix broken links in documentation
* Correct grammar by deleting unnecessary 'an' in documentation
* Fix typo: Change 'tunning' to 'tuning' in documentation
* Fix typo: Change 'an' to 'can' in documentation
* Fix typo: Change 'generats' to 'generates' in documentation
* Fix grammatical error: Change 'a AI' to 'an AI' in documentation
* Fix grammatical error: Change 'a AI' to 'an AI' in code
* Fix Typo: Duplicate 'for' in code
1 parent ba3e94e commit 152420f

File tree: 11 files changed (+13, -13 lines)


models/spring-ai-bedrock/src/main/java/org/springframework/ai/bedrock/cohere/BedrockCohereEmbeddingOptions.java

Lines changed: 1 addition & 1 deletion
@@ -32,7 +32,7 @@ public class BedrockCohereEmbeddingOptions implements EmbeddingOptions {
 // @formatter:off
 /**
  * Prepends special tokens to differentiate each type from one another. You should not mix
- * different types together, except when mixing types for for search and retrieval.
+ * different types together, except when mixing types for search and retrieval.
  * In this case, embed your corpus with the search_document type and embedded queries with
  * type search_query type.
  */
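
The input-type semantics described in this Javadoc are easiest to see in code. Below is a minimal, hypothetical sketch (not part of this commit) of embedding a corpus document with the `SEARCH_DOCUMENT` type; the `withInputType` builder method and the nested `InputType` enum location are assumptions based on the surrounding Spring AI Bedrock Cohere API.

[source,java]
----
import java.util.List;

import org.springframework.ai.bedrock.cohere.BedrockCohereEmbeddingOptions;
import org.springframework.ai.bedrock.cohere.api.CohereEmbeddingBedrockApi.CohereEmbeddingRequest.InputType;
import org.springframework.ai.embedding.EmbeddingClient;
import org.springframework.ai.embedding.EmbeddingRequest;
import org.springframework.ai.embedding.EmbeddingResponse;

class CohereInputTypeSketch {

	// Corpus text is embedded with SEARCH_DOCUMENT; queries would use SEARCH_QUERY instead.
	EmbeddingResponse embedCorpusDocument(EmbeddingClient embeddingClient) {
		return embeddingClient.call(new EmbeddingRequest(
				List.of("Spring AI supports Bedrock Cohere embeddings."),
				BedrockCohereEmbeddingOptions.builder()
					.withInputType(InputType.SEARCH_DOCUMENT) // assumed builder method name
					.build()));
	}
}
----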

models/spring-ai-bedrock/src/main/java/org/springframework/ai/bedrock/cohere/api/CohereEmbeddingBedrockApi.java

Lines changed: 1 addition & 1 deletion
@@ -69,7 +69,7 @@ public CohereEmbeddingBedrockApi(String modelId, AwsCredentialsProvider credenti
  * @param texts An array of strings for the model to embed. For optimal performance, we recommend reducing the
  * length of each text to less than 512 tokens. 1 token is about 4 characters.
  * @param inputType Prepends special tokens to differentiate each type from one another. You should not mix
- * different types together, except when mixing types for for search and retrieval. In this case, embed your corpus
+ * different types together, except when mixing types for search and retrieval. In this case, embed your corpus
  * with the search_document type and embedded queries with type search_query type.
  * @param truncate Specifies how the API handles inputs longer than the maximum token length. If you specify LEFT or
  * RIGHT, the model discards the input until the remaining input is exactly the maximum input token length for the

spring-ai-core/src/main/java/org/springframework/ai/document/ContentFormatter.java

Lines changed: 1 addition & 1 deletion
@@ -16,7 +16,7 @@
 package org.springframework.ai.document;

 /**
- * Converts the Document text and metadata into a AI, prompt-friendly text representation.
+ * Converts the Document text and metadata into an AI, prompt-friendly text representation.
  *
  * @author Christian Tzolov
  */
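
To make the contract concrete, here is a hedged sketch (not part of this commit) of a custom formatter in the spirit of `ContentFormatter`; the `format(Document, MetadataMode)` signature and the `getMetadata()`/`getContent()` accessors are assumed from the Spring AI core API of this era.

[source,java]
----
import java.util.stream.Collectors;

import org.springframework.ai.document.ContentFormatter;
import org.springframework.ai.document.Document;
import org.springframework.ai.document.MetadataMode;

class KeyValueContentFormatter implements ContentFormatter {

	@Override
	public String format(Document document, MetadataMode metadataMode) {
		// Render metadata as "key: value" lines, followed by the document text,
		// producing the prompt-friendly representation the Javadoc describes.
		String metadata = document.getMetadata().entrySet().stream()
			.map(entry -> entry.getKey() + ": " + entry.getValue())
			.collect(Collectors.joining("\n"));
		return metadata + "\n\n" + document.getContent();
	}
}
----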

spring-ai-core/src/main/java/org/springframework/ai/model/StreamingModelClient.java

Lines changed: 1 addition & 1 deletion
@@ -18,7 +18,7 @@
 import reactor.core.publisher.Flux;

 /**
- * The StreamingModelClient interface provides a generic API for invoking a AI models with
+ * The StreamingModelClient interface provides a generic API for invoking an AI models with
  * streaming response. It abstracts the process of sending requests and receiving a
  * streaming responses. The interface uses Java generics to accommodate different types of
  * requests and responses, enhancing flexibility and adaptability across different AI
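
The Javadoc above describes a generic, streaming request/response contract. A hedged sketch of that shape (not the exact declaration in this file, and with illustrative names) could look like this:

[source,java]
----
import org.springframework.ai.model.ModelRequest;
import org.springframework.ai.model.ModelResponse;

import reactor.core.publisher.Flux;

// Illustrative sketch: a client that sends a request and returns a reactive
// stream of partial response chunks, generic over request and response types.
interface StreamingClientSketch<TReq extends ModelRequest<?>, TResChunk extends ModelResponse<?>> {

	Flux<TResChunk> stream(TReq request);
}
----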

spring-ai-docs/src/main/antora/modules/ROOT/pages/api/clients/functions/vertexai-gemini-chat-functions.adoc

Lines changed: 1 addition & 1 deletion
@@ -108,7 +108,7 @@ static class Config {
 public record Request(String location, Unit unit) {}
 ----

-It is a best practice to annotate the request object with information such that the generats JSON schema of that function is as descriptive as possible to help the AI model pick the correct funciton to invoke.
+It is a best practice to annotate the request object with information such that the generates JSON schema of that function is as descriptive as possible to help the AI model pick the correct function to invoke.

 The link:https://github.com/spring-projects/spring-ai/blob/main/spring-ai-spring-boot-autoconfigure/src/test/java/org/springframework/ai/autoconfigure/gemini/tool/FunctionCallWithFunctionBeanIT.java[FunctionCallWithFunctionBeanIT.java] demonstrates this approach.
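
As an illustration of the "annotate the request object" advice in the changed line, a hedged sketch (not part of this commit) using Jackson annotations follows; the description strings and the `Unit` enum values are assumptions for the example.

[source,java]
----
import com.fasterxml.jackson.annotation.JsonClassDescription;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.annotation.JsonPropertyDescription;

enum Unit { C, F } // illustrative stand-in for the Unit type used in the docs

// Class- and property-level descriptions flow into the generated JSON schema,
// helping the AI model decide when and how to invoke the function.
@JsonClassDescription("Request for the current weather at a location")
record Request(
		@JsonProperty(required = true, value = "location")
		@JsonPropertyDescription("The city and state, e.g. San Francisco, CA") String location,
		@JsonProperty(value = "unit")
		@JsonPropertyDescription("The temperature unit to use, C or F") Unit unit) {
}
----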

spring-ai-docs/src/main/antora/modules/ROOT/pages/api/embeddings/bedrock-cohere-embedding.adoc

Lines changed: 1 addition & 1 deletion
@@ -69,7 +69,7 @@ The prefix `spring.ai.bedrock.cohere.embedding` (defined in `BedrockCohereEmbedd
 | Property | Description | Default
 | spring.ai.bedrock.cohere.embedding.enabled | Enable or disable support for Cohere | false
 | spring.ai.bedrock.cohere.embedding.model | The model id to use. See the https://github.com/spring-projects/spring-ai/blob/056b95a00efa5b014a1f488329fbd07a46c02378/models/spring-ai-bedrock/src/main/java/org/springframework/ai/bedrock/cohere/api/CohereEmbeddingBedrockApi.java#L150[CohereEmbeddingModel] for the supported models. | cohere.embed-multilingual-v3
-| spring.ai.bedrock.cohere.embedding.options.input-type | Prepends special tokens to differentiate each type from one another. You should not mix different types together, except when mixing types for for search and retrieval. In this case, embed your corpus with the search_document type and embedded queries with type search_query type. | SEARCH_DOCUMENT
+| spring.ai.bedrock.cohere.embedding.options.input-type | Prepends special tokens to differentiate each type from one another. You should not mix different types together, except when mixing types for search and retrieval. In this case, embed your corpus with the search_document type and embedded queries with type search_query type. | SEARCH_DOCUMENT
 | spring.ai.bedrock.cohere.embedding.options.truncate | Specifies how the API handles inputs longer than the maximum token length. If you specify LEFT or RIGHT, the model discards the input until the remaining input is exactly the maximum input token length for the model. | NONE
 |====

spring-ai-docs/src/main/antora/modules/ROOT/pages/api/embeddings/ollama-embeddings.adoc

Lines changed: 1 addition & 1 deletion
@@ -110,7 +110,7 @@ TIP: All properties prefixed with `spring.ai.ollama.embedding.options` can be ov

 === Embedding Options [[embedding-options]]

-The https://github.com/spring-projects/spring-ai/blob/main/models/spring-ai-ollama/src/main/java/org/springframework/ai/ollama/api/OllamaOptions.java[OllamaOptions.java] provides the Ollama configurations, such as the model to use, the low level GPU and CPU tunning, etc.
+The https://github.com/spring-projects/spring-ai/blob/main/models/spring-ai-ollama/src/main/java/org/springframework/ai/ollama/api/OllamaOptions.java[OllamaOptions.java] provides the Ollama configurations, such as the model to use, the low level GPU and CPU tuning, etc.

 The default options can be configured using the `spring.ai.ollama.embedding.options` properties as well.
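
For the `OllamaOptions` sentence touched here, a hedged sketch (not part of this commit) of configuring those options programmatically follows; the `create()`, `withModel(...)`, and `withNumGPU(...)` builder names are assumptions based on the Spring AI Ollama API of this era.

[source,java]
----
import org.springframework.ai.ollama.api.OllamaOptions;

class OllamaEmbeddingOptionsSketch {

	OllamaOptions embeddingOptions() {
		return OllamaOptions.create()
			.withModel("orca-mini") // model to use (illustrative value)
			.withNumGPU(1);         // low-level GPU tuning knob (assumed method name)
	}
}
----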

spring-ai-docs/src/main/antora/modules/ROOT/pages/api/generic-model.adoc

Lines changed: 1 addition & 1 deletion
@@ -31,7 +31,7 @@ public interface ModelClient<TReq extends ModelRequest<?>, TRes extends ModelRes

 == StreamingModelClient

-The StreamingModelClient interface provides a generic API for invoking a AI models with streaming response. It abstracts the process of sending requests and receiving a streaming responses. The interface uses Java generics to accommodate different types of requests and responses, enhancing flexibility and adaptability across different AI model implementations.
+The StreamingModelClient interface provides a generic API for invoking an AI models with streaming response. It abstracts the process of sending requests and receiving a streaming responses. The interface uses Java generics to accommodate different types of requests and responses, enhancing flexibility and adaptability across different AI model implementations.

 [source,java]
 ----

spring-ai-docs/src/main/antora/modules/ROOT/pages/api/output-parser.adoc

Lines changed: 1 addition & 1 deletion
@@ -43,7 +43,7 @@ public interface FormatProvider {

 The `Parser` interface parses text strings to produce instances of the type T.

-The `FormatProvider` provides text instructions for the AI Model to format the output so that it an be parsed into the type T by the `Parser`.
+The `FormatProvider` provides text instructions for the AI Model to format the output so that it can be parsed into the type T by the `Parser`.
 These text instructions are most often appended to the end of the user input to the AI Model.
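
The two interfaces described in the changed lines compose naturally. A hedged sketch of that composition (not part of this commit, with illustrative type names):

[source,java]
----
// Supplies the text instructions appended to the prompt so the model formats its output.
interface FormatProviderSketch {

	String getFormat();
}

// Parses the raw model output text into an instance of T.
interface ParserSketch<T> {

	T parse(String text);
}

// An output parser both instructs the model and parses what comes back.
interface OutputParserSketch<T> extends ParserSketch<T>, FormatProviderSketch {
}
----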

spring-ai-docs/src/main/antora/modules/ROOT/pages/api/vectordbs/chroma.adoc

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@ link:https://docs.trychroma.com/[Chroma] is the open-source embedding database.

 1. OpenAI Account: Create an account at link:https://platform.openai.com/signup[OpenAI Signup] and generate the token at link:https://platform.openai.com/account/api-keys[API Keys].

-2. Access to ChromeDB. The <<appendix-a, setup local ChromaDB>> appendix shows how to set up a DB locally with a Docker container.
+2. Access to ChromeDB. The <<Run Chroma Locally, setup local ChromaDB>> appendix shows how to set up a DB locally with a Docker container.

 On startup, the `ChromaVectorStore` creates the required collection if one is not provisioned already.
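
Following the "On startup" sentence above, a hedged usage sketch (not part of this commit) of the vector store once it is wired up; the `SearchRequest.query(...).withTopK(...)` builder usage is assumed from the Spring AI vector store API of this era.

[source,java]
----
import java.util.List;

import org.springframework.ai.document.Document;
import org.springframework.ai.vectorstore.SearchRequest;
import org.springframework.ai.vectorstore.VectorStore;

class ChromaUsageSketch {

	List<Document> addAndSearch(VectorStore vectorStore) {
		// The store embeds and persists the document in the (auto-created) collection.
		vectorStore.add(List.of(new Document("Spring AI rocks!!")));

		// Retrieve the most similar documents for a query string.
		return vectorStore.similaritySearch(SearchRequest.query("Spring").withTopK(3));
	}
}
----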

0 commit comments
