Commit def15f9

(docs) minor PE patterns copyedit changes

Signed-off-by: Christian Tzolov <christian.tzolov@broadcom.com>
1 parent: d32e773

1 file changed: spring-ai-docs/src/main/antora/modules/ROOT/pages/api/chat/prompt-engineering-patterns.adoc (6 additions, 6 deletions)
@@ -2,7 +2,7 @@
 = Prompt Engineering Patterns
 
 Practical implementations of Prompt Engineering techniques based on the comprehensive link:https://www.kaggle.com/whitepaper-prompt-engineering[Prompt Engineering Guide].
-The guide covers the theory, principles, and patterns of effective prompt engineering, while here we demosntrate how to translate those concepts into working Java code using Spring AI's fluent xref::api/chatclient.adoc[ChatClient API].
+The guide covers the theory, principles, and patterns of effective prompt engineering, while here we demonstrate how to translate those concepts into working Java code using Spring AI's fluent xref::api/chatclient.adoc[ChatClient API].
 The demo source code used in this article is available at: link:https://github.com/spring-projects/spring-ai-examples/tree/main/prompt-engineering/prompt-engineering-patterns[Prompt Engineering Patterns Examples].
 
 == 1. Configuration

@@ -25,8 +25,6 @@ For example, here is how to enable Anthropic Claude API:
 </dependency>
 ----
 
-You can find detailed information for enabling each model in the xref::api/chatmodel.adoc[reference docs].
-
 You can specify the LLM model name like this:
 
 [source,java]

@@ -36,11 +34,13 @@ You can specify the LLM model name like this:
     .build())
 ----
 
+Find detailed information for enabling each model in the xref::api/chatmodel.adoc[reference docs].
+
 === LLM Output Configuration
 
 image::https://docs.spring.io/spring-ai/reference/_images/chat-options-flow.jpg[width=500,float=right]
 
-Before we dive into prompt engineering techniques, it's essential to understand how to configure the LLM's output behavior. Spring AI provides several configuration options that let you control various aspects of generation through the xref:/api/chatmodel.adoc#_chat_options[ChatOptions] builder.
+Before we dive into prompt engineering techniques, it's essential to understand how to configure the LLM's output behavior. Spring AI provides several configuration options that let you control various aspects of generation through the xref::api/chatmodel.adoc#_chat_options[ChatOptions] builder.
 
 All configurations can be applied programmatically as demonstrated in the examples below or through Spring application properties at start time.

@@ -202,7 +202,7 @@ One-shot provides a single example, which is useful when examples are costly or
 [source,java]
 ----
 // Implementation of Section 2.2: One-shot & few-shot (page 16)
-public void pt_ones_shot_few_shots(ChatClient chatClient) {
+public void pt_one_shot_few_shots(ChatClient chatClient) {
     String pizzaOrder = chatClient.prompt("""
             Parse a customer's pizza order into valid JSON
 

@@ -213,7 +213,7 @@ public void pt_ones_shot_few_shots(ChatClient chatClient) {
             {
                 "size": "small",
                 "type": "normal",
-                "ingredients": ["cheese", "tomato sauce", "peperoni"]
+                "ingredients": ["cheese", "tomato sauce", "pepperoni"]
             }
             ```
 
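For context on the one-shot/few-shot hunks above, here is a minimal, self-contained Java sketch of the few-shot prompt with both spelling fixes applied ("pepperoni", `pt_one_shot_few_shots`). The diff only shows fragments of the method, so the closing customer-order line and the `chatClient` wiring are assumptions; the Spring AI call is shown only as a comment because it needs a configured ChatClient bean.

````java
// Hypothetical, self-contained sketch of the few-shot prompt corrected in this commit.
// Only the prompt construction is executable here.
public class FewShotPromptSketch {

    // Few-shot prompt fragment from the diff, with the "pepperoni" fix applied.
    // The final customer order line is an illustrative assumption.
    static String fewShotPrompt() {
        return """
                Parse a customer's pizza order into valid JSON

                EXAMPLE:
                I want a small pizza with cheese, tomato sauce, and pepperoni.
                JSON Response:
                ```
                {
                    "size": "small",
                    "type": "normal",
                    "ingredients": ["cheese", "tomato sauce", "pepperoni"]
                }
                ```

                Now, I would like a large pizza with cheese and mushrooms.
                """;
    }

    public static void main(String[] args) {
        String prompt = fewShotPrompt();

        // With Spring AI on the classpath, the renamed method would send it roughly as:
        //   String pizzaOrder = chatClient.prompt(prompt).call().content();

        System.out.println(prompt.contains("pepperoni")); // corrected spelling is present
    }
}
````

The example is only meant to show the few-shot structure (instruction, worked example, then the real query) that the renamed `pt_one_shot_few_shots` method feeds to the model.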