spring-ai-docs/src/main/antora/modules/ROOT/pages/api/chat/prompt-engineering-patterns.adoc
6 additions & 6 deletions
@@ -2,7 +2,7 @@
 = Prompt Engineering Patterns
 
 Practical implementations of Prompt Engineering techniques based on the comprehensive link:https://www.kaggle.com/whitepaper-prompt-engineering[Prompt Engineering Guide].
-The guide covers the theory, principles, and patterns of effective prompt engineering, while here we demosntrate how to translate those concepts into working Java code using Spring AI's fluent xref::api/chatclient.adoc[ChatClient API].
+The guide covers the theory, principles, and patterns of effective prompt engineering, while here we demonstrate how to translate those concepts into working Java code using Spring AI's fluent xref::api/chatclient.adoc[ChatClient API].
 The demo source code used in this article is available at: link:https://github.com/spring-projects/spring-ai-examples/tree/main/prompt-engineering/prompt-engineering-patterns[Prompt Engineering Patterns Examples].
 
 == 1. Configuration
@@ -25,8 +25,6 @@ For example, here is how to enable Anthropic Claude API:
 </dependency>
 ----
 
-You can find detailed information for enabling each model in the xref::api/chatmodel.adoc[reference docs].
-
 You can specify the LLM model name like this:
 
 [source,java]
@@ -36,11 +34,13 @@ You can specify the LLM model name like this:
 .build())
 ----
 
-Before we dive into prompt engineering techniques, it's essential to understand how to configure the LLM's output behavior. Spring AI provides several configuration options that let you control various aspects of generation through the xref:/api/chatmodel.adoc#_chat_options[ChatOptions] builder.
+Find detailed information for enabling each model in the xref::api/chatmodel.adoc[reference docs].
+
+Before we dive into prompt engineering techniques, it's essential to understand how to configure the LLM's output behavior. Spring AI provides several configuration options that let you control various aspects of generation through the xref::api/chatmodel.adoc#_chat_options[ChatOptions] builder.
 
 All configurations can be applied programmatically as demonstrated in the examples below or through Spring application properties at start time.
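The hunk above notes that every programmatic `ChatOptions` setting has a startup-time equivalent in Spring application properties. A minimal sketch, assuming the Anthropic starter is on the classpath; the property keys follow Spring AI's `spring.ai.<provider>.chat.options.*` pattern, and the model name and temperature values are illustrative:

```
# application.properties — hypothetical equivalents of the programmatic ChatOptions builder
spring.ai.anthropic.chat.options.model=claude-3-7-sonnet-latest
spring.ai.anthropic.chat.options.temperature=0.1
```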
@@ -202,7 +202,7 @@ One-shot provides a single example, which is useful when examples are costly or
 [source,java]
 ----
 // Implementation of Section 2.2: One-shot & few-shot (page 16)
-public void pt_ones_shot_few_shots(ChatClient chatClient) {
+public void pt_one_shot_few_shots(ChatClient chatClient) {
 	String pizzaOrder = chatClient.prompt("""
 			Parse a customer's pizza order into valid JSON
@@ -213,7 +213,7 @@ public void pt_ones_shot_few_shots(ChatClient chatClient) {
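The renamed method above demonstrates one-shot/few-shot prompting through `ChatClient`. Independent of Spring AI, the core of the technique is just prepending worked input/output examples to the task instruction so the model imitates their format. A self-contained sketch in plain Java; the helper name, example orders, and JSON shapes are illustrative, not taken from the guide:

```java
// Few-shot prompting sketched without any Spring AI dependency: the prompt is
// an instruction, followed by worked input/output examples, followed by the new input.
public class FewShotPromptDemo {

	// Assemble a few-shot prompt from an instruction, example (input, output) pairs,
	// and the new input to be parsed.
	static String fewShotPrompt(String instruction, String[][] examples, String input) {
		StringBuilder sb = new StringBuilder(instruction).append("\n\n");
		for (int i = 0; i < examples.length; i++) {
			sb.append("EXAMPLE ").append(i + 1).append(":\n")
				.append("Order: ").append(examples[i][0]).append('\n')
				.append("JSON: ").append(examples[i][1]).append("\n\n");
		}
		return sb.append("Now parse this order:\n").append(input).toString();
	}

	public static void main(String[] args) {
		String prompt = fewShotPrompt(
				"Parse a customer's pizza order into valid JSON",
				new String[][] {
						{ "small pizza with cheese", "{\"size\":\"small\",\"ingredients\":[\"cheese\"]}" },
						{ "large pizza with ham", "{\"size\":\"large\",\"ingredients\":[\"ham\"]}" }
				},
				"medium pizza with mushrooms");
		System.out.println(prompt);
	}
}
```

With one example pair this is one-shot; with several it is few-shot. The assembled string would be what you pass to `chatClient.prompt(...)` in the Spring AI version.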