
Commit 7260360

Add instructions parameter (#1360)
Authored by Kludex and dmontagu
Co-authored-by: David Montague <35119617+dmontagu@users.noreply.github.com>
Parent: eec1532

32 files changed: +1139 −128 lines

docs/agents.md

Lines changed: 46 additions & 0 deletions
@@ -123,6 +123,8 @@ async def main():
     [
         UserPromptNode(
             user_prompt='What is the capital of France?',
+            instructions=None,
+            instructions_functions=[],
             system_prompts=(),
             system_prompt_functions=[],
             system_prompt_dynamic_functions={},
@@ -136,6 +138,7 @@ async def main():
                 part_kind='user-prompt',
             )
         ],
+        instructions=None,
         kind='request',
     )
 ),
@@ -184,6 +187,8 @@ async def main():
     [
         UserPromptNode(
             user_prompt='What is the capital of France?',
+            instructions=None,
+            instructions_functions=[],
             system_prompts=(),
             system_prompt_functions=[],
             system_prompt_dynamic_functions={},
@@ -197,6 +202,7 @@ async def main():
                 part_kind='user-prompt',
             )
         ],
+        instructions=None,
         kind='request',
     )
 ),
@@ -612,6 +618,14 @@ Running `pyright` would identify the same issues.
 
 System prompts might seem simple at first glance since they're just strings (or sequences of strings that are concatenated), but crafting the right system prompt is key to getting the model to behave as you want.
 
+!!! tip
+    For most use cases, you should use `instructions` instead of "system prompts".
+
+    If you know what you are doing though and want to preserve system prompt messages in the message history sent to the
+    LLM in subsequent completions requests, you can achieve this using the `system_prompt` argument/decorator.
+
+    See the section below on [Instructions](#instructions) for more information.
+
 Generally, system prompts fall into two categories:
 
 1. **Static system prompts**: These are known when writing the code and can be defined via the `system_prompt` parameter of the [`Agent` constructor][pydantic_ai.Agent.__init__].
@@ -655,6 +669,36 @@ print(result.output)
 
 _(This example is complete, it can be run "as is")_
 
+## Instructions
+
+Instructions are similar to system prompts. The main difference is that when an explicit `message_history` is provided
+in a call to `Agent.run` and similar methods, _instructions_ from any existing messages in the history are not included
+in the request to the model — only the instructions of the _current_ agent are included.
+
+You should use:
+
+- `instructions` when you want your request to the model to only include system prompts for the _current_ agent
+- `system_prompt` when you want your request to the model to _retain_ the system prompts used in previous requests (possibly made using other agents)
+
+In general, we recommend using `instructions` instead of `system_prompt` unless you have a specific reason to use `system_prompt`.
+
+```python {title="instructions.py"}
+from pydantic_ai import Agent
+
+agent = Agent(
+    'openai:gpt-4o',
+    instructions='You are a helpful assistant that can answer questions and help with tasks.',  # (1)!
+)
+
+result = agent.run_sync('What is the capital of France?')
+print(result.output)
+#> Paris
+```
+
+1. This will be the only instructions for this agent.
+
+_(This example is complete, it can be run "as is")_
+
 ## Reflection and self-correction
 
 Validation errors from both function tool parameter validation and [structured output validation](output.md#structured-output) can be passed back to the model with a request to retry.
@@ -749,6 +793,7 @@ with capture_run_messages() as messages:  # (2)!
             part_kind='user-prompt',
         )
     ],
+    instructions=None,
     kind='request',
 ),
 ModelResponse(
@@ -774,6 +819,7 @@ async def main():
             part_kind='retry-prompt',
         )
     ],
+    instructions=None,
     kind='request',
 ),
 ModelResponse(

docs/api/models/function.md

Lines changed: 1 addition & 0 deletions
@@ -31,6 +31,7 @@ async def model_function(
             part_kind='user-prompt',
         )
     ],
+    instructions=None,
     kind='request',
 )
]

docs/message-history.md

Lines changed: 7 additions & 0 deletions
@@ -54,6 +54,7 @@ print(result.all_messages())
             part_kind='user-prompt',
         ),
     ],
+    instructions=None,
     kind='request',
 ),
 ModelResponse(
@@ -100,6 +101,7 @@ async def main():
             part_kind='user-prompt',
         ),
     ],
+    instructions=None,
     kind='request',
 )
]
@@ -130,6 +132,7 @@ async def main():
             part_kind='user-prompt',
         ),
     ],
+    instructions=None,
     kind='request',
 ),
 ModelResponse(
@@ -188,6 +191,7 @@ print(result2.all_messages())
             part_kind='user-prompt',
         ),
     ],
+    instructions=None,
     kind='request',
 ),
 ModelResponse(
@@ -209,6 +213,7 @@ print(result2.all_messages())
             part_kind='user-prompt',
         )
     ],
+    instructions=None,
     kind='request',
 ),
 ModelResponse(
@@ -314,6 +319,7 @@ print(result2.all_messages())
             part_kind='user-prompt',
         ),
     ],
+    instructions=None,
     kind='request',
 ),
 ModelResponse(
@@ -335,6 +341,7 @@ print(result2.all_messages())
             part_kind='user-prompt',
         )
     ],
+    instructions=None,
     kind='request',
 ),
 ModelResponse(

docs/tools.md

Lines changed: 3 additions & 0 deletions
@@ -82,6 +82,7 @@ print(dice_result.all_messages())
             part_kind='user-prompt',
         ),
     ],
+    instructions=None,
     kind='request',
 ),
 ModelResponse(
@@ -107,6 +108,7 @@ print(dice_result.all_messages())
             part_kind='tool-return',
         )
     ],
+    instructions=None,
     kind='request',
 ),
 ModelResponse(
@@ -132,6 +134,7 @@ print(dice_result.all_messages())
             part_kind='tool-return',
         )
     ],
+    instructions=None,
     kind='request',
 ),
 ModelResponse(

pydantic_ai_slim/pydantic_ai/_agent_graph.py

Lines changed: 14 additions & 1 deletion
@@ -125,6 +125,9 @@ def is_agent_node(
 class UserPromptNode(AgentNode[DepsT, NodeRunEndT]):
     user_prompt: str | Sequence[_messages.UserContent] | None
 
+    instructions: str | None
+    instructions_functions: list[_system_prompt.SystemPromptRunner[DepsT]]
+
     system_prompts: tuple[str, ...]
     system_prompt_functions: list[_system_prompt.SystemPromptRunner[DepsT]]
     system_prompt_dynamic_functions: dict[str, _system_prompt.SystemPromptRunner[DepsT]]
@@ -166,6 +169,7 @@ async def _prepare_messages(
         ctx_messages.used = True
 
         parts: list[_messages.ModelRequestPart] = []
+        instructions = await self._instructions(run_context)
         if message_history:
             # Shallow copy messages
             messages.extend(message_history)
@@ -176,7 +180,7 @@
 
         if user_prompt is not None:
             parts.append(_messages.UserPromptPart(user_prompt))
-        return messages, _messages.ModelRequest(parts)
+        return messages, _messages.ModelRequest(parts, instructions=instructions)
 
     async def _reevaluate_dynamic_prompts(
         self, messages: list[_messages.ModelMessage], run_context: RunContext[DepsT]
@@ -206,6 +210,15 @@ async def _sys_parts(self, run_context: RunContext[DepsT]) -> list[_messages.Mod
             messages.append(_messages.SystemPromptPart(prompt))
         return messages
 
+    async def _instructions(self, run_context: RunContext[DepsT]) -> str | None:
+        if self.instructions is None and not self.instructions_functions:
+            return None
+
+        instructions = self.instructions or ''
+        for instructions_runner in self.instructions_functions:
+            instructions += await instructions_runner.run(run_context)
+        return instructions
+
 
 async def _prepare_request_parameters(
     ctx: GraphRunContext[GraphAgentState, GraphAgentDeps[DepsT, NodeRunEndT]],
