Commit acddb8d

Add sampling_model to Agent __init__, iter, run (etc), and override, pass sampling_model to MCPServer through RunContext, and make Agent an async contextmanager instead of run_toolsets
1 parent badbe23 commit acddb8d

12 files changed: +155 -96 lines

12 files changed

+155
-96
lines changed
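Taken together, the commit makes the `Agent` itself the async context manager previously provided by `agent.run_toolsets()`, and threads a `sampling_model` from the agent through `RunContext` to MCP servers. A minimal usage sketch under two assumptions: the server command shown is a placeholder, and `sampling_model` accepts the same model-name strings as `model`:

```python
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStdio

# Placeholder MCP server command; substitute a real server for your setup.
server = MCPServerStdio(command='python', args=['mcp_server.py'])

agent = Agent(
    'openai:gpt-4o',
    toolsets=[server],
    sampling_model='openai:gpt-4o-mini',  # new in this commit; string form is an assumption
)


async def main():
    async with agent:  # replaces the removed `agent.run_toolsets()` context manager
        result = await agent.run('How many days between 2000-01-01 and 2025-03-18?')
        print(result.output)
```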

docs/mcp/client.md

Lines changed: 9 additions & 10 deletions
@@ -29,7 +29,7 @@ Examples of both are shown below; [mcp-run-python](run-python.md) is used as the
 [`MCPServerSSE`][pydantic_ai.mcp.MCPServerSSE] connects over HTTP using the [HTTP + Server Sent Events transport](https://spec.modelcontextprotocol.io/specification/2024-11-05/basic/transports/#http-with-sse) to a server.
 
 !!! note
-    [`MCPServerSSE`][pydantic_ai.mcp.MCPServerSSE] requires an MCP server to be running and accepting HTTP connections before calling [`agent.run_toolsets()`][pydantic_ai.Agent.run_toolsets]. Running the server is not managed by PydanticAI.
+    [`MCPServerSSE`][pydantic_ai.mcp.MCPServerSSE] requires an MCP server to be running and accepting HTTP connections before running the agent. Running the server is not managed by Pydantic AI.
 
 The name "HTTP" is used since this implementation will be adapted in future to use the new
 [Streamable HTTP](https://github.com/modelcontextprotocol/specification/pull/206) currently in development.
@@ -51,7 +51,7 @@ agent = Agent('openai:gpt-4o', toolsets=[server])  # (2)!
 
 
 async def main():
-    async with agent.run_toolsets():  # (3)!
+    async with agent:  # (3)!
         result = await agent.run('How many days between 2000-01-01 and 2025-03-18?')
         print(result.output)
         #> There are 9,208 days between January 1, 2000, and March 18, 2025.
@@ -92,9 +92,8 @@ Will display as follows:
 
 !!! note
     [`MCPServerStreamableHTTP`][pydantic_ai.mcp.MCPServerStreamableHTTP] requires an MCP server to be
-    running and accepting HTTP connections before calling
-    [`agent.run_toolsets()`][pydantic_ai.Agent.run_toolsets]. Running the server is not
-    managed by PydanticAI.
+    running and accepting HTTP connections before running the agent. Running the server is not
+    managed by Pydantic AI.
 
 Before creating the Streamable HTTP client, we need to run a server that supports the Streamable HTTP transport.
 
@@ -121,7 +120,7 @@ server = MCPServerStreamableHTTP('http://localhost:8000/mcp')  # (1)!
 agent = Agent('openai:gpt-4o', toolsets=[server])  # (2)!
 
 async def main():
-    async with agent.run_toolsets():  # (3)!
+    async with agent:  # (3)!
         result = await agent.run('How many days between 2000-01-01 and 2025-03-18?')
         print(result.output)
         #> There are 9,208 days between January 1, 2000, and March 18, 2025.
@@ -138,7 +137,7 @@ _(This example is complete, it can be run "as is" with Python 3.10+ — you'll n
 The other transport offered by MCP is the [stdio transport](https://spec.modelcontextprotocol.io/specification/2024-11-05/basic/transports/#stdio) where the server is run as a subprocess and communicates with the client over `stdin` and `stdout`. In this case, you'd use the [`MCPServerStdio`][pydantic_ai.mcp.MCPServerStdio] class.
 
 !!! note
-    When using [`MCPServerStdio`][pydantic_ai.mcp.MCPServerStdio] servers, the [`agent.run_toolsets()`][pydantic_ai.Agent.run_toolsets] context manager is responsible for starting and stopping the server.
+    When using [`MCPServerStdio`][pydantic_ai.mcp.MCPServerStdio] servers, the [`async with agent`][pydantic_ai.Agent.__aenter__] context manager is responsible for starting and stopping the server.
 
 ```python {title="mcp_stdio_client.py" py="3.10"}
 from pydantic_ai import Agent
@@ -160,7 +159,7 @@ agent = Agent('openai:gpt-4o', toolsets=[server])
 
 
 async def main():
-    async with agent.run_toolsets():
+    async with agent:
         result = await agent.run('How many days between 2000-01-01 and 2025-03-18?')
         print(result.output)
         #> There are 9,208 days between January 1, 2000, and March 18, 2025.
@@ -205,7 +204,7 @@ agent = Agent(
 
 
 async def main():
-    async with agent.run_toolsets():
+    async with agent:
         result = await agent.run('Echo with deps set to 42', deps=42)
         print(result.output)
         #> {"echo_deps":{"echo":"This is an echo message","deps":42}}
@@ -364,7 +363,7 @@ agent = Agent('openai:gpt-4o', toolsets=[server])
 
 
 async def main():
-    async with agent.run_toolsets():
+    async with agent:
         result = await agent.run('Create an image of a robot in a punk style.')
         print(result.output)
         #> Image file written to robot_punk.svg.
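Per the updated note above, entering `async with agent:` is now what starts an `MCPServerStdio` subprocess and exiting stops it, which suggests several runs can share one server process. A small sketch under that assumption (agent construction as in the examples above):

```python
async def run_twice(agent):
    # Entering the agent starts any MCPServerStdio toolsets; leaving the block stops them,
    # so both runs below reuse the same server subprocess.
    async with agent:
        first = await agent.run('How many days between 2000-01-01 and 2025-03-18?')
        second = await agent.run(
            'And how many between 2025-03-18 and 2030-01-01?',
            message_history=first.new_messages(),
        )
    return second.output
```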

mcp-run-python/README.md

Lines changed: 1 addition & 1 deletion
@@ -56,7 +56,7 @@ agent = Agent('claude-3-5-haiku-latest', toolsets=[server])
 
 
 async def main():
-    async with agent.run_toolsets():
+    async with agent:
         result = await agent.run('How many days between 2000-01-01 and 2025-03-18?')
         print(result.output)
         #> There are 9,208 days between January 1, 2000, and March 18, 2025.

pydantic_ai_slim/pydantic_ai/_agent_graph.py

Lines changed: 2 additions & 0 deletions
@@ -115,6 +115,7 @@ class GraphAgentDeps(Generic[DepsT, OutputDataT]):
     history_processors: Sequence[HistoryProcessor[DepsT]]
 
     toolset: RunToolset[DepsT]
+    sampling_model: models.Model
 
     tracer: Tracer
     instrumentation_settings: InstrumentationSettings | None = None
@@ -561,6 +562,7 @@ def build_run_context(ctx: GraphRunContext[GraphAgentState, GraphAgentDeps[DepsT
         deps=ctx.deps.user_deps,
         model=ctx.deps.model,
         usage=ctx.state.usage,
+        sampling_model=ctx.deps.sampling_model,
         prompt=ctx.deps.prompt,
         messages=ctx.state.message_history,
         run_step=ctx.state.run_step,
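The plumbing above follows one pattern: store the value on the graph-level deps dataclass, then copy it onto the per-step `RunContext` in `build_run_context`. A stripped-down illustration of that pattern with stand-in classes (not the real `GraphAgentDeps`/`RunContext`):

```python
from dataclasses import dataclass


@dataclass
class GraphDeps:  # stand-in for GraphAgentDeps
    model: str
    sampling_model: str


@dataclass
class StepContext:  # stand-in for RunContext
    model: str
    sampling_model: str


def build_step_context(deps: GraphDeps) -> StepContext:
    # Mirrors build_run_context: per-run values are copied from the graph deps
    # onto the context object handed to tools and toolsets.
    return StepContext(model=deps.model, sampling_model=deps.sampling_model)
```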

pydantic_ai_slim/pydantic_ai/_run_context.py

Lines changed: 2 additions & 0 deletions
@@ -28,6 +28,8 @@ class RunContext(Generic[AgentDepsT]):
     """The model used in this run."""
     usage: Usage
     """LLM usage associated with the run."""
+    sampling_model: Model
+    """The model used for MCP sampling."""
     prompt: str | Sequence[_messages.UserContent] | None = None
     """The original user prompt passed to the run."""
     messages: list[_messages.ModelMessage] = field(default_factory=list)
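With the new field, anything that receives a `RunContext` (tools, toolsets, and the MCP servers this commit targets) can see which model MCP sampling requests will be routed to, alongside the run model. A minimal sketch of a hypothetical tool body, assuming `model_name` is the right attribute for a printable name:

```python
from pydantic_ai import RunContext


def report_models(ctx: RunContext[None]) -> str:
    """Hypothetical tool contrasting the run model with the MCP sampling model."""
    # Both attributes are Model instances; `model_name` is assumed here.
    return f'run model: {ctx.model.model_name}, sampling model: {ctx.sampling_model.model_name}'
```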
