# Toolsets #2024
base: main
Changes from 87 commits
@@ -29,7 +29,7 @@ Examples of both are shown below; [mcp-run-python](run-python.md) is used as the
 [`MCPServerSSE`][pydantic_ai.mcp.MCPServerSSE] connects over HTTP using the [HTTP + Server Sent Events transport](https://spec.modelcontextprotocol.io/specification/2024-11-05/basic/transports/#http-with-sse) to a server.

 !!! note
-    [`MCPServerSSE`][pydantic_ai.mcp.MCPServerSSE] requires an MCP server to be running and accepting HTTP connections before calling [`agent.run_mcp_servers()`][pydantic_ai.Agent.run_mcp_servers]. Running the server is not managed by PydanticAI.
+    [`MCPServerSSE`][pydantic_ai.mcp.MCPServerSSE] requires an MCP server to be running and accepting HTTP connections before calling [`agent.run_toolsets()`][pydantic_ai.Agent.run_toolsets]. Running the server is not managed by PydanticAI.

 The name "HTTP" is used since this implementation will be adapted in future to use the new
 [Streamable HTTP](https://github.com/modelcontextprotocol/specification/pull/206) currently in development.
@@ -47,11 +47,11 @@ from pydantic_ai import Agent
 from pydantic_ai.mcp import MCPServerSSE

 server = MCPServerSSE(url='http://localhost:3001/sse')  # (1)!
-agent = Agent('openai:gpt-4o', mcp_servers=[server])  # (2)!
+agent = Agent('openai:gpt-4o', toolsets=[server])  # (2)!
There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more. Just for my naive understanding, mcp servers can provide more concepts than just tools (e.g. resources). I don't know if PydanticAI concepts exists for these other concepts; if it is the case (or will be in the future), how are we going to make use of them? There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more. @Viicos That's a good point. Curious what Samuel etc think. For MCP resources, I think we're OK because "Resources are designed to be application-controlled, meaning that the client application can decide how and when they should be used." In the case of Pydantic AI, that would mean the user code has to explicitly do things like For MCP prompts, I'd expect the user to similarly explicitly need to call I'd expect other future MCP features to also not have the same "auto-use" dynamic that MCP tools do. So I'd interpret this less as "an MCP server is just a toolset now", and more "an MCP server can be used directly as a toolset, and other things". The only thing users would lack if they don't register it as a toolset would be automatic entering of the context when the agent context is entered, which I think is acceptable. |

 async def main():
-    async with agent.run_mcp_servers():  # (3)!
+    async with agent.run_toolsets():  # (3)!
         result = await agent.run('How many days between 2000-01-01 and 2025-03-18?')
         print(result.output)
         #> There are 9,208 days between January 1, 2000, and March 18, 2025.
@@ -93,7 +93,7 @@ Will display as follows:
 !!! note
     [`MCPServerStreamableHTTP`][pydantic_ai.mcp.MCPServerStreamableHTTP] requires an MCP server to be
     running and accepting HTTP connections before calling
-    [`agent.run_mcp_servers()`][pydantic_ai.Agent.run_mcp_servers]. Running the server is not
+    [`agent.run_toolsets()`][pydantic_ai.Agent.run_toolsets]. Running the server is not
     managed by PydanticAI.

 Before creating the Streamable HTTP client, we need to run a server that supports the Streamable HTTP transport.
@@ -118,10 +118,10 @@ from pydantic_ai import Agent
 from pydantic_ai.mcp import MCPServerStreamableHTTP

 server = MCPServerStreamableHTTP('http://localhost:8000/mcp')  # (1)!
-agent = Agent('openai:gpt-4o', mcp_servers=[server])  # (2)!
+agent = Agent('openai:gpt-4o', toolsets=[server])  # (2)!

 async def main():
-    async with agent.run_mcp_servers():  # (3)!
+    async with agent.run_toolsets():  # (3)!
         result = await agent.run('How many days between 2000-01-01 and 2025-03-18?')
         print(result.output)
         #> There are 9,208 days between January 1, 2000, and March 18, 2025.
@@ -138,7 +138,7 @@ _(This example is complete, it can be run "as is" with Python 3.10+ — you'll n
 The other transport offered by MCP is the [stdio transport](https://spec.modelcontextprotocol.io/specification/2024-11-05/basic/transports/#stdio) where the server is run as a subprocess and communicates with the client over `stdin` and `stdout`. In this case, you'd use the [`MCPServerStdio`][pydantic_ai.mcp.MCPServerStdio] class.

 !!! note
-    When using [`MCPServerStdio`][pydantic_ai.mcp.MCPServerStdio] servers, the [`agent.run_mcp_servers()`][pydantic_ai.Agent.run_mcp_servers] context manager is responsible for starting and stopping the server.
+    When using [`MCPServerStdio`][pydantic_ai.mcp.MCPServerStdio] servers, the [`agent.run_toolsets()`][pydantic_ai.Agent.run_toolsets] context manager is responsible for starting and stopping the server.

 ```python {title="mcp_stdio_client.py" py="3.10"}
 from pydantic_ai import Agent

@@ -156,11 +156,11 @@ server = MCPServerStdio(  # (1)!
         'stdio',
     ]
 )
-agent = Agent('openai:gpt-4o', mcp_servers=[server])
+agent = Agent('openai:gpt-4o', toolsets=[server])


 async def main():
-    async with agent.run_mcp_servers():
+    async with agent.run_toolsets():
         result = await agent.run('How many days between 2000-01-01 and 2025-03-18?')
         print(result.output)
         #> There are 9,208 days between January 1, 2000, and March 18, 2025.
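As the note says, `run_toolsets()` is responsible for starting and stopping stdio servers. Conceptually this is the lifecycle an async context-manager stack provides: enter every server before the run, exit them all in reverse order afterwards. The sketch below is purely illustrative (`toolset` is a hypothetical stand-in, not PydanticAI's actual implementation):

```python
import asyncio
from contextlib import AsyncExitStack, asynccontextmanager

events: list[str] = []

@asynccontextmanager
async def toolset(name: str):
    # Stand-in for starting/stopping an MCP server subprocess (hypothetical).
    events.append(f'start {name}')
    try:
        yield name
    finally:
        events.append(f'stop {name}')

async def main():
    # run_toolsets()-style lifecycle: enter all servers, exit in reverse order.
    async with AsyncExitStack() as stack:
        for name in ('weather', 'calc'):
            await stack.enter_async_context(toolset(name))
        events.append('agent.run(...)')

asyncio.run(main())
print(events)
#> ['start weather', 'start calc', 'agent.run(...)', 'stop calc', 'stop weather']
```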
@@ -180,31 +180,32 @@ call needs.
 from typing import Any

 from pydantic_ai import Agent
-from pydantic_ai.mcp import CallToolFunc, MCPServerStdio, ToolResult
+from pydantic_ai.mcp import MCPServerStdio, ToolResult
 from pydantic_ai.models.test import TestModel
 from pydantic_ai.tools import RunContext
+from pydantic_ai.toolsets.processed import CallToolFunc


 async def process_tool_call(
     ctx: RunContext[int],
     call_tool: CallToolFunc,
-    tool_name: str,
-    args: dict[str, Any],
+    name: str,
+    tool_args: dict[str, Any],
 ) -> ToolResult:
     """A tool call processor that passes along the deps."""
-    return await call_tool(tool_name, args, metadata={'deps': ctx.deps})
+    return await call_tool(name, tool_args, metadata={'deps': ctx.deps})


 server = MCPServerStdio('python', ['mcp_server.py'], process_tool_call=process_tool_call)
 agent = Agent(
     model=TestModel(call_tools=['echo_deps']),
     deps_type=int,
-    mcp_servers=[server]
+    toolsets=[server]
 )


 async def main():
-    async with agent.run_mcp_servers():
+    async with agent.run_toolsets():
         result = await agent.run('Echo with deps set to 42', deps=42)
         print(result.output)
         #> {"echo_deps":{"echo":"This is an echo message","deps":42}}
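Stripped of the PydanticAI types, the `process_tool_call` hook above is a plain pass-through wrapper that injects the run's deps as call metadata. The following self-contained sketch illustrates the pattern; `fake_call_tool` and the simplified signature are stand-ins for this illustration, not PydanticAI's API:

```python
import asyncio
from typing import Any

# Hypothetical stand-in for the PydanticAI ToolResult type.
ToolResult = dict[str, Any]

async def fake_call_tool(name, tool_args, metadata=None):
    # Pretend MCP server: echo the call back, including the metadata it received.
    return {'tool': name, 'args': tool_args, 'metadata': metadata or {}}

async def process_tool_call(deps: int, call_tool, name: str, tool_args: dict[str, Any]) -> ToolResult:
    # Pass-through wrapper that threads the run's deps into the tool call.
    return await call_tool(name, tool_args, metadata={'deps': deps})

result = asyncio.run(process_tool_call(42, fake_call_tool, 'echo', {'msg': 'hi'}))
print(result)
#> {'tool': 'echo', 'args': {'msg': 'hi'}, 'metadata': {'deps': 42}}
```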
@@ -242,7 +243,7 @@ calculator_server = MCPServerSSE(
 # Both servers might have a tool named 'get_data', but they'll be exposed as:
 # - 'weather_get_data'
 # - 'calc_get_data'
-agent = Agent('openai:gpt-4o', mcp_servers=[weather_server, calculator_server])
+agent = Agent('openai:gpt-4o', toolsets=[weather_server, calculator_server])
 ```

### Example with Stdio Server
@@ -272,7 +273,7 @@ js_server = MCPServerStdio(
     tool_prefix='js'  # Tools will be prefixed with 'js_'
 )

-agent = Agent('openai:gpt-4o', mcp_servers=[python_server, js_server])
+agent = Agent('openai:gpt-4o', toolsets=[python_server, js_server])
 ```

When the model interacts with these servers, it will see the prefixed tool names, but the prefixes will be automatically handled when making tool calls.
@@ -359,11 +360,11 @@ from pydantic_ai import Agent
 from pydantic_ai.mcp import MCPServerStdio

 server = MCPServerStdio(command='python', args=['generate_svg.py'])
-agent = Agent('openai:gpt-4o', mcp_servers=[server])
+agent = Agent('openai:gpt-4o', toolsets=[server])


 async def main():
-    async with agent.run_mcp_servers():
+    async with agent.run_toolsets():
         result = await agent.run('Create an image of a robot in a punk style.')
         print(result.output)
         #> Image file written to robot_punk.svg.