Toolsets #2024
base: main
Conversation
Commits:
- Merge commit; conflicts: tests/models/test_openai.py
- Merge commit; conflicts: pydantic_ai_slim/pydantic_ai/_output.py, pydantic_ai_slim/pydantic_ai/models/openai.py, tests/models/test_openai.py
- ….)', less 'output_schema.mode == ...'
- Merge commit; conflicts: pydantic_ai_slim/pydantic_ai/models/openai.py, pydantic_ai_slim/pydantic_ai/profiles/openai.py, tests/models/test_google.py, tests/models/test_openai_responses.py
- Merge commit; conflicts: pydantic_ai_slim/pydantic_ai/_utils.py, pydantic_ai_slim/pydantic_ai/agent.py, tests/models/test_anthropic.py, tests/test_utils.py
- Merge commit; conflicts: pydantic_ai_slim/pydantic_ai/agent.py, pydantic_ai_slim/pydantic_ai/mcp.py
- …pass sampling_model to MCPServer through RunContext, and make Agent an async context manager instead of run_toolsets
- … being stored on read broke a test
- …gent.set_mcp_sampling_model
- …egy stops later tools from being called
- Merge commit; conflicts: pydantic_ai_slim/pydantic_ai/agent.py, pydantic_ai_slim/pydantic_ai/mcp.py, tests/test_examples.py
@@ -464,4 +362,12 @@ class ToolDefinition:
    Note: this is currently only supported by OpenAI models.
    """

    kind: ToolKind = field(default='function')
    """The kind of tool:
"""The kind of tool: | |
"""The kind of tool: | |
Otherwise list doesn't display properly.
@@ -47,11 +47,11 @@ from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerSSE


server = MCPServerSSE(url='http://localhost:3001/sse')  # (1)!
- agent = Agent('openai:gpt-4o', mcp_servers=[server])  # (2)!
+ agent = Agent('openai:gpt-4o', toolsets=[server])  # (2)!
Just for my naive understanding: MCP servers can provide more concepts than just tools (e.g. resources). I don't know if Pydantic AI concepts exist for these other concepts; if they do (or will in the future), how are we going to make use of them?
@Viicos That's a good point. Curious what Samuel etc. think.

For MCP resources, I think we're OK because "Resources are designed to be application-controlled, meaning that the client application can decide how and when they should be used." In the case of Pydantic AI, that means the user code has to explicitly do things like `server.list_resources()` and `server.read_resource(...)` (possibly from a tool or dynamic instructions function), and there's no automatic resources-related behavior they'd get just from registering the MCP server with the agent. So if they don't want to register it as a toolset, they can still use it directly. They'd be recommended to enter its context manually to ensure the server is running, instead of getting this automatically from entering the agent context, but this PR also has the MCP server context entered automatically when it's needed (to prepare for Temporal, where every "activity", i.e. anything that does IO such as a tool call, runs in an isolated context and they can't share a connection anyway).

For MCP prompts, I'd expect the user to similarly need to explicitly call `server.get_prompt(...)` from e.g. a dynamic instructions function (or just when building the agent).

I'd expect other future MCP features to also not have the same "auto-use" dynamic that MCP tools do.

So I'd interpret this less as "an MCP server is just a toolset now", and more as "an MCP server can be used directly as a toolset, and other things". The only thing users would lack if they don't register it as a toolset is automatic entering of the context when the agent context is entered, which I think is acceptable.
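A minimal sketch of that application-controlled pattern, assuming the `list_resources()` / `read_resource()` methods mentioned above; they're used here purely for illustration, and the actual `MCPServer` API may differ:

```python
from pydantic_ai.mcp import MCPServerSSE

server = MCPServerSSE(url='http://localhost:3001/sse')


async def load_reference_material() -> str:
    # Enter the server's context manually so it's running even if it was
    # never registered as a toolset on the agent.
    async with server:
        resources = await server.list_resources()  # hypothetical method
        readme = await server.read_resource(resources[0].uri)  # hypothetical method
        return str(readme)
```

The result would then be fed into instructions or a tool by the user's own code; nothing happens automatically.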
async def main():
-    async with agent.run_mcp_servers():  # (3)!
+    async with agent:  # (3)!
What was the motivation to make the context manager "implicit" here? Doesn't it feel weird to have `Agent.__aenter__()` reserved for MCP logic?
We also intend to use it for other things, like httpx clients: #1695 (comment)
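A short sketch of the new shape, reusing the agent from the docs example above; entering the agent enters whatever its toolsets need (MCP servers today, potentially things like shared httpx clients later):

```python
import asyncio

from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerSSE

server = MCPServerSSE(url='http://localhost:3001/sse')
agent = Agent('openai:gpt-4o', toolsets=[server])


async def main():
    async with agent:  # replaces the old agent.run_mcp_servers()
        result = await agent.run('How many days between 2000-01-01 and 2025-03-18?')
    print(result.output)


asyncio.run(main())
```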
""" | ||
span_attributes = { |
An idea that might be worth exploring in the future: I'm wondering if attributes could be lazily computed if a `NoOpTracer` is used (i.e. you don't have instrumentation set up). Most likely the runtime overhead isn't too big, but it would be interesting to investigate.
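One possible shape for that idea, sketched with the OpenTelemetry API; `build_span_attributes` is a hypothetical stand-in for the existing attribute computation:

```python
from opentelemetry import trace


def build_span_attributes() -> dict[str, str]:
    # Stand-in for the potentially expensive attribute computation.
    return {'gen_ai.tool.name': 'example'}


tracer = trace.get_tracer('pydantic-ai')

with tracer.start_as_current_span('running tool') as span:
    # Non-recording spans (e.g. when no instrumentation is configured and a
    # no-op tracer provider is in place) return False from is_recording(),
    # so the attribute dict is never built in that case.
    if span.is_recording():
        span.set_attributes(build_span_attributes())
```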
class DeferredToolset(AbstractToolset[AgentDepsT]):
    """A toolset that holds deferred tool."""
"""A toolset that holds deferred tool.""" | |
"""A toolset that holds deferred tools.""" |
        return self.__class__.__name__.replace('Toolset', ' toolset')

    @property
    def tool_name_conflict_hint(self) -> str:
Maybe make this property private?
async def call_tool(self, call: ToolCallPart, ctx: RunContext[AgentDepsT], allow_partial: bool = False) -> Any:
    ctx = replace(ctx, tool_name=call.tool_name, tool_call_id=call.tool_call_id)

    pyd_allow_partial: Literal['off', 'trailing-strings'] = 'trailing-strings' if allow_partial else 'off'
Pyright should already infer the type as a literal.

Suggested change:
- pyd_allow_partial: Literal['off', 'trailing-strings'] = 'trailing-strings' if allow_partial else 'off'
+ pyd_allow_partial = 'trailing-strings' if allow_partial else 'off'
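For reference, a small sketch of the reviewer's point: pyright narrows the unannotated conditional expression to a literal union, so the call below type-checks without the explicit annotation (mypy may be stricter here):

```python
from typing import Literal


def validate(allow_partial: Literal['off', 'trailing-strings']) -> None: ...


streaming = True
# Pyright narrows this assignment to Literal['trailing-strings', 'off'],
# so no explicit annotation is needed for the call below to type-check.
pyd_allow_partial = 'trailing-strings' if streaming else 'off'
validate(pyd_allow_partial)
```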
async def __aexit__(self, *args: Any) -> bool | None:
    self._entered_count -= 1
    if self._entered_count <= 0 and self._exit_stack is not None:
Logically it should never be < 0?
Suggested change:
- if self._entered_count <= 0 and self._exit_stack is not None:
+ if self._entered_count == 0 and self._exit_stack is not None:
Copied from MCPServer's implementation, but agreed :)
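For context, a generic sketch of the reference-counted enter/exit pattern under discussion (not the actual MCPServer/toolset code): the exit stack is only closed when the outermost exit brings the count back to zero, and with balanced enters/exits the count never goes below zero, so `== 0` expresses the intent.

```python
from contextlib import AsyncExitStack
from typing import Any


class RefCountedResource:
    """Enter the underlying resources once, no matter how many nested `async with` blocks."""

    def __init__(self) -> None:
        self._entered_count = 0
        self._exit_stack: AsyncExitStack | None = None

    async def __aenter__(self) -> 'RefCountedResource':
        if self._entered_count == 0:
            self._exit_stack = AsyncExitStack()
            # Real code would enter connections, subprocesses, etc. on the stack here.
        self._entered_count += 1
        return self

    async def __aexit__(self, *args: Any) -> bool | None:
        self._entered_count -= 1
        if self._entered_count == 0 and self._exit_stack is not None:
            await self._exit_stack.aclose()
            self._exit_stack = None
        return None
```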
Toolset, take 3 #1973

To fix:
- `MCPToolset`

To improve:
- Make `MCPServer` a toolset, instead of needing `MCPServer.as_toolset()`
- `TODO` comments
- `DeferredToolCalls` with `run_stream`

To add:
- `run`/`iter` etc.: should override `toolsets` passed to `Agent`, but not output tools, `@agent.tool` tools, `Agent(tools=...)` tools, or `Agent(mcp_servers=...)` tools. If you want those to be overridden, you should pass them as `Agent(toolsets=FunctionToolset(...))` (see the sketch after this list)
- Toolset, take 3 #1973 (comment)
- `to_chat_completions` method #2041
- `Agent(prepare_output_tools=...)`: Closes "Make output_type be conditionally available" #2042
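A rough sketch of the per-run override described in the "To add" item above; the `pydantic_ai.toolsets` import path and the exact `run()` signature are assumptions about where this PR lands:

```python
from pydantic_ai import Agent
from pydantic_ai.toolsets import FunctionToolset

default_toolset = FunctionToolset()
override_toolset = FunctionToolset()

agent = Agent('openai:gpt-4o', toolsets=[default_toolset])


async def main():
    # Uses the agent-level toolsets.
    await agent.run('...')

    # Replaces default_toolset for this run only; output tools and
    # @agent.tool / Agent(tools=...) tools are unaffected.
    await agent.run('...', toolsets=[override_toolset])
```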