Support for returning response directly from tool #1189
Comments
Also facing the same issue. It's a waste of input and output tokens to do a final model round; it adds unnecessary latency and is prone to LLM errors when the model regurgitates the output. Also saw #847, which is the same discussion. Would love to hear if folks have found a short-term workaround for this.
In terms of a short-term workaround, I've found success by seeding the (OpenAI-style) message history with a synthetic `final_result` tool call:

```python
messages = [
    {
        "role": "system",
        "content": "You are an AI assistant.",
    },
    {
        "role": "user",
        "content": "I have the following data: ...",
    },
    {
        "role": "assistant",
        "tool_calls": [
            {
                "id": "call_0_generated_uuid",
                "type": "function",
                "function": {
                    "name": "final_result",
                    # The arguments string must be valid JSON.
                    "arguments": '{"response": {"complex_foo": "bar"}}',
                },
            }
        ],
    },
    {
        "role": "tool",
        "tool_call_id": "call_0_generated_uuid",
        "content": '{"complex_foo": "bar"}',
    },
    {
        "role": "user",
        "content": "Your actual user query.",
    },
]
```
```python
async def foo_result_validator(
    ctx: GraphRunContext[FooState, FooDeps],
    result: IntermediateFoo,
):
    data = ...  # Your actual tool call here
    return data
```

I haven't tried it, but you may be able to use `iter` as well: iterate through the nodes, look for the specific `ToolCallPart` within `CallToolsNode`, then search through `agent._function_tools` and terminate the run with that call.
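An untested sketch of that `iter` idea, assuming pydantic-ai's `agent.iter()` API; the `final_result` tool name and the `run_until_tool_call` helper are illustrative, not part of the library:

```python
from pydantic_ai import Agent
from pydantic_ai.messages import ToolCallPart

agent = Agent('openai:gpt-4o')

@agent.tool_plain
def final_result(response: str) -> str:
    """Hypothetical tool whose output we want returned directly."""
    return response

async def run_until_tool_call(prompt: str):
    async with agent.iter(prompt) as run:
        async for node in run:
            # CallToolsNode wraps the model response whose parts may
            # include the ToolCallParts that are about to be executed.
            if Agent.is_call_tools_node(node):
                for part in node.model_response.parts:
                    if isinstance(part, ToolCallPart) and part.tool_name == 'final_result':
                        # Stop iterating and hand back the raw tool-call
                        # arguments instead of doing another model round.
                        return part.args
    # Fall back to the normal result if the tool was never called.
    return run.result
```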
Yeah, seems like
I also spent a lot of time investigating how to achieve this. One workaround is abusing deps: store some state value, such as the tool's output, in deps, and send back only a bland message like 'executed toolx' to the LLM as the tool output, so it saves on tokens (see the sketch below). Another solution would be to use pydantic-graph itself, with some sort of root agent that takes the user query and returns a structured output of agent types.
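A minimal sketch of that deps-based workaround; the `Deps` class and `fetch_report` tool are illustrative names, not from the library:

```python
from dataclasses import dataclass

from pydantic_ai import Agent, RunContext

@dataclass
class Deps:
    # Side channel for smuggling the real tool output past the model.
    tool_output: dict | None = None

agent = Agent('openai:gpt-4o', deps_type=Deps)

@agent.tool
def fetch_report(ctx: RunContext[Deps], query: str) -> str:
    result = {"query": query, "rows": ["..."]}  # the real (possibly large) payload
    ctx.deps.tool_output = result  # stash the payload in deps
    return 'executed fetch_report'  # bland, token-cheap message for the LLM

# After the run, read the real output from deps rather than the model's reply:
# deps = Deps()
# agent.run_sync('Get me the report', deps=deps)
# print(deps.tool_output)
```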
+1! |
This is so important. I would love to have something like this in pydantic-ai.
Description
I would like to request support for responding directly from a tool without going back to the model, kind of like what LangChain has in their tool decorator.
Currently, when an agent calls a tool, the response is returned to the model for further processing. Having an option on the tool to respond directly, without going back to the model loop, would be better in some cases.
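For reference, the LangChain behavior being described looks roughly like this (see the link under References); `return_direct=True` is LangChain's flag for skipping the final model round:

```python
from langchain_core.tools import tool

@tool(return_direct=True)
def lookup_order(order_id: str) -> str:
    """Look up an order by id."""
    # With return_direct=True, an agent executor returns this string
    # to the caller as-is, without a final LLM round.
    return f"Order {order_id}: shipped"
```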
Use Cases
I think this might be possible using `Graph` and `End` right now (I have not tried it out yet). With langgraph I did the same to achieve a direct tool response. But having a flag in the tool decorator arguments would be great and would make for a great dev experience. If there is a different way to do this, please let me know.
References
https://python.langchain.com/api_reference/core/tools/langchain_core.tools.convert.tool.html