
Support for returning response directly from tool #1189

@droidnoob

Description


I would like to request support for responding directly from a tool without going back to the model, similar to what LangChain offers in its tool decorator.

Currently, when an agent calls a tool, the response is returned to the model for further processing. Having an option on the tool to respond directly, without going back to the model loop, might be better in some cases.

Use Cases

  • Privacy concerns: Some tools return responses containing PII, and I want to avoid exposing them to the model.
  • Pre-formatted responses: The tool response may already be in the required format, and I want to avoid having the model parrot it back.
  • Large responses: The tool might produce large responses and exhaust the token limit.

I think this might be possible using Graph and End right now (I have not tried it yet). With LangGraph I did the same thing to achieve a direct tool response, but having a flag in the tool decorator arguments would be a great developer experience.

@agent.tool_plain(direct=True)  # or a pydantic type
async def documentation_queries() -> str:
    """A simple tool that returns a joke."""
    joke = await generate_joke()  # some async joke source
    return joke

If there is a different way to do this, please let me know.
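To make the requested behavior concrete, here is a minimal self-contained sketch of an agent loop that short-circuits when a tool is flagged as direct. This is not the Pydantic AI API; every name here (Tool, direct_tool, run_agent, model_step) is hypothetical and only illustrates the control flow being requested.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    fn: Callable[..., str]
    direct: bool = False  # if True, the tool's output is the final answer

def direct_tool(fn: Callable[..., str]) -> Tool:
    """Hypothetical decorator marking a tool whose result bypasses the model."""
    return Tool(fn=fn, direct=True)

def run_agent(tools: dict[str, Tool], tool_name: str,
              model_step: Callable[[str], str]) -> str:
    """Call the requested tool; return its raw output if marked direct,
    otherwise hand the output back to the model for another pass."""
    tool = tools[tool_name]
    result = tool.fn()
    if tool.direct:
        # PII-laden, pre-formatted, or very large output never reaches the model
        return result
    # normal path: the model post-processes the tool output
    return model_step(result)
```

With a flag like this, the three use cases above fall out of a single branch in the loop rather than requiring a custom graph with an End node.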

References

https://python.langchain.com/api_reference/core/tools/langchain_core.tools.convert.tool.html
