Replies: 7 comments 8 replies
-
To pass a runtime argument to a `StructuredTool`, include it in the input dictionary when invoking the tool:

```python
# Assuming you have a StructuredTool instance named `tool`
# and the tool expects arguments 'mailbox_email', 'message_id', 'email_reply', and 'tool_runtime'
input_args = {
    'mailbox_email': 'example@example.com',
    'message_id': '12345',
    'email_reply': 'This is a reply.',
    'tool_runtime': {'organization_id': 'org123'}
}

# For synchronous execution
result = tool.invoke(input_args)

# For asynchronous execution
result = await tool.ainvoke(input_args)
```

Additionally, you can create the tool dynamically at request time and bind the appropriate information to it. For example, this information may be the user ID as resolved from the request itself:

```python
from typing import List

from langchain_core.tools import BaseTool, tool

user_to_pets = {}

def generate_tools_for_user(user_id: str) -> List[BaseTool]:
    """Generate a set of tools that have a user id associated with them."""

    @tool
    def update_favorite_pets(pets: List[str]) -> None:
        """Add the list of favorite pets."""
        user_to_pets[user_id] = pets

    @tool
    def delete_favorite_pets() -> None:
        """Delete the list of favorite pets."""
        if user_id in user_to_pets:
            del user_to_pets[user_id]

    @tool
    def list_favorite_pets() -> None:
        """List favorite pets if any."""
        return user_to_pets.get(user_id, [])

    return [update_favorite_pets, delete_favorite_pets, list_favorite_pets]

# Example usage
update_pets, delete_pets, list_pets = generate_tools_for_user("eugene")
update_pets.invoke({"pets": ["cat", "dog"]})
print(user_to_pets)
print(list_pets.invoke({}))
```

This approach ensures that runtime-specific information, such as user IDs, is securely and dynamically bound to the tools.

Moreover, when creating a structured chat agent, you can pass runtime arguments through the agent executor's `invoke` call:

```python
from langchain import hub
from langchain_community.chat_models import ChatOpenAI
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant"),
        ("placeholder", "{chat_history}"),
        ("human", "{input}"),
        ("placeholder", "{agent_scratchpad}"),
    ]
)

model = ChatOpenAI()
tools = ...
agent = create_tool_calling_agent(model, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

# Passing runtime arguments
agent_executor.invoke({"input": "hi"})

# Using with chat history
from langchain_core.messages import AIMessage, HumanMessage

agent_executor.invoke(
    {
        "input": "what's my name?",
        "chat_history": [
            HumanMessage(content="hi! my name is bob"),
            AIMessage(content="Hello Bob! How can I assist you today?"),
        ],
    }
)
```

This method allows you to pass runtime arguments directly to the agent, ensuring that the necessary context and inputs are provided dynamically [1][2][3][4].
-
Here's my code:

```python
from langchain_core.tools import InjectedToolArg, tool
from typing_extensions import Annotated
from langchain_groq import ChatGroq
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate

groq_api_key = "****"

@tool(parse_docstring=True)
def add_numbers(a: int, b: int, user_id: Annotated[str, InjectedToolArg]) -> int:
    """Add two numbers together.

    Args:
        a: The first number to add.
        b: The second number to add.
        user_id: The runtime arguments for the tool.
    """
    print(f"Adding {a} and {b} for user {user_id = }")
    return a + b

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant"),
        ("placeholder", "{chat_history}"),
        ("human", "{input}"),
        ("placeholder", "{agent_scratchpad}"),
    ]
)

model = ChatGroq(temperature=0, groq_api_key=groq_api_key)
tools = [add_numbers]
agent = create_tool_calling_agent(model, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

# Define the runtime argument
tool_runtime = {
    'organization_id': 'org123',
    'user_id': 'user123'
}

# Inject the runtime argument when invoking the agent
def invoke_agent(input_text):
    return agent_executor.invoke({"input": input_text, "user_id": tool_runtime['user_id']})

# Example usage
response = invoke_agent("What's the sum of 1 and 2?")
print(response)
```

When I run this, I get an error. How do I use a tool that has an `InjectedToolArg` with an agent?
-
Pro tip: you can use instance methods as tools. Create an instance of your "tool class", pass it whatever special info you need, and then refer to that info via `self`.
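A minimal sketch of that pattern in plain Python (the class and method names here are hypothetical, and the step of registering the bound method as a LangChain tool, e.g. via `StructuredTool.from_function`, is assumed and omitted):

```python
class PetTools:
    """Holds per-user info at construction time; bound methods carry it via self."""

    def __init__(self, user_id: str):
        self.user_id = user_id
        self.user_to_pets: dict = {}

    def update_favorite_pets(self, pets: list) -> None:
        """Store the list of favorite pets for this instance's user."""
        self.user_to_pets[self.user_id] = pets

    def list_favorite_pets(self) -> list:
        """List favorite pets, if any."""
        return self.user_to_pets.get(self.user_id, [])

# Each request can build its own instance with request-specific info;
# a bound method like tools.list_favorite_pets already closes over self,
# so the LLM never needs to see (or supply) user_id.
tools = PetTools("eugene")
tools.update_favorite_pets(["cat", "dog"])
print(tools.list_favorite_pets())
```

The design point is that `self` plays the role of the injected runtime context, so no extra argument has to flow through the agent at all.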
-
I tried the above method a long time ago, but the agent doesn't seem to directly support passing runtime parameters. I went through a lot of the source code but couldn't solve it.
-
LangGraph has it covered though. You can store values in a custom state, inject values into the state at invoke time, and use them in the tool. Here is a minimal example:

```python
from typing_extensions import Annotated

from langchain_core.messages import HumanMessage
from langchain_core.tools import tool
from langgraph.prebuilt import InjectedState, create_react_agent
from langgraph.prebuilt.chat_agent_executor import AgentState

# Create a state type that holds whatever info you like
class CustomState(AgentState):
    user_name: str

# Define a tool that gets this state injected, and does something with it
@tool
def get_magic_number(state: Annotated[dict, InjectedState]) -> int:
    """Returns the magic number.

    Returns:
        int: The magic number
    """
    if state["user_name"] == "Bobby":
        return 10
    return 8

# Create the agent with a model, tools, and the state type you defined above
model = MyCoolLLM()
tools = [get_magic_number]
agent = create_react_agent(
    model,
    tools,
    state_schema=CustomState,
)

# Invoke the agent, and pass whatever you like in the state
config = {"configurable": {"thread_id": "abc123"}}
human_message = "What is the magic number"
response = agent.invoke(
    {
        "user_name": "Bobby",
        "messages": [HumanMessage(content=human_message)],
    },
    config,
)
print(response)
```
-
@vikasr111 Hi, I am also using AgentExecutor and wondering how to inject arguments at runtime. Since this discussion started some months ago, did you find any solution in the meantime?
-
You can do it like this: modify the code in `create_tool_calling_agent` and add code that injects `inject_config`:
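A rough sketch of that injection idea in plain Python (this is not actual LangChain internals; `with_injected_args` and the merge behavior are hypothetical, purely to illustrate binding config into tool calls outside the LLM's view):

```python
# Hypothetical illustration: merge injected runtime values into each tool call,
# so the model only ever supplies the visible arguments.
def with_injected_args(tool_func, injected: dict):
    def wrapper(**llm_args):
        # llm_args come from the model; injected values come from the request/config
        return tool_func(**llm_args, **injected)
    return wrapper

def add_numbers(a: int, b: int, user_id: str) -> int:
    print(f"Adding {a} and {b} for user {user_id}")
    return a + b

bound = with_injected_args(add_numbers, {"user_id": "user123"})
print(bound(a=1, b=2))
```

Patching the agent-construction code amounts to applying a wrapper like this to every tool before the agent sees them.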
-
Description
I use the LangChain AgentExecutor to run an agent, as explained in the code example. I use the StructuredTool class to create tools from functions, as shown in the code example. I am trying to implement an injectable tool argument that can be passed at runtime and doesn't need to be controlled by the LLM, so this argument will be hidden from the LLM during agent execution. I am following the LangChain guide for this: https://python.langchain.com/v0.2/docs/how_to/tool_runtime/
That document seems very confusing, and I am looking for a solution that lets me do this with StructuredTool and AgentExecutor.
System Info
langchain==0.2.11
langchain-community==0.2.10
langchain-core==0.2.25
langchain-openai==0.1.19
langchain-text-splitters==0.2.2
Platform: Mac M3
Python version: 3.9.17