How to implement the ReAct pattern with the OpenAI tools agent? #17451
-
Checked other resources
Commit to Help
Example Codefrom langchain_community.tools import DuckDuckGoSearchRun
from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain.prompts import PromptTemplate
search_tool = DuckDuckGoSearchRun()
tools = [search_tool]
react_openai_tools = """
Answer the following questions as best you can.
You have access to a number of tools, use them to get the answer to the question.
Reply in the following format:
Question: the input question you must answer
Thought: you should always think about what to do. Is the information so far sufficient,
or are more tool calls needed? ALWAYS start with a thought, NEVER just reply with a tool call.
Action: the action to take, should be calling one of the tools
Tool output: the result of the tool call
... (this Thought/Action/Tool output can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
Begin!
Question: {input}
Thought:{agent_scratchpad}
"""
prompt = PromptTemplate.from_template(react_openai_tools)
llm = ChatOpenAI(model="gpt-4-0125-preview", temperature=0)
agent_executor = AgentExecutor(
agent=create_openai_tools_agent(llm, tools, prompt),
tools=tools,
verbose=True,
)
out = agent_executor.invoke(
{
"input": """Write me a prompt that implements the ReAct agent within LCEL using the OpenAI tools agent
as described at https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/agents/openai_tools/base.py
Use the tools at your disposal to browse the web if necessary.
""",
}
) DescriptionI'm trying to port the ReAct pattern to the OpenAI tools agent; specifically, I'm trying to make the agent first respond with a Thought before it calls a tool. Above is the prompt I used, adapted from the react agent example. However, when I run the above, the agent doesn't use the search tool at all, but instead responds with the below (whereas it uses the search tool fine when I use the prompt from the documentation). Please, how can I instruct the agent to reply with a specific sequence of messages, first a Thought and only then a tool call? Here is what the above code returns:
This is a simplified example and would need to be adjusted based on the specific requirements and functionalities of the ReAct agent as described in the documentation.
|
Beta Was this translation helpful? Give feedback.
Replies: 6 comments 7 replies
-
🤖 Hey there @ZmeiGorynych! Great to see you back with another intriguing challenge. How's the coding world treating you lately? Based on the context provided, it seems like you want to ensure that the OpenAI tools agent first responds with a Thought before calling a tool, and correctly feeds in the Observation (tool response). To achieve this, you can modify the Here's how you can do it: from langchain_core.runnables import RunnableThought
def create_openai_tools_agent(
llm: BaseLanguageModel, tools: Sequence[BaseTool], prompt: ChatPromptTemplate
) -> Runnable:
missing_vars = {"agent_scratchpad"}.difference(prompt.input_variables)
if missing_vars:
raise ValueError(f"Prompt missing required variables: {missing_vars}")
llm_with_tools = llm.bind(tools=[convert_to_openai_tool(tool) for tool in tools])
agent = (
RunnablePassthrough.assign(
agent_scratchpad=lambda x: format_to_openai_tool_messages(
x["intermediate_steps"]
)
)
| prompt
| RunnableThought(llm_with_tools) # Add a Thought step before calling the tool
| llm_with_tools # Feed the tool's response as an Observation to the next step
| OpenAIToolsAgentOutputParser()
)
return agent In this code, Please note that this is a suggested modification based on the context provided and may require further adjustments based on the specific requirements of your project. If you encounter any issues or need further assistance, feel free to ask. Regarding the Sources
|
Beta Was this translation helpful? Give feedback.
-
🤖 To implement the ReAct pattern with the OpenAI tools agent in LangChain, you need to modify the Here's how you can do it: from langchain_core.runnables import RunnableThought, RunnableToolCall
def create_openai_tools_agent(
llm: BaseLanguageModel, tools: Sequence[BaseTool], prompt: ChatPromptTemplate
) -> Runnable:
missing_vars = {"agent_scratchpad"}.difference(prompt.input_variables)
if missing_vars:
raise ValueError(f"Prompt missing required variables: {missing_vars}")
llm_with_tools = llm.bind(tools=[convert_to_openai_tool(tool) for tool in tools])
agent = (
RunnablePassthrough.assign(
agent_scratchpad=lambda x: format_to_openai_tool_messages(
x["intermediate_steps"]
)
)
| prompt
| RunnableThought(llm_with_tools) # First, the agent responds with a Thought
| RunnableToolCall(llm_with_tools) # Then, the agent calls a tool
| OpenAIToolsAgentOutputParser() # Finally, the agent feeds in the Observation (tool response)
)
return agent In this modified version of the function, I've added Regarding your question about the structure and functionality of the If you're still having trouble, could you provide more information about the specific issues you're encountering? This could help me provide a more accurate solution. Sources
|
Beta Was this translation helpful? Give feedback.
-
OK, figured it out, need to combine a RunnableParallel with a RunnableLambda to mash the outputs together, see below.
|
Beta Was this translation helpful? Give feedback.
-
❤️ |
Beta Was this translation helpful? Give feedback.
-
@ZmeiGorynych If not, please correct me. I'm new to this field. |
Beta Was this translation helpful? Give feedback.
-
And check out our latest and best shot at this: https://medium.com/motleycrew-ai/reliable-ai-at-your-fingertips-how-we-built-universal-react-agents-that-just-work-1876a9a674a8 |
Beta Was this translation helpful? Give feedback.
OK, figured it out, need to combine a RunnableParallel with a RunnableLambda to mash the outputs together, see below.
The only problem left is that the thought is not printed to console even in verbose mode, but hey there's always Langsmith :)