Chat history not working with Agent executor #26567
Replies: 2 comments 2 replies
-
The chat history is not being considered when using `RunnableWithMessageHistory` with your custom agent and tools because you need to set the `output_messages_key` property correctly when initializing the wrapper. Additionally, ensure that `input_messages_key` and `history_messages_key` are set appropriately. Here is the relevant code snippet:
```
from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool

model = ChatOpenAI(model="gpt-4o")
memory = InMemoryChatMessageHistory(session_id="test-session")

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant."),
        ("placeholder", "{chat_history}"),
        ("human", "{input}"),
        ("placeholder", "{agent_scratchpad}"),
    ]
)

@tool
def magic_function(input: int) -> int:
    """Applies a magic function to an input."""
    return input + 2

tools = [magic_function]

agent = create_tool_calling_agent(model, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)

agent_with_chat_history = RunnableWithMessageHistory(
    agent_executor,
    lambda session_id: memory,
    input_messages_key="input",
    history_messages_key="chat_history",
    output_messages_key="output",  # Ensure this key matches the output of your agent
)

config = {"configurable": {"session_id": "test-session"}}

print(
    agent_with_chat_history.invoke(
        {"input": "Hi, I'm polly! What's the output of magic_function of 3?"}, config
    )["output"]
)
print("---")
print(agent_with_chat_history.invoke({"input": "Remember my name?"}, config)["output"])
print("---")
print(
    agent_with_chat_history.invoke({"input": "what was that output again?"}, config)[
        "output"
    ]
)
```
Make sure that the `output_messages_key` matches the key used by your agent to output messages, and that the `input_messages_key` and `history_messages_key` are correctly set to handle the conversation history.
Additionally, consider using LangSmith for better inspection and debugging of your chain or agent. LangSmith allows you to log traces and inspect the internal workings of your application, which can be particularly useful for complex multi-step applications. To enable LangSmith, set the appropriate environment variables:
```
import getpass
import os

os.environ["LANGCHAIN_TRACING_V2"] = "true"
if not os.environ.get("LANGCHAIN_API_KEY"):
    os.environ["LANGCHAIN_API_KEY"] = getpass.getpass()
```
This will help you ensure that the chat history is being correctly managed and utilized in your application [1][2][3][4][5].
[1] https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/chatbots_tools.ipynb
[2] https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/agent_executor.ipynb
[3] https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/message_history.ipynb
[4] https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/migrate_agent.ipynb
[5] https://github.com/langchain-ai/langchain/blob/master/docs/docs/tutorials/qa_chat_history.ipynb
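As a quick extra check (not from the linked docs; a minimal sketch assuming the in-memory `memory` object from the snippet above), you can print the stored messages after each invoke to confirm the history is actually accumulating:
```
# Minimal sanity check (sketch): inspect what the in-memory history has stored.
# InMemoryChatMessageHistory exposes its accumulated messages via `.messages`.
for msg in memory.messages:
    print(f"{msg.type}: {msg.content}")
```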
-
Hi,
It appears that the chat history is being populated, but it is not being considered.
I even tried setting the output_messages_key as suggested, but there was no change. See the HumanMessage in chat_history in the output below: the content is populated, and the expectation is that the LLM should use this chat history to answer.
```
Agent_with_message_history = RunnableWithMessageHistory(
    agent_executor,
    get_session_history,
    input_messages_key="User_prompt",
    history_messages_key="chat_history",
    output_messages_key="output",
)
```
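For context, the `get_session_history` here is roughly the standard in-memory pattern from the docs (a simplified sketch below; the actual implementation may differ):
```
from langchain_core.chat_history import InMemoryChatMessageHistory

# Simplified sketch of get_session_history; the real implementation may differ.
# Histories live in a module-level dict keyed by session_id, so the same session
# reuses the same InMemoryChatMessageHistory object across invocations.
store = {}

def get_session_history(session_id: str) -> InMemoryChatMessageHistory:
    if session_id not in store:
        store[session_id] = InMemoryChatMessageHistory()
    return store[session_id]
```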
Agent output:
```
I don't have a specific function available to determine your name. The functions provided are for getting flight information and current weather. If you'd like to use one of these functions, I can help with that!

Finished chain.
{'chat_history': [HumanMessage(content='My name is XYZ. I am in Chicago. Hows the weather there. Also flight info from london to delhi'), AIMessage(content='<|python_tag|>{"name": "get_current_weather", "parameters": {"location": "Chicago", "format": "fahrenheit"}}; {"name": "get_flight_info_func", "parameters": {"origin": "LHR", "Destination": "DEL"}}')], 'User_prompt': 'What is my name', 'output': "I don't have a specific function available to determine your name. The functions provided are for getting flight information and current weather. If you'd like to use one of these functions, I can help with that!", 'intermediate_steps': []}
```
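One standalone check worth trying (a sketch only: the prompt details are simplified, it uses a fake history, it never calls the model, and the variable names `chat_history`, `User_prompt`, and `agent_scratchpad` are assumptions matching the keys above) is to render the prompt directly and see whether the history messages actually land before the latest question:
```
from langchain_core.messages import AIMessage, HumanMessage
from langchain_core.prompts import ChatPromptTemplate

# Hypothetical diagnostic: render an agent-style prompt with a fake history to
# confirm that the placeholder actually injects the history messages.
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant."),
        ("placeholder", "{chat_history}"),
        ("human", "{User_prompt}"),
        ("placeholder", "{agent_scratchpad}"),
    ]
)

rendered = prompt.invoke(
    {
        "chat_history": [
            HumanMessage(content="My name is XYZ."),
            AIMessage(content="Nice to meet you, XYZ."),
        ],
        "User_prompt": "What is my name",
        "agent_scratchpad": [],
    }
).to_messages()

for m in rendered:
    print(f"{m.type}: {m.content}")
```
If the history shows up in the rendered messages, the RunnableWithMessageHistory wrapper and the prompt placeholder are doing their job, and the problem is more likely the model ignoring the earlier turns.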
-
I have created 2 tools and have a custom agent. I added chat history using RunnableWithMessageHistory, but it seems the chat history is not being considered. Please see below: the expectation is that the output of the 2nd invoke should respond with the name, but it doesn't. I am using the Meta Llama 3.1-70B Instruct model. I have also reviewed other discussions related to this, such as #21764, but no luck. Please advise.
Here is the output of the first invoke:
```
Entering new AgentExecutor chain...
<|python_tag|>{"name": "get_current_weather", "parameters": {"location": "Chicago", "format": "fahrenheit"}}; {"name": "get_flight_info_func", "parameters": {"origin": "LHR", "Destination": "DEL"}}
```
Output of the 2nd invoke: