Any way to store intermediate steps as part of AgentExecutor's invoke in history? #26615
Hey there, @ni-todo-spot! 👋 I'm here to help you out with any bugs, questions, or contributions you need. Let's tackle this issue together!

To store intermediate steps as part of the message history, you can format them into messages via the `agent_scratchpad`. Here is the relevant part of the code that achieves this:

```python
agent = (
    {
        "input": lambda x: x["input"],
        "chat_history": lambda x: _format_chat_history(x["chat_history"]),
        "agent_scratchpad": lambda x: format_to_openai_function_messages(
            x["intermediate_steps"]
        ),
    }
    | prompt
    | llm_with_tools
    | OpenAIFunctionsAgentOutputParser()
)
```

In this setup, the `agent_scratchpad` key converts `intermediate_steps` into messages that are injected into the prompt on each agent iteration.
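As an aside, the dict at the head of that chain simply maps the executor's input into prompt variables. A plain-Python sketch of that mapping (the formatter functions below are stand-ins for illustration, not LangChain's actual implementations):

```python
def build_prompt_variables(x, format_chat_history, format_scratchpad):
    # Mirrors the dict-of-lambdas above: each key becomes a prompt variable.
    return {
        "input": x["input"],
        "chat_history": format_chat_history(x["chat_history"]),
        "agent_scratchpad": format_scratchpad(x["intermediate_steps"]),
    }

variables = build_prompt_variables(
    {"input": "hi", "chat_history": [], "intermediate_steps": []},
    format_chat_history=lambda history: list(history),                  # stand-in
    format_scratchpad=lambda steps: [("ai", log) for log, _ in steps],  # stand-in
)
assert sorted(variables) == ["agent_scratchpad", "chat_history", "input"]
```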
This ensures that the intermediate steps are placed in the message history after the input human message but before the final output AI message. Additionally, you can wrap the agent executor in `RunnableWithMessageHistory` so the conversation is persisted across invocations:

```python
from langchain_community.chat_message_histories import ChatMessageHistory
from langchain_core.messages import HumanMessage
from langchain_core.runnables.history import RunnableWithMessageHistory

demo_ephemeral_chat_history_for_chain = ChatMessageHistory()

conversational_agent_executor = RunnableWithMessageHistory(
    agent_executor,
    lambda session_id: demo_ephemeral_chat_history_for_chain,
    input_messages_key="messages",
    output_messages_key="output",
)

conversational_agent_executor.invoke(
    {"messages": [HumanMessage("I'm Nemo!")]},
    {"configurable": {"session_id": "unused"}},
)
```

This setup ensures that the intermediate steps are properly managed and stored in the message history, maintaining the conversational context.

To further illustrate how intermediate steps can be formatted and added to the message history, consider the following example from the LangChain tests:
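To make the intended ordering concrete, here is a small plain-Python sketch of how one turn's messages are laid out: the input human message, then one (AI log, human observation) pair per intermediate step, then the final AI answer. The message classes are stand-ins for LangChain's, and the step data is invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class HumanMessage:
    content: str

@dataclass
class AIMessage:
    content: str

def lay_out_turn(user_input, intermediate_steps, final_output):
    """Order one turn: input, intermediate steps, final answer."""
    messages = [HumanMessage(user_input)]
    for log, observation in intermediate_steps:
        messages.append(AIMessage(log))             # the agent's tool-call log
        messages.append(HumanMessage(observation))  # the tool's observation
    messages.append(AIMessage(final_output))
    return messages

turn = lay_out_turn(
    "I'm Nemo!",
    [("calling search('Nemo')", "Nemo is a clownfish.")],
    "Nice to meet you, Nemo!",
)
assert [type(m).__name__ for m in turn] == [
    "HumanMessage", "AIMessage", "HumanMessage", "AIMessage"
]
```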
```python
from langchain_core.agents import AgentAction
from langchain_core.messages import AIMessage, HumanMessage

from langchain.agents.format_scratchpad.log_to_messages import format_log_to_messages


def test_single_intermediate_step_default_response() -> None:
    intermediate_steps = [
        (AgentAction(tool="Tool1", tool_input="input1", log="Log1"), "Observation1")
    ]
    expected_result = [AIMessage(content="Log1"), HumanMessage(content="Observation1")]
    assert format_log_to_messages(intermediate_steps) == expected_result


def test_multiple_intermediate_steps_default_response() -> None:
    intermediate_steps = [
        (AgentAction(tool="Tool1", tool_input="input1", log="Log1"), "Observation1"),
        (AgentAction(tool="Tool2", tool_input="input2", log="Log2"), "Observation2"),
        (AgentAction(tool="Tool3", tool_input="input3", log="Log3"), "Observation3"),
    ]
    expected_result = [
        AIMessage(content="Log1"),
        HumanMessage(content="Observation1"),
        AIMessage(content="Log2"),
        HumanMessage(content="Observation2"),
        AIMessage(content="Log3"),
        HumanMessage(content="Observation3"),
    ]
    assert format_log_to_messages(intermediate_steps) == expected_result
```

This example demonstrates how intermediate steps can be converted into a sequence of alternating `AIMessage`/`HumanMessage` pairs. Additionally, the LangChain tests provide further examples of how to manage message history using `RunnableWithMessageHistory`:
```python
# From the LangChain test suite; Dict, RunnableLambda, RunnableConfig, the
# message classes, and the _get_get_session_history helper are imported or
# defined elsewhere in that test module.
def test_input_messages() -> None:
    runnable = RunnableLambda(
        lambda messages: "you said: "
        + "\n".join(str(m.content) for m in messages if isinstance(m, HumanMessage))
    )
    store: Dict = {}
    get_session_history = _get_get_session_history(store=store)
    with_history = RunnableWithMessageHistory(runnable, get_session_history)
    config: RunnableConfig = {"configurable": {"session_id": "1"}}
    output = with_history.invoke([HumanMessage(content="hello")], config)
    assert output == "you said: hello"
    output = with_history.invoke([HumanMessage(content="good bye")], config)
    assert output == "you said: hello\ngood bye"
    output = [*with_history.stream([HumanMessage(content="hi again")], config)]
    assert output == ["you said: hello\ngood bye\nhi again"]
    assert store == {
        "1": InMemoryChatMessageHistory(
            messages=[
                HumanMessage(content="hello"),
                AIMessage(content="you said: hello"),
                HumanMessage(content="good bye"),
                AIMessage(content="you said: hello\ngood bye"),
                HumanMessage(content="hi again"),
                AIMessage(content="you said: hello\ngood bye\nhi again"),
            ]
        )
    }
```

This test shows how `RunnableWithMessageHistory` records both the human inputs and the AI responses in the session's history store, keyed by `session_id`.
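The dict-backed session factory used in that test can be mirrored in plain Python. The `InMemoryHistory` class below is a stand-in for illustration, not LangChain's `InMemoryChatMessageHistory`:

```python
class InMemoryHistory:
    """Stand-in for a chat-message history backend."""
    def __init__(self):
        self.messages = []

    def add_message(self, message):
        self.messages.append(message)

def make_get_session_history(store):
    # Returns a factory that lazily creates one history per session id,
    # mirroring the `store: Dict` pattern in the test above.
    def get_session_history(session_id):
        if session_id not in store:
            store[session_id] = InMemoryHistory()
        return store[session_id]
    return get_session_history

store = {}
get_history = make_get_session_history(store)
h1 = get_history("1")
h1.add_message("hello")
assert get_history("1") is h1  # same session returns the same history
assert "2" not in store        # new sessions are created lazily
```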
Hey all,
Consider the following snippet emulating my project's code:
The following setup works in terms of plain functionality. The thing is, the intermediate steps are not saved to history, so I currently do that explicitly in my own code.
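For illustration only, explicit persistence along those lines might look like the following sketch. The names here (`invoke_and_persist`, `FakeHistory`, the role tuples) are hypothetical stand-ins, not the project's actual code, and it assumes the executor returns its intermediate steps alongside the output (`AgentExecutor(..., return_intermediate_steps=True)` in real LangChain, where each step is an `(AgentAction, observation)` pair):

```python
class FakeHistory:
    """Stand-in for a chat-message history such as SQLChatMessageHistory."""
    def __init__(self):
        self.messages = []

    def add_message(self, role, content):
        self.messages.append((role, content))

def invoke_and_persist(agent_invoke, user_input, history):
    # Assumes the result dict carries "output" and "intermediate_steps".
    result = agent_invoke({"input": user_input})
    history.add_message("human", user_input)
    for log, observation in result["intermediate_steps"]:
        history.add_message("ai", log)             # agent's tool-call log
        history.add_message("human", observation)  # tool observation
    history.add_message("ai", result["output"])
    return result

history = FakeHistory()
fake_agent = lambda x: {
    "output": "42",
    "intermediate_steps": [("using calculator", "6 * 7 = 42")],
}
invoke_and_persist(fake_agent, "What is 6 times 7?", history)
assert len(history.messages) == 4  # input, step log, observation, final answer
```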
Any ideas for how to save the intermediate steps as part of the `invoke` call? Ideally they'd be stored after the input human message but before the final output AI message.
Using the latest LangChain version.
Using `SQLChatMessageHistory` with the agent.
Many thanks in advance!