How to use memory with create_react_agent #26337
Example Code

```python
from langchain.agents import ZeroShotAgent
from langchain.chains import LLMChain
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate
from langchain_openai import AzureChatOpenAI

# COPILOT_PROMPT and tools are defined elsewhere in the application
llm = AzureChatOpenAI(deployment_name="azure-xxxxx", temperature=0.0)
prompt = PromptTemplate(
    template=COPILOT_PROMPT,
    input_variables=["input", "agent_scratchpad"],
)
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
agent = ZeroShotAgent(
    llm_chain=LLMChain(llm=llm, prompt=prompt, memory=memory),
    allowed_tools=[tool.name for tool in tools],
)
```

The suggested replacement has no obvious place to pass memory:

```python
agent = create_react_agent(llm, tools, prompt)
```

Description

I have an application which builds a "process". I have tested the application and it works, but we want to pass external memory. We can use ZeroShotAgent with memory, but it is deprecated and we are advised to use create_react_agent instead. However, create_react_agent does not have an option to pass memory. Can someone please help me figure out how I can use memory with create_react_agent? Am I using the wrong agent for this use case?

System Info

langchain==0.2.16
Replies: 4 comments 2 replies
---
To use memory with the `create_react_agent` from LangGraph, pass a checkpointer such as `MemorySaver`. Here is an example:

```python
from langgraph.checkpoint.memory import MemorySaver  # an in-memory checkpointer
from langgraph.prebuilt import create_react_agent
from langchain_anthropic import ChatAnthropic
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.messages import HumanMessage

# Create the agent
memory = MemorySaver()
model = ChatAnthropic(model_name="claude-3-sonnet-20240229")
search = TavilySearchResults(max_results=2)
tools = [search]
agent_executor = create_react_agent(model, tools, checkpointer=memory)

# Use the agent
config = {"configurable": {"thread_id": "abc123"}}
for chunk in agent_executor.stream(
    {"messages": [HumanMessage(content="hi im bob! and i live in sf")]}, config
):
    print(chunk)
    print("----")

for chunk in agent_executor.stream(
    {"messages": [HumanMessage(content="whats the weather where I live?")]}, config
):
    print(chunk)
    print("----")
```

The `thread_id` in the config identifies the conversation, so the second question can draw on what was said in the first.

Additionally, you can create a custom agent with shared memory across both the agent and its tools. Here is an example:

```python
from langchain import hub
from langchain.agents import AgentExecutor, Tool, ZeroShotAgent, create_react_agent
from langchain.chains import LLMChain
from langchain.memory import ConversationBufferMemory, ReadOnlySharedMemory
from langchain.prompts import PromptTemplate
from langchain_community.utilities import GoogleSearchAPIWrapper
from langchain_openai import OpenAI
# Define the prompt template
template = """This is a conversation between a human and a bot:
{chat_history}
Write a summary of the conversation for {input}:
"""
prompt = PromptTemplate(input_variables=["input", "chat_history"], template=template)
# Create memory objects
memory = ConversationBufferMemory(memory_key="chat_history")
readonlymemory = ReadOnlySharedMemory(memory=memory)
# Create the summarization chain
summary_chain = LLMChain(
    llm=OpenAI(),
    prompt=prompt,
    verbose=True,
    memory=readonlymemory,  # use the read-only memory to prevent the tool from modifying the memory
)
# Define tools
search = GoogleSearchAPIWrapper()
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events",
    ),
    Tool(
        name="Summary",
        func=summary_chain.run,
        description="useful for when you summarize a conversation. The input to this tool should be a string, representing who will read this summary.",
    ),
]
# Pull the prompt from the hub
prompt = hub.pull("hwchase17/react")
# Create the agent
model = OpenAI()
agent = create_react_agent(model, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, memory=memory)
# Use the agent
agent_executor.invoke({"input": "What is ChatGPT?"})
agent_executor.invoke({"input": "Who developed it?"})
agent_executor.invoke({"input": "Thanks. Summarize the conversation, for my daughter 5 years old."})
# Confirm that the memory was correctly updated
print(agent_executor.memory.buffer)
```

This code demonstrates how to create a custom agent whose memory is shared between the agent and its tools. For more advanced memory management, you can use the `GenerativeAgentMemory` class:

```python
from langchain_experimental.generative_agents.memory import GenerativeAgentMemory
from langchain.retrievers import TimeWeightedVectorStoreRetriever
from langchain_openai import ChatOpenAI

# Initialize the memory retriever and language model.
# TimeWeightedVectorStoreRetriever requires a vector store; `vectorstore` is
# assumed to be one you have already constructed. The original snippet
# instantiated the abstract BaseLanguageModel, which cannot be constructed
# directly, so a concrete chat model is used here instead.
memory_retriever = TimeWeightedVectorStoreRetriever(vectorstore=vectorstore)
llm = ChatOpenAI()

# Create the generative agent memory
generative_memory = GenerativeAgentMemory(
    llm=llm,
    memory_retriever=memory_retriever,
    verbose=True,
    reflection_threshold=0.5,
)

# Add a memory
generative_memory.add_memory("I had a great conversation with Bob about the weather in SF.")
```

This setup allows the agent to manage its memory more effectively, scoring the importance of memories and reflecting on them to generate new insights. For a detailed walkthrough of LangChain's conversation memory abstractions, see the How to add message history (memory) LCEL page [1][2][3][4][5].
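For reference, the LCEL walkthrough mentioned above is built around `RunnableWithMessageHistory`. A minimal sketch, assuming an OpenAI chat model and a hypothetical in-memory session store, might look like this:

```python
from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_openai import ChatOpenAI

# A prompt with a slot where past messages are injected
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    MessagesPlaceholder(variable_name="chat_history"),
    ("human", "{input}"),
])
chain = prompt | ChatOpenAI()

# One history object per session id (hypothetical in-memory store)
store = {}

def get_session_history(session_id: str) -> InMemoryChatMessageHistory:
    return store.setdefault(session_id, InMemoryChatMessageHistory())

chain_with_history = RunnableWithMessageHistory(
    chain,
    get_session_history,
    input_messages_key="input",
    history_messages_key="chat_history",
)

# History is read before and appended after each call for this session
chain_with_history.invoke(
    {"input": "Hi, I'm Bob."},
    config={"configurable": {"session_id": "abc123"}},
)
```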
---
@dosu Okay, so I found the way to pass Redis memory to the agent executor. Here is my code:

Why won't it make any entries in the Redis database? What am I doing wrong?
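The poster's code was not captured in the thread. For context, a typical way to back an `AgentExecutor`'s memory with Redis is via `RedisChatMessageHistory`; this is only a sketch, assuming a local Redis instance and the `agent`/`tools` objects from the earlier example:

```python
from langchain.agents import AgentExecutor
from langchain.memory import ConversationBufferMemory
from langchain_community.chat_message_histories import RedisChatMessageHistory

# Back the conversation buffer with Redis (placeholder URL and session id)
message_history = RedisChatMessageHistory(
    session_id="my-session", url="redis://localhost:6379/0"
)
memory = ConversationBufferMemory(
    memory_key="chat_history",
    chat_memory=message_history,
    return_messages=True,
)

# Entries are written to Redis when the executor saves the turn,
# i.e. after each successful invoke() on an executor built with this memory
agent_executor = AgentExecutor(agent=agent, tools=tools, memory=memory)
```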
---
Okay, I am able to see the entries in Redis. Closing the thread.