Need advice on agent applications scale out #26400
Replies: 1 comment
I found a similar solved discussion that might be helpful: "What's the best way to save a retriever and an established chain in Redis or any other database?". The discussion suggests using a single retriever and namespacing the documents according to which user owns them. This makes instantiating the retriever per user request very cheap, since it is just a configuration object, and faster than making a round-trip to a database to fetch a retriever. For making your agent stateless between user inputs, the LangChain documentation provides an example of how to create and run a stateless agent:

# Create the agent
from langgraph.prebuilt import create_react_agent
from langchain_anthropic import ChatAnthropic
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.messages import HumanMessage
model = ChatAnthropic(model_name="claude-3-sonnet-20240229")
search = TavilySearchResults(max_results=2)
tools = [search]
agent_executor = create_react_agent(model, tools)
# Run the agent on stateless queries
response = agent_executor.invoke({"messages": [HumanMessage(content="hi!")]})
print(response["messages"])
response = agent_executor.invoke({"messages": [HumanMessage(content="whats the weather in sf?")]})
print(response["messages"])

This code initializes an agent and runs it on a few queries without maintaining any state between interactions [1][2]. For checkpointing and resuming agent state, LangChain provides built-in mechanisms, for example:

# If working with an agent executor
agent.agent.save(file_path="path/agent.yaml")

Additionally, in LangGraph, memory is handled through persistence, also known as checkpointing. You can add a checkpointer when creating the agent so that conversation state persists across invocations and can be resumed later.
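As a minimal sketch of that checkpointing pattern (the function names and storage layout here are hypothetical, not LangChain APIs): conversation state is serialized per thread and reloaded on the next request, so the agent process itself stays stateless.

```python
import json

# In-process stand-in for a shared store; in production this would be a
# database or Redis so that any node can resume any conversation.
store = {}

def save_checkpoint(thread_id, messages):
    # Serialize the conversation state under a per-thread key.
    store[f"checkpoint:{thread_id}"] = json.dumps(messages)

def load_checkpoint(thread_id):
    # Restore prior state, or start fresh for a new thread.
    raw = store.get(f"checkpoint:{thread_id}")
    return json.loads(raw) if raw else []

# One stateless request cycle: load, extend, save.
messages = load_checkpoint("user-42")
messages.append({"role": "user", "content": "hi!"})
save_checkpoint("user-42", messages)

# A later request (possibly handled by another node) resumes the thread.
assert load_checkpoint("user-42")[-1]["content"] == "hi!"
```

Because every request loads state at the start and saves it at the end, a node crash loses at most the request in flight, not the whole conversation.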
For integrating Redis with LangChain to store agent state, the same checkpointing approach applies: keep each session's state under a session-scoped key in Redis so that any node can resume any conversation. Your manager's concerns about memory growth and node crashes are justified, and using Redis for state management can address these issues effectively. This approach avoids reinventing the wheel and leverages existing solutions within LangChain.
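A hedged sketch of the Redis side, using a tiny in-process stand-in for redis-py's `set`/`get` so it runs anywhere; the key layout, helper names, and TTL value are assumptions, not LangChain or redis-py APIs. Setting a TTL on each session key is what keeps memory bounded when users abandon sessions.

```python
import json
import time

class FakeRedis:
    """Stand-in mimicking redis-py's set(name, value, ex=...) and get(name);
    in production, use redis.Redis(host=..., port=...) instead."""
    def __init__(self):
        self._data = {}

    def set(self, name, value, ex=None):
        # `ex` is a time-to-live in seconds, like Redis's SET ... EX.
        expires = time.monotonic() + ex if ex else None
        self._data[name] = (value, expires)

    def get(self, name):
        value, expires = self._data.get(name, (None, None))
        if expires is not None and time.monotonic() > expires:
            del self._data[name]  # expired, as Redis would do
            return None
        return value

r = FakeRedis()

def save_state(session_id, state, ttl_seconds=3600):
    # The TTL bounds memory growth: abandoned sessions expire on their own.
    r.set(f"agent:state:{session_id}", json.dumps(state), ex=ttl_seconds)

def load_state(session_id):
    raw = r.get(f"agent:state:{session_id}")
    return json.loads(raw) if raw else {"messages": []}

save_state("sess-1", {"messages": [{"role": "user", "content": "start"}]})
print(load_state("sess-1")["messages"][0]["content"])  # -> start
```

With this shape, each of the 20 nodes holds no conversation state between requests: any node can pick up any session by its key, and a crashed node loses nothing that was already saved.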
Description
I have created a ReAct-agent-based application which builds a "process".
It has a tool to take user input and other tools to add/remove/update the process object.
It starts by detecting items for the process from the initial input and then uses the appropriate tools to add those items.
It then uses the 'User Input' tool to gather further items about the process and adds them to the process.
This goes on for a while until it has all the items required to create the process, at which point it ends.
We have to put it in production, where thousands of users will interact concurrently with the application across 20 nodes.
My manager is worried about memory growth and node crashes.
So he is asking me to write my own agent code where, instead of keeping the state in memory, I would save it in Redis as a checkpoint.
Is this the best way to go about it? Are there better ways to handle his worries? Are these worries justified?
Writing my own agent code seems like reinventing the wheel when it's already written.
System Info
langchain==0.2.16
langchain-community==0.2.16
langchain-core==0.2.38
langchain-openai==0.1.7
langchain-text-splitters==0.2.4