How to summarize the context in LangChain ReAct Agents #31762
Unanswered
Vedapani0402 asked this question in Q&A
Checked other resources
Commit to Help
Example Code
Description
I am currently using LangChain's ReAct agent for dynamic analysis with a set of custom tools, running the Mistral model from Ollama. The issue I am facing is that after a few tool calls the agent starts hallucinating and stops working properly. At roughly 10K tokens of context it fails to carry out the analysis it was asked to do and no longer even follows the ReAct format specified in the prompt.
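For reference, my setup looks roughly like this (simplified; `search_logs` is just a placeholder for my actual tools, and I am on the classic `create_react_agent` / `AgentExecutor` API):

```python
from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent
from langchain_core.tools import tool
from langchain_ollama import ChatOllama


@tool
def search_logs(query: str) -> str:
    """Placeholder for one of my analysis tools."""
    return f"results for {query}"


llm = ChatOllama(model="mistral", temperature=0)
prompt = hub.pull("hwchase17/react")  # standard ReAct prompt from the hub
agent = create_react_agent(llm, [search_logs], prompt)
executor = AgentExecutor(agent=agent, tools=[search_logs], verbose=True)

executor.invoke({"input": "Analyse the latest error logs."})
```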
Since the entire context (the prompt, all previous tool calls, and their outputs) is fed back into the agent on every step, the context grows continuously. I would like to know whether there is a way to summarize the context after every n tool calls and pass that summary to the agent instead of the full history.
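Something like the following is what I have in mind (just a sketch, continuing from the setup above; `summarize_older_steps` and `KEEP_LAST_N` are made-up names, and I am assuming `AgentExecutor`'s `trim_intermediate_steps` can take a callable over the intermediate steps):

```python
from langchain_core.agents import AgentAction

KEEP_LAST_N = 3


def summarize_older_steps(steps):
    """Keep the last N (action, observation) pairs verbatim and collapse
    everything older into a single summary step."""
    if len(steps) <= KEEP_LAST_N:
        return steps
    older, recent = steps[:-KEEP_LAST_N], steps[-KEEP_LAST_N:]
    summary_text = llm.invoke(
        "Summarize these tool calls and their observations in a few sentences:\n"
        + "\n".join(f"{action.tool}({action.tool_input}) -> {obs}" for action, obs in older)
    ).content
    summary_step = (
        AgentAction(tool="summary", tool_input="", log="Summary of earlier steps"),
        summary_text,
    )
    return [summary_step] + recent


executor = AgentExecutor(
    agent=agent,
    tools=[search_logs],
    trim_intermediate_steps=summarize_older_steps,  # applied before each model step
    verbose=True,
)
```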
LangGraph has something called SummarizationNode that does exactly this. Is there anything similar in LangChain?
I tried the LangGraph ReAct agent with open-source models (roughly as sketched below), but it did not work.
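This is roughly what I tried on the LangGraph side, following the docs (the exact API may differ by version; SummarizationNode comes from the langmem package, and `search_logs` is the same placeholder tool as above):

```python
from typing import Any

from langchain_core.messages.utils import count_tokens_approximately
from langchain_ollama import ChatOllama
from langgraph.prebuilt import create_react_agent
from langgraph.prebuilt.chat_agent_executor import AgentState
from langmem.short_term import SummarizationNode


class State(AgentState):
    context: dict[str, Any]  # SummarizationNode keeps its running summary here


model = ChatOllama(model="mistral", temperature=0)

summarization_node = SummarizationNode(
    token_counter=count_tokens_approximately,
    model=model,
    max_tokens=4096,            # budget for the compressed history
    max_summary_tokens=512,
    output_messages_key="llm_input_messages",
)

graph = create_react_agent(
    model,
    tools=[search_logs],
    pre_model_hook=summarization_node,  # compress messages before each LLM call
    state_schema=State,
)

graph.invoke({"messages": [("user", "Analyse the latest error logs.")]})
```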
Can anyone help me with this?
Thank you.
System Info
Python version: 3.11
GPU: RTX A6000 (48 GB VRAM), Linux system