Enhancing ChatAgent Memory Flexibility with Memory Callback Parameter #1879
coolbeevip started this conversation in Ideas
Replies: 1 comment
After carefully reviewing the code, I've decided to abandon the previous idea of using a callback and of implementing this functionality inside the agent. The underlying reason is that I want precise control over the tokens fed into the large model. Is this achievable? For example:
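The kind of external token control described above could look like the following sketch. All names here (`count_tokens`, `build_context`, the message-dict shape) are illustrative assumptions, not the actual camel API; a real implementation would use a proper tokenizer such as tiktoken.

```python
# Hypothetical sketch: the caller, not the agent, decides exactly which
# messages (and therefore which tokens) are sent to the model.

def count_tokens(text: str) -> int:
    # Stand-in for a real tokenizer; approximates one token per word.
    return len(text.split())

def build_context(history: list[dict], new_message: dict, budget: int) -> list[dict]:
    """Keep the newest history messages whose total token count fits the budget."""
    selected = [new_message]
    used = count_tokens(new_message["content"])
    for msg in reversed(history):
        cost = count_tokens(msg["content"])
        if used + cost > budget:
            break
        selected.insert(0, msg)
        used += cost
    return selected

history = [
    {"role": "user", "content": "first question about the weather"},
    {"role": "assistant", "content": "it was sunny"},
    {"role": "user", "content": "second question about travel plans"},
    {"role": "assistant", "content": "take the train"},
]
new = {"role": "user", "content": "what should I pack"}

# The trimmed context is what would actually be passed to the model call.
context = build_context(history, new, budget=12)
```

With this approach the agent's own memory is bypassed entirely; the caller owns the context window and can enforce any token budget before each model call.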
This way, I can manage all the tokens myself, outside of the agent's internal memory.
ChatAgent can write to memory, which helps developers implement business logic, but it needs more flexibility. I propose adding a memory_callback parameter to the step method so that, based on the input and output messages, the callback decides whether a Q&A pair should be saved.
We should also move the writing of input_message so that it runs after the model returns its result, allowing the memory_callback return value to determine whether the pair is saved. This change may affect the current memory-recall logic, so we would need to append the input_message to a copy of self.memory.get_context() when building the model's context.
Here's a more specific example:
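A minimal sketch of the proposed API change is below. The class and method names mirror the discussion (`ChatAgent`, `step`, `memory_callback`), but the bodies are simplified stand-ins, not the real camel implementation; the point is only the ordering: call the model first, then let the callback decide whether the Q&A pair is written to memory.

```python
from typing import Callable, Optional

class SimpleMemory:
    """Toy memory store standing in for the agent's real memory backend."""

    def __init__(self) -> None:
        self.records: list[tuple[str, str]] = []

    def write(self, question: str, answer: str) -> None:
        self.records.append((question, answer))

class ChatAgent:
    def __init__(self) -> None:
        self.memory = SimpleMemory()

    def _call_model(self, message: str) -> str:
        # Placeholder for the real large-model call.
        return f"echo: {message}"

    def step(
        self,
        input_message: str,
        memory_callback: Optional[Callable[[str, str], bool]] = None,
    ) -> str:
        # Run the model first, then decide whether to persist the Q&A pair.
        output = self._call_model(input_message)
        if memory_callback is None or memory_callback(input_message, output):
            self.memory.write(input_message, output)
        return output

agent = ChatAgent()
# Example policy: skip saving trivial exchanges (very short questions).
keep_substantial = lambda q, a: len(q) > 10
agent.step("hi", memory_callback=keep_substantial)
agent.step("please summarize the design doc", memory_callback=keep_substantial)
```

With no callback supplied, every pair is saved, preserving today's behavior; passing a callback lets the caller filter what enters memory per step.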