is it meaningful to add Conversational Buffer memory to Q&A application #12947
IamExperimenting started this conversation in General
Hi Team,
I'm using the Dolly v2 LLM for a Q&A application, built with LangChain. My input data comes from PDF files: I have around 50 of them, and each one covers a different topic.
My question is: is it meaningful to add conversational buffer memory to this Q&A application? When I think it through logically, the LLM expects prompt + question + relevant chunks, so every time I ask a question the model receives a different question with a different context, and those contexts are unrelated to each other.
When I ask the model a question for the first time, the memory records the question, context, and answer, and that question and context belong to the 1st PDF. If I then suddenly ask a question about the 25th PDF, which is completely different from the 1st, does the conversation buffer memory actually help?
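To make the trade-off concrete, here is a minimal plain-Python sketch of what a conversational buffer memory does (this is an illustration of the mechanism, not the actual LangChain `ConversationBufferMemory` API; the class and method names below are hypothetical): every prior question/answer pair is replayed into the next prompt, whether or not it relates to the newly retrieved chunks.

```python
# Illustrative sketch only: shows how a buffer memory prepends the full
# chat history to each prompt. Class/method names are hypothetical, not
# LangChain's API.

class ConversationBuffer:
    def __init__(self):
        self.turns = []  # list of (question, answer) pairs

    def add_turn(self, question, answer):
        self.turns.append((question, answer))

    def build_prompt(self, question, context):
        # Every prior turn is replayed verbatim, even when it has
        # nothing to do with the current question's retrieved chunks.
        history = "\n".join(f"Human: {q}\nAI: {a}" for q, a in self.turns)
        return f"{history}\nContext: {context}\nHuman: {question}\nAI:"


memory = ConversationBuffer()
# First question, answered from chunks of PDF 1.
memory.add_turn("What does PDF 1 say about X?", "PDF 1 says ...")
# Second question targets PDF 25, yet the PDF 1 exchange still rides
# along in the prompt, consuming tokens without adding relevance.
prompt = memory.build_prompt("What does PDF 25 say about Y?",
                             "chunks retrieved from PDF 25")
print("PDF 1" in prompt)
```

Under this sketch, buffer memory mainly helps with follow-up questions about the *same* document ("what else does it say about that?"); for independent questions against unrelated PDFs it only adds unrelated history to the prompt.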