If we use RAG, don't we need to summarize the chat history? #7088
Unanswered
crimson206
asked this question in Questions
Replies: 1 comment
-
Hi @crimson206 Sorry, I have some trouble understanding your question. Retrieval Augmented Generation (RAG) pipelines in Haystack retrieve relevant documents from a document store given a query. Based on the retrieved documents, an LLM generates an answer to the query. Out of the box, there is no caching implemented in Haystack. The underlying document store (vector database), however, could implement it.
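To make the retrieve-then-generate flow described above concrete, here is a minimal, self-contained sketch. It is not Haystack's actual API: `DocumentStore`, `embed`, and `rag_answer` are illustrative names, the "embedding" is a toy bag-of-words, and the "LLM" is a stand-in callable. Note that embeddings are computed once at indexing time and stored, which is the part of the pipeline the caching question touches on.

```python
import re
from collections import Counter
from typing import Callable

def embed(text: str) -> Counter:
    # Toy stand-in for a real embedding model: a bag-of-words Counter.
    return Counter(re.findall(r"\w+", text.lower()))

def similarity(a: Counter, b: Counter) -> int:
    # Token overlap; a real vector database would use cosine similarity.
    return sum((a & b).values())

class DocumentStore:
    """Minimal in-memory document store holding documents with embeddings."""
    def __init__(self) -> None:
        self.docs: list[tuple[str, Counter]] = []

    def write(self, texts: list[str]) -> None:
        # Embeddings are computed once, at indexing time, and stored.
        for t in texts:
            self.docs.append((t, embed(t)))

    def retrieve(self, query: str, top_k: int = 1) -> list[str]:
        q = embed(query)
        ranked = sorted(self.docs, key=lambda d: similarity(q, d[1]), reverse=True)
        return [t for t, _ in ranked[:top_k]]

def rag_answer(store: DocumentStore, query: str, llm: Callable[[str], str]) -> str:
    # 1) retrieve relevant documents, 2) build a prompt, 3) let the LLM answer.
    context = "\n".join(store.retrieve(query, top_k=1))
    prompt = f"Context:\n{context}\n\nQuestion: {query}"
    return llm(prompt)

store = DocumentStore()
store.write(["Haystack is a framework for LLM pipelines.",
             "Paris is the capital of France."])
# Dummy "LLM" that just echoes back the retrieved context.
answer = rag_answer(store, "What is Haystack?",
                    llm=lambda p: p.split("Context:\n")[1].split("\n\n")[0])
print(answer)  # -> Haystack is a framework for LLM pipelines.
```

In a real Haystack pipeline, the embedder, retriever, and generator would each be proper pipeline components, but the data flow is the same.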
-
Does RAG save a cache? If we perform RAG on the same dataset, won't it extract the vectors again?
What if we use RAG on a chat log? Although most parts remain the same, the object will change. Would RAG still use the same cache?
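As the reply notes, Haystack does not cache embeddings out of the box, but the concern in the question, re-embedding a chat log when only its tail has changed, can be handled at the application level by keying a cache on each message's content hash. A minimal sketch, where `embed_with_cache` and the hash-keyed `cache` are hypothetical names and `embed` is a dummy stand-in for the expensive model call:

```python
import hashlib

# Hypothetical cache keyed by a hash of each message's content: unchanged
# messages reuse their stored vector; only new or edited ones are embedded.
cache: dict[str, list[float]] = {}
embed_calls = 0

def embed(text: str) -> list[float]:
    # Stand-in for a real embedding model call (the expensive part).
    global embed_calls
    embed_calls += 1
    return [float(len(text))]  # dummy vector

def embed_with_cache(messages: list[str]) -> list[list[float]]:
    vectors = []
    for msg in messages:
        key = hashlib.sha256(msg.encode()).hexdigest()
        if key not in cache:
            cache[key] = embed(msg)
        vectors.append(cache[key])
    return vectors

chat = ["hello", "how do I use RAG?"]
embed_with_cache(chat)
print(embed_calls)  # -> 2 (both messages embedded once)

chat.append("thanks!")   # the chat-log object changed...
embed_with_cache(chat)
print(embed_calls)  # -> 3 (only the new message was embedded)
```

So even though the chat-log object changes between calls, the unchanged messages hit the cache and only the new message costs an embedding call.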