Replies: 1 comment
@DeanChensj could you help with the questions here?
Describe the bug
In vertex_ai_rag_memory_service, the entire session conversation (all text parts) is combined and uploaded as a single file.
adk-python/src/google/adk/memory/vertex_ai_rag_memory_service.py
Line 67 in 62a543b
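For context, here is a rough standalone sketch of the behavior described above, using plain dicts in place of the real ADK session/Event objects (the field names are hypothetical, not the actual ADK API):

```python
import json

# Hypothetical stand-ins for ADK session events: each turn has an
# author ("user" or "model") and its text content.
events = [
    {"author": "user", "text": "What is the capital of France?"},
    {"author": "model", "text": "The capital of France is Paris."},
    {"author": "user", "text": "And its population?"},
    {"author": "model", "text": "Roughly 2.1 million in the city proper."},
]

# As described in the report: every event (user AND model turns alike)
# is serialized as JSON and concatenated into one file, which is then
# uploaded to the RAG corpus as a single document.
combined = "\n".join(json.dumps(e) for e in events)
print(combined)
```

So both the user's prompts and the model's responses end up in the same uploaded file, which is what the three questions below are about.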
Q1:
I would have thought that only the "user" prompts/inputs would be recorded as memory, not the AI's outputs/responses.
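If only user input should become memory, a minimal filtering step might look like the following (again with hypothetical dict-shaped events rather than the real ADK types):

```python
# Hypothetical sketch: keep only user-authored turns as memory,
# dropping model responses before anything is uploaded.
events = [
    {"author": "user", "text": "My budget is $500."},
    {"author": "model", "text": "Here are some laptops under $500..."},
    {"author": "user", "text": "I prefer a 14-inch screen."},
]

user_memory = [e["text"] for e in events if e["author"] == "user"]
print(user_memory)  # only the two user turns survive
```

Whether dropping model turns is actually desirable is debatable (the model's answer can carry facts the user later relies on), but this shows how small the filter would be.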
Q2:
Assuming a single vector embedding is generated for the entire file, the quality of the feature vector would be low. I don't know the underlying behavior/implementation of Vertex AI RAG, so it is possible that some chunking happens on import. But even if it does, the events are being stored as JSON, so the resulting chunking may not be optimal.
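For comparison, chunking the transcript as plain text with overlapping windows (rather than embedding one JSON blob) could look roughly like this. The `size`/`overlap` values are illustrative, and the real Vertex AI RAG import may already apply its own chunking:

```python
def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split plain text into overlapping windows so each embedding
    covers a focused span instead of the whole transcript at once."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

transcript = "user: What is RAG? model: Retrieval-augmented generation pairs a retriever with a generator. " * 5
chunks = chunk_text(transcript, size=120, overlap=30)
```

Each chunk then gets its own embedding, so a query about one topic retrieves the relevant span rather than the whole conversation.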
Q3:
Wouldn't it make more sense to summarize the conversation, or extract its key elements, and store those as memory?
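A sketch of that idea: run a summarization/key-fact extraction pass before storing anything. The `extract_key_facts` helper below is a toy stand-in for what would realistically be an LLM summarization call; its heuristic (short user statements count as facts) is purely illustrative:

```python
def extract_key_facts(events: list[dict]) -> list[str]:
    """Toy stand-in for an LLM summarization step: keep short,
    declarative user statements as candidate memory 'facts'."""
    facts = []
    for e in events:
        if e["author"] == "user" and len(e["text"]) < 80:
            facts.append(e["text"])
    return facts

events = [
    {"author": "user", "text": "My name is Kapil."},
    {"author": "model", "text": "Nice to meet you, Kapil! How can I help today?"},
    {"author": "user", "text": "I work on data pipelines."},
]
memory_entries = extract_key_facts(events)
```

Only the distilled entries would then be embedded and uploaded, rather than the full raw transcript.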
Regards & thanks
Kapil