Query node and caching. #41670
Replies: 2 comments
As per my understanding:
The query node's chunk cache is only used to download the original vector data from remote storage to the local disk, to speed up retrieval.
Q: What if my local disk doesn't have enough storage? E.g., if my remote data is 100GB but my local disk is only 10GB, will queries still be served by fetching 10GB from the local disk and reading the rest from remote storage, or will they just fail?
When vectors are inserted, the inserted data accumulates in the query node's memory.
Q: So if I am inserting 100GB of data on a query node with 32GB of RAM, will I have to keep releasing memory to avoid errors about memory being full (error reports like "memory usage in high level, all dml requests would be rejected")?
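To make the mismatch concrete, here is a back-of-envelope sketch using the numbers from the question (100GB of data, 32GB of RAM). The high-watermark fraction of 0.9 is an assumed placeholder, not Milvus's actual default, and real memory footprint also depends on the index type (e.g., HNSW adds graph links on top of the raw vectors):

```python
# Back-of-envelope: how much of a 100 GB collection fits on a 32 GB
# query node, assuming the raw data must reside in memory.

GB = 1024 ** 3

total_data_bytes = 100 * GB   # data being inserted (from the question)
node_ram_bytes = 32 * GB      # query node RAM (from the question)

# Milvus rejects new DML above a high-watermark fraction of RAM;
# 0.9 here is an assumed illustrative value, not the real default.
high_watermark = 0.9
usable_bytes = node_ram_bytes * high_watermark

fits = total_data_bytes <= usable_bytes
overflow_gb = max(0.0, (total_data_bytes - usable_bytes) / GB)

print(f"fits in RAM: {fits}, overflow: {overflow_gb:.1f} GB")
```

Under these assumptions roughly 71GB would never fit, so releasing memory only delays the problem; the working set itself has to shrink (or the node has to grow).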
Milvus v2.x doesn't support swapping data between memory and disk; the target collection must be fully loaded into memory before searching.
Q: Does this mean all segments of that collection need to be loaded into memory? So my memory size must match my disk size?
Q: Does Milvus support result caching, so that repeated queries are handled at the query node itself instead of being sent to the Knowhere engine? That is, if multiple queries are similar in nature, could they be answered directly by the query node instead of repeatedly invoking Knowhere to perform a disk-based search?
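For exact-repeat queries, this kind of result caching can also be done on the application side, in front of Milvus. The sketch below is a minimal illustration of the idea, not anything Milvus provides: `backend_search` is a hypothetical stand-in for the expensive search call, and `functools.lru_cache` memoizes results keyed on the (hashable) query vector. Note this only helps when the same vector is queried again byte-for-byte; similar-but-not-identical queries would still miss:

```python
from functools import lru_cache

# Hypothetical stand-in for an expensive disk-based search call.
CENTROIDS = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]
CALLS = 0  # counts how many times the backend is actually hit

def backend_search(query_vec):
    global CALLS
    CALLS += 1
    # Toy nearest-neighbor: return the index of the closest centroid.
    return min(
        range(len(CENTROIDS)),
        key=lambda i: sum((q - c) ** 2 for q, c in zip(query_vec, CENTROIDS[i])),
    )

@lru_cache(maxsize=1024)
def cached_search(query_vec):
    # query_vec must be hashable (a tuple), so exact repeats hit the cache
    # and never reach backend_search.
    return backend_search(query_vec)

cached_search((0.1, 0.2))  # first call: goes to the backend
cached_search((0.1, 0.2))  # exact repeat: served from the cache
print(CALLS)  # the backend was only hit once
```

A semantic cache for "similar" (rather than identical) queries is a harder problem, since it needs its own approximate lookup over past query vectors.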