Replies: 1 comment 1 reply
@dosu is this something you can help with?
I need to do a bulk update of a lot of data points in my collection. This should be really easy to do quickly with qdrant's set payload API using filters (e.g. update anything that matches a set of doc ids or some specific metadata, or upsert a couple of data points).
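Concretely, I'm picturing something like this with qdrant-client (just a sketch; the collection name, payload values, and the `doc_id` key are placeholders for my actual setup, and I'm assuming `set_payload` accepts a filter selector the same way the REST set payload endpoint does):

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

# Overwrite a metadata field on every point matching a set of doc ids,
# without downloading anything or re-upserting vectors.
client.set_payload(
    collection_name="my_collection",  # placeholder
    payload={"status": "reviewed"},   # the field(s) I want to bulk-update
    points=models.Filter(
        must=[
            models.FieldCondition(
                key="doc_id",  # assuming doc ids live under this payload key
                match=models.MatchAny(any=["doc-1", "doc-2"]),
            )
        ]
    ),
)
```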
However, I've noticed that llamaindex includes a `_node_content` metadata field that duplicates all of the metadata in the point. `_node_content` is just another metadata field as far as qdrant is concerned.

My question is: in order for llamaindex's qdrant retrievers to work properly, can I still use this endpoint, or do I need to download all the points, hydrate them with llama-index, update the metadata, and then re-upload them to qdrant? That would be way less efficient, but I don't want to break anything.
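The slower alternative I'm describing would look roughly like this (a rough sketch; the helper import path assumes a recent llama-index-core, collection/field names are placeholders, and pagination of the scroll is omitted):

```python
from qdrant_client import QdrantClient, models
from llama_index.core.vector_stores.utils import (
    metadata_dict_to_node,
    node_to_metadata_dict,
)

client = QdrantClient(url="http://localhost:6333")
collection = "my_collection"  # placeholder

# 1. Download the matching points (payload + vectors); pagination skipped for brevity.
points, _ = client.scroll(
    collection_name=collection,
    scroll_filter=models.Filter(
        must=[
            models.FieldCondition(
                key="doc_id", match=models.MatchAny(any=["doc-1"])
            )
        ]
    ),
    with_payload=True,
    with_vectors=True,
    limit=1000,
)

updated = []
for point in points:
    # 2. Hydrate the llama-index node from the payload (this reads _node_content).
    node = metadata_dict_to_node(point.payload)
    # 3. Update the metadata on the node itself.
    node.metadata["status"] = "reviewed"
    # 4. Serialize back so _node_content and the flat payload fields stay in sync
    #    (I'm assuming these kwargs mirror what the qdrant vector store does on insert).
    payload = node_to_metadata_dict(node, remove_text=False, flat_metadata=False)
    updated.append(
        models.PointStruct(id=point.id, vector=point.vector, payload=payload)
    )

# 5. Re-upload the points.
client.upsert(collection_name=collection, points=updated)
```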
I feel like I should be able to just delete the `_node_content` payload field - I don't get why it all needs to be duplicated in the first place - but I'm just checking whether it's actually used for something. I do see a `metadata_dict_to_node` function that seems to pull content from `_node_content` and ignore the rest of the payload, so I could see that being problematic, but I'm not seeing where this function is actually used.

I did find some similar, stale discussions/issues about `_node_content`, but couldn't find anything definitive in them.