I have a simple document search pipeline that uses an EmbeddingRetriever as the embedding-based retriever. The result looks like this:
The scores for all my tests fall in a very narrow range, and I am trying to figure out what the score means here, since this is an embedding retriever. The documents in the document_store have their score set to None. Does anyone have any pointers?
Hello @amy-why!
Your `EmbeddingRetriever` uses a Sentence Transformers model that converts both the query and the documents into embeddings (i.e. vectors). These vectors can be compared using similarity functions, such as cosine similarity and dot product. (The similarity function is chosen when you initialize the Document Store, via the `similarity` parameter.)
So, the score that the retriever assigns to each document measures how semantically similar (≈ relevant) the document is to the query.
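For concreteness, here is a minimal sketch of such a setup, assuming Haystack 1.x with an in-memory Document Store (the model name, dimensions, and example documents are illustrative, not necessarily what your pipeline uses):

```python
from haystack.document_stores import InMemoryDocumentStore
from haystack.nodes import EmbeddingRetriever
from haystack.schema import Document

# The similarity function used for scoring is chosen here ("cosine" or "dot_product")
document_store = InMemoryDocumentStore(similarity="cosine", embedding_dim=384)

retriever = EmbeddingRetriever(
    document_store=document_store,
    embedding_model="sentence-transformers/all-MiniLM-L6-v2",  # example model
)

document_store.write_documents([
    Document(content="Embeddings turn text into vectors that can be compared."),
    Document(content="Bananas are rich in potassium."),
])
document_store.update_embeddings(retriever)  # compute and store document embeddings

# Each returned Document carries a score from the chosen similarity function.
# Documents read back directly from the Document Store have score=None,
# because the score only exists relative to a query.
for doc in retriever.retrieve(query="what are embeddings?", top_k=2):
    print(round(doc.score, 4), doc.content)
```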
You can read more about this topic in the Sentence Transformers docs.
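Under the hood, the score is just a similarity between the query embedding and each document embedding. A rough sketch of that computation with the sentence-transformers library directly (again, the model and texts are only examples):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")  # example model

query = "what are embeddings?"
docs = [
    "Embeddings turn text into vectors that can be compared.",
    "Bananas are rich in potassium.",
]

# Encode the query and the documents into vectors
query_emb = model.encode(query, convert_to_tensor=True)
doc_embs = model.encode(docs, convert_to_tensor=True)

# Cosine similarity between the query and each document: this is what the score reflects
print(util.cos_sim(query_emb, doc_embs))
```

Note that cosine similarity is bounded, and for typical sentence-embedding models the values often cluster in a fairly narrow band, which may be why your scores all look so close together.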
Feel free to ask for clarification.