Replies: 1 comment
-
🤖 Hey there, @aiwalter! Good to see you back. Let's dive into this new adventure together, shall we? 🦙

Based on the issues and solutions found in the LlamaIndex repository, there are a couple of ways to control the context relevancy of the responses from your query engine. One way is to use the `SentenceEmbeddingOptimizer`:

```python
from llama_index.optimization.optimizer import SentenceEmbeddingOptimizer

optimizer = SentenceEmbeddingOptimizer(
    percentile_cutoff=0.5,
    threshold_cutoff=0.7,
)

query_engine = index.as_query_engine(optimizer=optimizer)
response = query_engine.query("<query_str>")
```

In this example, the optimizer keeps the top 50% of sentences (the percentile cutoff) or sentences with a similarity score higher than 0.7 (the threshold cutoff). The two cutoffs can be used together or separately, and you can adjust them to suit your needs. More details can be found in the optimizer.py file.

Another way is to modify the … Remember to adjust the …

I hope this helps! If you have further questions or need more clarification, feel free to ask.
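To make the two cutoffs concrete, here is a minimal, self-contained sketch of how percentile and threshold filtering might combine when applied as successive filters. This is an illustrative stand-in, not the actual LlamaIndex implementation; the function name and the successive-filter semantics are assumptions for the example.

```python
# Hypothetical sketch of combining a percentile cutoff and a similarity
# threshold cutoff, applied one after the other. Not LlamaIndex code.

def filter_sentences(scored, percentile_cutoff=None, threshold_cutoff=None):
    """scored: list of (sentence, similarity_score) pairs, in any order."""
    # Rank sentences from most to least similar to the query.
    ranked = sorted(scored, key=lambda pair: pair[1], reverse=True)
    if percentile_cutoff is not None:
        # Keep only the top fraction of sentences by similarity.
        keep = max(1, int(len(ranked) * percentile_cutoff))
        ranked = ranked[:keep]
    if threshold_cutoff is not None:
        # Drop any remaining sentence below the absolute threshold.
        ranked = [(s, sc) for s, sc in ranked if sc >= threshold_cutoff]
    return [s for s, _ in ranked]

sentences = [("a", 0.9), ("b", 0.6), ("c", 0.3), ("d", 0.75)]
print(filter_sentences(sentences, percentile_cutoff=0.5, threshold_cutoff=0.7))
# → ['a', 'd']
```

Raising `threshold_cutoff` is the more direct lever against out-of-context answers, since it discards low-similarity sentences outright regardless of how many were retrieved.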
-
I am using a query engine and I notice it sometimes gives responses that are completely out of context, i.e. the content is not in any of the nodes. Is there a way to control this, e.g. how much it is allowed to reply out of context versus how much it should stay in context? I know I can use prompting, but that does not seem to work well for my case.