Why does OpenAIAnswerGenerator give answers out of context? #4279
Hi! I'm using the (retriever + answer_generator) QA pipeline with the Davinci-003 (OpenAI) model as the generator. My problem is that whenever I give my retriever a question that doesn't match any document, it still returns the most similar document, and that retrieved context is fed to the answer_generator together with the question. My question is: is there any way to force the generator to give an answer that is only related to the given context?

Example:

question:
document_text from retriever:

The answer generated by the davinci-003 model is something like:

I'd like the answer to fit with the given context. For example, something acceptable would be:

I don't know if there's a way of adjusting the answer with some parameters like the temperature or the frequency_penalty.
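For reference, here is a simplified sketch of what my pipeline roughly looks like (the document store, retriever model, and generator settings below are just placeholders; my actual components may differ):

```python
from haystack.document_stores import InMemoryDocumentStore
from haystack.nodes import EmbeddingRetriever, OpenAIAnswerGenerator
from haystack.pipelines import GenerativeQAPipeline

document_store = InMemoryDocumentStore(embedding_dim=768)
retriever = EmbeddingRetriever(
    document_store=document_store,
    embedding_model="sentence-transformers/all-mpnet-base-v2",
)

# Davinci-003 as the answer generator
generator = OpenAIAnswerGenerator(
    api_key="YOUR_OPENAI_KEY",
    model="text-davinci-003",
    temperature=0.2,
    frequency_penalty=0.0,
    top_k=1,
)

pipeline = GenerativeQAPipeline(generator=generator, retriever=retriever)
result = pipeline.run(query="...", params={"Retriever": {"top_k": 3}})
```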
I'm just giving you an opinion and maybe it's not the best idea...
What are the scores returned by the retriever for unrelated documents?
If these scores are low (as I hope), you can hand-craft a mechanism that doesn't call `OpenAIAnswerGenerator` in these cases and instead just returns something like `I cannot give you an answer with the given context`. For example, something along the lines of the sketch below.
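This is only a rough sketch; the 0.5 threshold and the `retriever` / `generator` variable names are placeholders you would adapt to your own pipeline:

```python
SCORE_THRESHOLD = 0.5  # placeholder value, tune it on your own data


def answer_or_refuse(question: str) -> str:
    # retrieve candidate documents together with their similarity scores
    docs = retriever.retrieve(query=question, top_k=3)

    # if nothing scores above the threshold, skip the generator entirely
    if not docs or max(doc.score for doc in docs) < SCORE_THRESHOLD:
        return "I cannot give you an answer with the given context"

    # otherwise let OpenAIAnswerGenerator answer from the retrieved context
    prediction = generator.predict(query=question, documents=docs, top_k=1)
    return prediction["answers"][0].answer
```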
Another option would involve the `PromptNode`, which already has a `question-answering-check` template. It would try to figure out whether the answer is contained in the given context before you actually generate one.