Bad results on talking with my own documents #981
guilhermelia
started this conversation in
General
Replies: 0 comments
Hello guys, I would like to hear about your experiences chatting with your own documents when the history of previous interactions is passed to the OpenAIFunction:
messages = [{"role": "system", "content": system_message}]
for message in chat_history:
    messages.append({"role": message["role"], "content": message["content"]})
messages.append({"role": "user", "content": user_message})
I realized that some chunks were passed in the context, but the LLM was unable to answer correctly. When I bypass the message history, it responds correctly. Has anyone experienced a similar situation? For now I'm working without the message history being passed to the OpenAI call.
I'm using the GPT-4o model.
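For reference, here is a minimal sketch of the two variants I am comparing. The helper name `build_messages` and the `include_history` flag are mine, just to make the toggle explicit; this only builds the `messages` list and does not call the API:

```python
def build_messages(system_message, chat_history, user_message, include_history=True):
    """Build the messages list for a chat-completions call.

    When include_history is False, previous turns are dropped, so the
    model only sees the system prompt (with the retrieved chunks) and
    the current question.
    """
    messages = [{"role": "system", "content": system_message}]
    if include_history:
        for message in chat_history:
            messages.append({"role": message["role"], "content": message["content"]})
    messages.append({"role": "user", "content": user_message})
    return messages
```

With `include_history=False`, the resulting list has only the system and current user messages, which matches the setup where the model answered correctly.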