-
Judging by the model filename, it seems you are using the base Llama 2 model, which isn't trained for Q&A or instruct-style inference. If you are using a chat or instruct model, you should use the prompt format appropriate for that model. Otherwise, the model has no way of understanding what you want from it, and will just produce text roughly similar to the prompt.
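For reference, a minimal sketch of how the Llama 2 *chat* prompt template can be assembled. The `[INST]` / `<<SYS>>` tags are the format the chat-tuned Llama 2 models were trained on; the helper function name and messages below are just illustrative:

```python
def build_llama2_chat_prompt(system_msg: str, user_msg: str) -> str:
    """Wrap a system and user message in the Llama 2 chat template.

    Chat-tuned Llama 2 models expect the [INST] ... [/INST] wrapper,
    with an optional <<SYS>> block for the system message.
    """
    return (
        f"<s>[INST] <<SYS>>\n{system_msg}\n<</SYS>>\n\n"
        f"{user_msg} [/INST]"
    )

# Example usage (illustrative strings):
prompt = build_llama2_chat_prompt(
    "You answer questions strictly from the given context.",
    "Context: Paris is the capital of France.\nQuestion: What is the capital of France?",
)
print(prompt)
```

With the base (non-chat) model, no template will help much; it only continues text, so you would need a chat- or instruct-tuned checkpoint for this format to be meaningful.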
-
I am trying to generate questions relevant to a given context, but I get hallucinated answers.
I am not sure whether the problem is related to the prompt or not.
Here is my program: