LOCAL model can't read Long Notes #562
fabiensc0ville asked this question in Q&A (Unanswered)
Replies: 2 comments
-
Consumer-grade graphics cards cannot handle the full context window of a local model.
-
@fabiensc0ville have you checked the doc and set the context window explicitly in Ollama? https://github.com/logancyang/obsidian-copilot/blob/master/local_copilot.md#ollama Also, you didn't mention which model you're using — how long is its context window?
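For reference, one way to raise Ollama's context window (as the linked doc discusses) is a custom Modelfile. This is a hedged sketch: the base model name and the `num_ctx` value below are placeholders, not recommendations — pick values that fit your model and GPU memory.

```
# Modelfile (hypothetical example): raise the context window of a base model
FROM mistral
PARAMETER num_ctx 32768
```

Then build and select the new model, e.g. `ollama create mistral-32k -f Modelfile`, and point the plugin at `mistral-32k`.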
-
Describe how to reproduce
Both Chat and Long Note QA work great with local Ollama. However, when I try the local model on a long note, it ignores the note and falls back to the default prompt about Obsidian and AI.
Expected behavior
Answer questions about the active note
Screenshots
Additional context
I believe the difference is the note length, since that's the obvious one (the note has 1,000,000 characters and a 1.2 MB file size!), but I'm not sure.
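For a rough sense of scale (using the common ~4 characters per token heuristic — an assumption, not an exact tokenizer count), a 1,000,000-character note is far beyond a typical default context window:

```python
# Back-of-the-envelope token estimate for a 1,000,000-character note.
# Assumes ~4 characters per token, a rough heuristic for English text.
note_chars = 1_000_000
est_tokens = note_chars // 4          # ~250,000 tokens
default_ctx = 2048                    # a common default num_ctx in Ollama
print(f"~{est_tokens:,} tokens vs a {default_ctx}-token default window")
print(f"That's about {est_tokens // default_ctx}x over the default")
```

If the note exceeds the model's context window, the plugin (or Ollama) may silently truncate or drop it, which would match the "falls back to the default prompt" behavior described above.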