Replies: 1 comment
What embedding model did you pick? Can you share a screenshot of your settings? The example I posted was using Ollama, because Ollama can spin up two servers at the same time: a chat model and an embedding model. Not sure if LM Studio can do that?
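To illustrate the point above: Ollama exposes both chat and embeddings through a single OpenAI-compatible endpoint, so a plugin only needs one base URL for Vault QA. A minimal sketch of the two request shapes, assuming Ollama's default port and example model names (`llama3`, `nomic-embed-text`) that may differ from your setup:

```python
import json

# Ollama's OpenAI-compatible API; default port, adjust if yours differs
BASE_URL = "http://localhost:11434/v1"

# Chat request shape (model name is an example, not necessarily yours)
chat_request = {
    "url": f"{BASE_URL}/chat/completions",
    "body": {
        "model": "llama3",
        "messages": [{"role": "user", "content": "hello"}],
    },
}

# Embedding request shape used for indexing notes
embed_request = {
    "url": f"{BASE_URL}/embeddings",
    "body": {"model": "nomic-embed-text", "input": "some note text"},
}

# Both hit the same server, so no separate embedding server is needed
print(json.dumps([chat_request["url"], embed_request["url"]]))
```

If LM Studio only loads one model at a time, the embedding request would have nowhere to go, which would match the indexing failure described below.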
Why do I need an OpenAI API key while using a local model for vaultQA?
I get an error while trying to index my vault.

LM Studio is up and running. I can chat, but vaultQA does not work.