LocalAI embeddings #738
etlweather started this conversation in General
Replies: 1 comment 8 replies
You should check the server log for 500 errors. If you are not comfortable with that, I suggest easier alternatives such as Ollama.
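One way to do that check, sketched as shell. The Docker container name and log file path below are placeholders, and the `" 500 "` grep pattern is a generic guess at how a 500 response appears in the log, not LocalAI's confirmed format:

```shell
# Generic helper: scan a saved server log for HTTP 500 responses.
# The " 500 " pattern is a guess; adjust it to LocalAI's actual log format.
scan_500() {
  grep -n " 500 " "$1" || echo "no 500 lines found in $1"
}

# Capturing the log first (commented out; the "local-ai" container name and
# the localai.log path are placeholders for your setup):
#   docker logs local-ai > localai.log 2>&1
#   scan_500 localai.log
```

Run the capture while Obsidian is indexing, so the failing requests are actually in the log you scan.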
I am using LocalAI as my local AI engine. I set up Obsidian to use a "3rd party (openai-format)" model for embeddings.
If I use `curl`, I can get embeddings for text from my LocalAI server with the same model name I provided. But when I try to index my vault, all I get is errors on the LocalAI backend and in Obsidian. The error in the console isn't useful (even in debug mode) from what I can tell; all it tells me is that it got a 500 error back, and I have debug turned on.
What am I doing wrong?
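For reference, a minimal OpenAI-format embeddings request against LocalAI can be sketched like this. The model name is a placeholder for whatever is configured in your LocalAI instance, and `localhost:8080` assumes LocalAI's default port:

```shell
# OpenAI-format embeddings payload; "my-embedding-model" is a placeholder
# for the model name configured in your LocalAI instance.
PAYLOAD='{"model": "my-embedding-model", "input": "test sentence"}'
echo "$PAYLOAD"

# The request itself (commented out; localhost:8080 is LocalAI's default
# port only if you have not changed it):
#   curl -s http://localhost:8080/v1/embeddings \
#     -H "Content-Type: application/json" \
#     -d "$PAYLOAD"
```

Comparing the exact payload the working curl call sends against what Obsidian sends (visible in the server log) is one way to narrow down the 500.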