Running a Local LLM #492
Unanswered · tezvikhyath asked this question in Q&A
Replies: 2 comments · 6 replies
- If you are running Ollama locally on the same computer, it should already be available in Fabric; there is no need for --remoteOllamaServer. Just run fabric --listmodels and the Ollama models should be there. The --remoteOllamaServer flag is only for when Ollama is running on a non-default port or on another computer (see the sketch below).
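A minimal sketch of what that looks like from a terminal, assuming Ollama's default port 11434; the remote address in the second command is only a placeholder, and the exact value format the flag accepts is worth confirming with fabric --help:

```sh
# Local Ollama on the default setup: no extra flag needed.
# The installed Ollama models should show up in this list.
fabric --listmodels

# Only if Ollama runs on a non-default port or on another machine,
# give Fabric the server to use; the flag takes exactly one value.
# (Placeholder address shown; whether it combines with --listmodels
# like this is an assumption, not something stated in the thread.)
fabric --remoteOllamaServer http://192.168.1.50:11434 --listmodels
```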
- I still have the same issue: no local models are showing up, and it seems there is no communication between Ollama and Fabric.
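Not from the thread, but one way to narrow down where the communication breaks, assuming a default Ollama install listening on port 11434:

```sh
# 1. Is Ollama itself running, and does it have models pulled?
ollama list

# 2. Does its HTTP API answer on the default port?
#    This should return a JSON list of the installed models.
curl http://localhost:11434/api/tags

# 3. Only then re-check what Fabric can see.
fabric --listmodels
```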
- Hello, I've been trying to run a local LLM (llama3:70b) with Fabric and I've run into a problem: I don't understand how to even run it for Fabric. I've tried
fabric --remoteOllamaServer
in my terminal and I end up with
fabric: error: argument --remoteOllamaServer: expected one argument
I'm struggling to find a solution to this problem. Thank you. (I'm running on a Mac M3.)
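For what it's worth, that error just means the flag was passed without a value; a hedged sketch of how to read it (the error is printed in argparse style, so the usual --help listing should be available, though that is an assumption here):

```sh
# "expected one argument" = the flag needs exactly one value
# (the remote Ollama server to use) immediately after it.
fabric --remoteOllamaServer          # -> error: expected one argument

# See the full list of flags and what each one expects:
fabric --help
```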