Not able to start local llama Server #8352
I am unable to start the llama server on my Mac. I tried running:
Answered by NikhilKalloli · Jul 7, 2024
You can try this:
./server -m /path-to-the-model/gemma-2b-Q4_0.gguf -ngl 999 -c 2048
It specifies the model path (-m), offloads up to 999 layers to the GPU (-ngl 999), and sets the context size to 2048 tokens (-c 2048). Ensure you replace /path-to-the-model/ with the actual path where your model is stored.
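
Once the server is up, you can sanity-check it with a simple HTTP request. This is a minimal sketch that assumes the server is listening on its default address of http://127.0.0.1:8080 and that you have not changed --host or --port; adjust the URL if you have.

curl http://127.0.0.1:8080/completion -H "Content-Type: application/json" -d '{"prompt": "Hello, my name is", "n_predict": 32}'

If the model loaded correctly, the server returns a JSON response with the generated text in its content field; if the command hangs or errors out, check the server log for model-loading failures (for example, a wrong path to the .gguf file).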