Not able to start local llama Server #8352

Answered by NikhilKalloli
sorohere asked this question in Q&A
You can try this:

```shell
./server -m /path-to-the-model/gemma-2b-Q4_0.gguf -ngl 999 -c 2048
```

This specifies the model path (`-m`), offloads up to 999 layers to the GPU (`-ngl`; a value this high effectively offloads all layers), and sets the context size to 2048 tokens (`-c`). Make sure you replace `/path-to-the-model/` with the actual path where your model is stored.
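Once the server is running, you can verify it responds with a quick request. This is a sketch assuming the server's default bind of `127.0.0.1:8080`; adjust the host and port if you passed `--host` or `--port`:

```shell
# Check the health endpoint (assumes the default port 8080)
curl http://127.0.0.1:8080/health

# Send a small completion request to confirm the model loads and generates
curl http://127.0.0.1:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Hello", "n_predict": 16}'
```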

Answer selected by sorohere