-
I am trying to run llama.cpp with the following model: https://huggingface.co/TheBloke/CodeLlama-13B-Instruct-GGUF/tree/main. However, I cannot seem to get GPU support working. I built llama.cpp with:
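(The exact build command was not captured in the thread. As a hypothetical sketch only, not the poster's actual command, a GPU-enabled build might look like the following; note that the option names have changed across llama.cpp versions, so check the docs in your checkout.)

```shell
# Hypothetical sketch, not the poster's actual command.
# Pick a GPU backend option for llama.cpp's CMake build; newer trees use
# GGML_* option names, older ones used LLAMA_METAL / LLAMA_CUBLAS with make.
case "$(uname -s)" in
  Darwin) GPU_FLAG="-DGGML_METAL=ON" ;;  # Apple Metal backend
  *)      GPU_FLAG="-DGGML_CUDA=ON"  ;;  # NVIDIA CUDA backend
esac
echo "cmake -B build $GPU_FLAG && cmake --build build -j"
```

When the Metal backend is actually active, loading a model prints `ggml_metal_init` lines in the log, which is a quick way to confirm the GPU path is being used.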
Replies: 5 comments 17 replies
-
I saw that there is a similar issue, #3423; however, it did not reach a conclusion.
-
@uetuluk, what was your exact build process? Do both
-
An out-of-the-box thought:
-
Also, python-test-MPS.py:
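(The script itself was not included in the thread. Assuming `python-test-MPS.py` was a PyTorch MPS availability check, a common way to confirm the Metal/GPU path works on Apple Silicon, it might amount to something like this sketch:)

```shell
# Hypothetical sketch of a python-test-MPS.py-style check. Assumes PyTorch;
# prints a note instead of failing when torch is not installed.
python3 - <<'EOF'
try:
    import torch
    print("MPS built:", torch.backends.mps.is_built())
    print("MPS available:", torch.backends.mps.is_available())
except ImportError:
    print("PyTorch not installed; cannot check MPS")
EOF
```

If `is_built()` is True but `is_available()` is False, the Python environment was built with MPS support but the runtime cannot reach the GPU, which points at an environment problem rather than a hardware one.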
-
My case was caused by something in the terminal configuration. Here is how I built llama.cpp to avoid the issue: start a clean shell with
env -i zsh -f
Then run
make -j
inside this clean terminal.
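(To see why `env -i zsh -f` helps: `env -i` clears the inherited environment and `-f` skips the zsh startup files, so variables set in your shell config, such as CC, CXX, CPATH, or LDFLAGS, cannot leak into the build. A quick sketch to confirm the environment really is clean:)

```shell
# Compare the number of environment variables in the current shell with a
# clean `env -i` shell. (Uses sh here for portability; the idea is the
# same with `env -i zsh -f`.)
normal=$(env | wc -l)
clean=$(env -i sh -c 'env | wc -l')
echo "current shell: $normal vars; clean shell: $clean vars"
```

In the clean shell almost nothing is left (only what the shell itself sets, such as PWD), so a stray compiler or linker setting from the terminal configuration can no longer break GPU detection during the build.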