How to load simple GGUF models? #3724
Replies: 4 comments 3 replies
-
I'm having the same issue on an Oracle aarch64 VM with the same Mistral GGUF. I had to set LLAMA_NO_K_QUANTS=1 to build successfully, but I get the same runtime error whether I use a Q4_K_M or a Q5_0 variant.
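For context, a sketch of the build workaround mentioned above, assuming an older llama.cpp checkout whose Makefile honored the LLAMA_NO_K_QUANTS flag (the flag name is from the comment above; the exact Makefile behavior depends on the checkout):

```shell
# Rebuild llama.cpp without the k-quant kernels (older Makefile-era flag).
make clean
LLAMA_NO_K_QUANTS=1 make
```

Note that a binary built with k-quants disabled would not be expected to load k-quant files such as Q4_K_M, so a non-K quant like Q5_0 is the fairer test of whether the build itself works.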
-
Hi guys, any news on running GGUF models with llama.cpp? Thanks.
-
So just confirming: it fails? If so, what is the error message?
-
That build error can be ignored... Is the model path correct? Is the model in the same directory as the binary?
-
Hi,
I'm trying to load two GGUF models, but I'm getting an error:
Error:
On their website (HF) they use this command line:
How do I convert those parameters to C++ code with llama.cpp?
Models:
Thanks.
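On the question of driving llama.cpp from C++ rather than the command line, a minimal sketch of loading a GGUF model with the 2023-era llama.h C API. The model filename is a placeholder, and exact function names and signatures vary between llama.cpp versions, so check the llama.h in your checkout:

```cpp
// Minimal GGUF loading sketch against the 2023-era llama.cpp C API.
// "model.Q4_K_M.gguf" is a placeholder path; field and function names
// may differ slightly in other llama.cpp versions.
#include "llama.h"
#include <cstdio>

int main() {
    llama_backend_init(false);  // numa = false

    // Model parameters, roughly mirroring CLI flags like --n-gpu-layers.
    llama_model_params mparams = llama_model_default_params();
    mparams.n_gpu_layers = 0;   // CPU only; raise to offload layers

    llama_model * model =
        llama_load_model_from_file("model.Q4_K_M.gguf", mparams);
    if (model == NULL) {
        fprintf(stderr, "failed to load model\n");
        return 1;
    }

    // Context parameters, mirroring flags like --ctx-size.
    llama_context_params cparams = llama_context_default_params();
    cparams.n_ctx = 2048;

    llama_context * ctx = llama_new_context_with_model(model, cparams);
    if (ctx == NULL) {
        fprintf(stderr, "failed to create context\n");
        llama_free_model(model);
        return 1;
    }

    // ... tokenize the prompt, call llama_decode, sample tokens ...

    llama_free(ctx);
    llama_free_model(model);
    llama_backend_free();
    return 0;
}
```

The general pattern is that each CLI flag of the `main` example corresponds to a field in `llama_model_params` or `llama_context_params`; the `examples/main` and `examples/simple` programs in the llama.cpp repository show the full tokenize/decode/sample loop.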