Could not load Llama model #3505
-
Hi, I've been using a GGML model, specifically the ggml-gpt4all-j-v1.3-groovy version, and it was working perfectly. However, when I attempted to use it again today, I encountered an issue. It displayed the error message: 'Could not load Llama model from path: ggml-gpt4all-j-v1.3-groovy.bin.' I tried to research this problem and came across a website that mentioned that llama.cpp no longer supports GGML models. I'm wondering if this information is accurate.
-
There are two versions of GGUF, and the latest llama.cpp code requires GGUFv2. Grab the latest llama.cpp code and a recent GGUFv2 model from TheBloke, such as Mistral-7B-OpenOrca-GGUF (say, mistral-7b-openorca.Q4_K_M.gguf), and you should be good.
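As a quick sanity check before loading, you can inspect the file header to see which format a model file actually is. The sketch below is a minimal, hypothetical helper (not part of llama.cpp or GPT4All's API), assuming the documented GGUF layout: the ASCII magic `GGUF` followed by a little-endian uint32 version, whereas older GGML-era files start with other magics.

```python
import struct

def model_file_format(path: str) -> str:
    """Report whether a model file is GGUF (and which version) or legacy GGML.

    Assumption: GGUF files begin with the 4-byte magic b"GGUF" followed by a
    little-endian uint32 version number; anything else is treated here as a
    legacy GGML-family file, which current llama.cpp cannot load.
    """
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic == b"GGUF":
            # The version field immediately follows the magic.
            (version,) = struct.unpack("<I", f.read(4))
            return f"GGUF v{version}"
    return "legacy GGML-family"
```

If this reports a legacy GGML-family file (as ggml-gpt4all-j-v1.3-groovy.bin would be), re-downloading a GGUF model is the fix; if it reports GGUF v1, you would still need a v2+ file for recent llama.cpp builds.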