Error while using llama.cpp with model bge-small-en-v1.5 (converted to .gguf with convert_hf_to_gguf.py) #8430
-
Hello, I am a new user of llama.cpp. I converted the bge-small-en-v1.5 model into .gguf format with the convert_hf_to_gguf.py script (on llama.cpp version 2950). Neither of the versions I tried works. I also tried all the different quantization levels for the .gguf conversion, but all of them produce the same error messages. I was wondering whether this particular model is just not compatible with llama.cpp, or whether I missed something here.
Replies: 1 comment 2 replies
-
This is an embedding model - use `llama-embedding` or `llama-server --embeddings`. See #7712 for more info.
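A minimal sketch of the two suggested invocations (the model filename below is a hypothetical example of what the conversion step might produce, not a name from this thread):

```shell
# Compute an embedding for a prompt directly from the CLI.
# bge-small-en-v1.5-f16.gguf is an assumed filename from the conversion step.
./llama-embedding -m bge-small-en-v1.5-f16.gguf -p "Hello world"

# Or serve embeddings over HTTP with the server binary,
# then POST prompts to its /embedding endpoint.
./llama-server -m bge-small-en-v1.5-f16.gguf --embeddings
```

The key point is that `llama-cli`-style text generation does not apply to a BERT-style embedding model, which is why the generation path fails; the embedding tools run only the encoder and return a vector.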