
Error while using llama.cpp with model bge-small-en-v1.5 (converted to .gguf with convert_hf_to_gguf.py) #8430

michael5511b asked this question in Q&A. Answered by ggerganov.
This is an embedding model - use llama-embedding or llama-server --embeddings. See #7712 for more info
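The suggestion above can be sketched as shell commands. This is a minimal, hedged example: the model filename is a placeholder for wherever convert_hf_to_gguf.py wrote your output, and the exact server port/endpoint shape may differ across llama.cpp versions.

```shell
# Option 1: one-shot embedding from the CLI
# (bge-small-en-v1.5.gguf is a placeholder path for your converted model)
llama-embedding -m bge-small-en-v1.5.gguf -p "Hello world"

# Option 2: serve embeddings over HTTP
llama-server -m bge-small-en-v1.5.gguf --embeddings --port 8080

# Then query it, e.g. via the server's embedding endpoint:
curl http://localhost:8080/embedding \
  -H "Content-Type: application/json" \
  -d '{"content": "Hello world"}'
```

Either path avoids the error: both treat the model as an embedding model rather than trying to run it through the text-generation pipeline.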

Answer selected by michael5511b