Replies: 1 comment
-
Found the answer: I was using the wrong image. My local image was old, so I removed it, pulled the new image, and it now works.
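For anyone hitting the same stale-image problem, the fix can be sketched as follows. The image name below is an assumption (the llama.cpp images are published under ghcr.io); substitute whichever server image you actually run:

```shell
# Remove the stale local copy (image name is an assumption; use yours)
docker rmi ghcr.io/ggml-org/llama.cpp:server

# Pull the latest build so that --hf-repo / --hf-file support is present
docker pull ghcr.io/ggml-org/llama.cpp:server
```

Without the explicit `pull`, Docker keeps reusing whatever tag is already cached locally, even if the registry has a newer build.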
-
Hello,
I am running the llama-cpp server using Docker and docker-compose. My GGUF files are on Hugging Face and I want to use them with Docker, but my setup is not working.
Here is what I am trying:
With that setup, the server does not pull my model, and I get this error:
Am I running the right version of the docker image?
The equivalent command works directly from the command line without any problem:

```shell
../llama.cpp/build/bin/llama-server --hf-repo "espoir/congo_news_summarizer_qwen_models" --hf-file Qwen_1.5_8Q.gguf
```
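A minimal docker-compose sketch of that same command, for comparison. The image tag, port, and volume path are assumptions, not taken from the original post; adjust them to match your setup:

```yaml
services:
  llama-server:
    # Assumed image name; use the server image you actually pull
    image: ghcr.io/ggml-org/llama.cpp:server
    # Arguments passed to llama-server, mirroring the CLI invocation above
    command: >
      --hf-repo "espoir/congo_news_summarizer_qwen_models"
      --hf-file Qwen_1.5_8Q.gguf
      --host 0.0.0.0
      --port 8080
    ports:
      - "8080:8080"
    volumes:
      # Assumed cache path, so downloaded GGUFs persist across restarts
      - ./models:/root/.cache/llama.cpp
```

Note that `--host 0.0.0.0` matters inside a container: if the server binds only to localhost, the published port will not be reachable from the host.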