-
I saw that several config files need to be set up for each new model, but I have no idea where to put them, or what the procedure is for adapting a new gguf model for LocalAI. Thanks!
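For context, LocalAI reads per-model YAML config files from the same models directory that is mounted into the container. A minimal sketch (the model name, file name, and parameter values here are assumptions, not taken from this thread; check the LocalAI docs for your version):

```yaml
# Hypothetical example: models/xwin-lm.yaml, placed next to the .gguf file
name: xwin-lm-7b                        # name used in the API request's "model" field
backend: llama                          # llama.cpp-based backend
parameters:
  model: xwin-lm-7b-v0.1.Q4_K_M.gguf    # .gguf file in the same models directory
  temperature: 0.7
context_size: 4096
```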
-
Hi, I am just getting started with this project.
In this tutorial, https://localai.io/basics/getting_started/index.html, the model is simply copied into a model directory that is later mounted into Docker. But that doesn't seem to work for unimplemented models.
I was trying to load one of the models from this repo: https://huggingface.co/TheBloke/Xwin-LM-7B-V0.1-GGUF. It is a brand-new Llama 2 fine-tuned model released just yesterday.
I supposed all I needed was to put a single .gguf file into ./model, but LocalAI complains in the curl response that this new model is "unimplemented".
Maybe a stupid question, but could anyone please point me to a more detailed tutorial on how to add brand-new gguf models (which llama.cpp supports, but which LocalAI reports as unimplemented)?
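The copy-and-mount flow described above can be sketched as follows. This is only an illustration under assumptions: the .gguf file name is hypothetical, and the image tag and flags should be checked against the LocalAI docs for your release.

```shell
# Put the downloaded .gguf file into a local models directory
# (file name below is illustrative)
mkdir -p models
cp ~/Downloads/xwin-lm-7b-v0.1.Q4_K_M.gguf models/

# Mount the directory into the container and start the API server
docker run -p 8080:8080 -v "$PWD/models:/models" \
  quay.io/go-skynet/local-ai:latest --models-path /models

# Query the model by its file name (or the "name" from a YAML config)
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "xwin-lm-7b-v0.1.Q4_K_M.gguf",
       "messages": [{"role": "user", "content": "Hello"}]}'
```

If the server still returns an "unimplemented" error for a model that llama.cpp itself supports, the usual causes are an older LocalAI image without the current gguf-capable llama.cpp backend, or a missing/incorrect backend setting in the model's YAML config.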