How to fine-tune a .gguf model? #7792
Unanswered
SeanZhang7 asked this question in Q&A

May I ask how I could fine-tune a .gguf model? I saw some discussions about converting it into PyTorch form, but that method does not seem feasible.
Replies: 1 comment 1 reply
Total beginner here, but it seems to me that what you do is apply a LoRA adapter to the .gguf file, and llama.cpp does the work of applying it to the model in real time. Can you then save the adapted model? I've not figured that out yet.
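Something like this is roughly what I mean, using the llama-cpp-python bindings (just a sketch: the file paths are placeholders and I haven't verified every detail):

```python
# Rough sketch: load a quantized .gguf base model and apply a LoRA adapter at load time.
# Assumes the llama-cpp-python bindings; "base-model.gguf" and "my-adapter.gguf" are placeholder paths.
from llama_cpp import Llama

llm = Llama(
    model_path="base-model.gguf",  # the original quantized model
    lora_path="my-adapter.gguf",   # LoRA adapter in a llama.cpp-compatible format
    n_ctx=2048,
)

# The adapter is applied in memory; inference then runs against the adapted weights.
out = llm("Q: What does a LoRA adapter change? A:", max_tokens=64)
print(out["choices"][0]["text"])
```

As far as I can tell this only applies the adapter for the running process; writing out a merged .gguf afterwards is the part I haven't figured out.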