-
So I'm trying to convert my safetensors LoRA into GGUF. Right now I have converted the safetensors LoRA into a ggml-adapter-model.bin (using a different repository than llama.cpp to make the .bin file). Next I need llama-export-lora, but I can't find it in the llama.cpp repository. It did exist, because I managed to get it some time ago, but I don't have it anymore, so I need to download it again.
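For reference, recent llama.cpp builds still ship an export-lora tool, built as `llama-export-lora` from `tools/export-lora`. A minimal invocation looks like the following (file names are placeholders, and note that the current tool expects the adapter itself in GGUF form, not the older ggml-adapter-model.bin format):

```shell
# Hypothetical file names; both the base model and the adapter must already be GGUF.
./llama-export-lora \
    -m base-model-f16.gguf \
    --lora lora-adapter.gguf \
    -o merged-model-f16.gguf
```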
-
To convert an adapter to GGUF, use the convert_lora_to_gguf.py script from the llama.cpp repository. Then you should be able to use it with the --lora option of llama-cli or llama-server.
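A concrete command sequence for this, assuming a Hugging Face-format base model in `base-model/` and the LoRA adapter in `lora-adapter/` (both directory names are placeholders):

```shell
# Convert the safetensors LoRA adapter to GGUF.
# --base points at the original (unquantized) base model directory.
python convert_lora_to_gguf.py lora-adapter/ \
    --base base-model/ \
    --outfile lora-adapter-f16.gguf \
    --outtype f16

# Apply the adapter at load time instead of merging it.
./llama-cli -m base-model-f16.gguf \
    --lora lora-adapter-f16.gguf \
    -p "Hello"
```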
-
Hi @compilade, let me show how I'm trying to do it right now:
Thank you for your reply!
-
Btw, thanks again for helping me out converting the LoRA adapter to GGUF! I figured out that convert_lora_to_gguf appeared after running
.....
You need the base model as well as the LoRA adapter directories.
For example, I'm going to convert https://huggingface.co/grimjim/Llama-3-Instruct-abliteration-LoRA-8B, which uses https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct as a base model.
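One way to fetch both directories (a sketch, assuming the `huggingface-cli` tool is installed and you have accepted the access terms for the gated Meta-Llama-3 repository):

```shell
# Download the base model and the LoRA adapter into local directories.
huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct \
    --local-dir Meta-Llama-3-8B-Instruct
huggingface-cli download grimjim/Llama-3-Instruct-abliteration-LoRA-8B \
    --local-dir Llama-3-Instruct-abliteration-LoRA-8B
```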
To make this easier to follow, here are the file listings of the model directories: