Hello,
When I try to convert a LoRA adapter based on Llama-3.1-8B to GGUF, I get the following error:
ERROR:lora-to-gguf:Unexpected name 'base_model.model.lm_head.weight': Not a lora_A or lora_B tensor
ERROR:lora-to-gguf:Embeddings is present in the adapter. This can be due to new tokens added during fine tuning
ERROR:lora-to-gguf:Hint: if you are using TRL, make sure not to call setup_chat_format()
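For reference, listing the adapter's tensor names shows exactly which entries the converter rejects (a quick check with safetensors; `adapter_model.safetensors` is the usual PEFT output name and may differ in your setup):

```python
# List every tensor stored in the LoRA adapter and flag the ones that are
# neither lora_A nor lora_B, i.e. the names convert_lora_to_gguf.py rejects.
from safetensors import safe_open

# "adapter_model.safetensors" is the standard PEFT file name; adjust the path.
with safe_open("adapter_model.safetensors", framework="pt") as f:
    for name in f.keys():
        if "lora_A" not in name and "lora_B" not in name:
            print("non-LoRA tensor:", name)  # e.g. base_model.model.lm_head.weight
```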
We did indeed add new tokens during the pre-training phase, which is why lm_head (and the embedding weights) ended up in the adapter.
Any ideas on how to solve this?
Thank you
https://github.com/ggerganov/llama.cpp/blob/c421ac072d46172ab18924e1e8be53680b54ed3b/convert_lora_to_gguf.py#L350
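Would a merge-then-convert approach be a viable workaround here, i.e. merging the adapter into a resized base model with PEFT and then converting the merged checkpoint with convert_hf_to_gguf.py instead of the LoRA converter? A sketch of what we have in mind (model names and paths below are placeholders):

```python
# Sketch of a merge-then-convert workaround (paths/names are placeholders).
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-3.1-8B"   # base model the adapter was trained on
adapter_dir = "path/to/lora-adapter"  # directory containing the LoRA adapter
out_dir = "path/to/merged-model"

# The adapter's tokenizer carries the tokens added during fine-tuning.
tokenizer = AutoTokenizer.from_pretrained(adapter_dir)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)

# Resize the embeddings/lm_head to the extended vocabulary so the adapter's
# saved lm_head and embedding weights have matching shapes.
base.resize_token_embeddings(len(tokenizer))

model = PeftModel.from_pretrained(base, adapter_dir)
merged = model.merge_and_unload()  # fold lora_A/lora_B into the base weights

merged.save_pretrained(out_dir)
tokenizer.save_pretrained(out_dir)
```

The merged directory could then be converted with the regular converter, e.g. `python convert_hf_to_gguf.py path/to/merged-model`, which avoids the lora-to-gguf tensor-name check entirely.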
Replies: 2 comments

- Any ideas?

- I am having the same problem.