Replies: 1 comment
If I merge it directly with the large model, it works:
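A minimal sketch of that kind of full-precision merge (assuming a PEFT LoRA adapter; the model id and paths below are placeholders, not taken from the thread):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

BASE_ID = "meta-llama/Llama-3.2-3B-Instruct"  # placeholder base model id
ADAPTER_DIR = "./lora-adapter"                # placeholder adapter path

# Load the base model unquantized (bf16), not through bitsandbytes.
base = AutoModelForCausalLM.from_pretrained(BASE_ID, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(BASE_ID)

# Attach the LoRA adapter and fold its weights into the base layers.
model = PeftModel.from_pretrained(base, ADAPTER_DIR)
merged = model.merge_and_unload()

# The result is a plain bf16 checkpoint.
merged.save_pretrained("./merged-model", safe_serialization=True)
tokenizer.save_pretrained("./merged-model")
```

The saved directory can then be handed to convert_hf_to_gguf.py in the usual way.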
After training an 8-bit LoRA and merging it back into Llama 3.2 (loaded in 8-bit), then trying to convert the result to GGUF with convert_hf_to_gguf.py, I get this error:
Can not map tensor 'model.layers.0.mlp.down_proj.SCB'
What am I messing up (again), and is that the right way to do it? This is the Python script that does the merge:
Thanks in advance!
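For comparison, the failing setup described above presumably looks something like the sketch below (assuming bitsandbytes 8-bit loading and a PEFT adapter; the model id and paths are placeholders, not taken from the original script):

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

BASE_ID = "meta-llama/Llama-3.2-3B-Instruct"  # placeholder base model id
ADAPTER_DIR = "./lora-adapter"                # placeholder adapter path

# Loading with load_in_8bit replaces Linear layers with bitsandbytes Linear8bitLt.
bnb_config = BitsAndBytesConfig(load_in_8bit=True)
base = AutoModelForCausalLM.from_pretrained(BASE_ID, quantization_config=bnb_config)

model = PeftModel.from_pretrained(base, ADAPTER_DIR)
merged = model.merge_and_unload()

# Saving while the model is still 8-bit quantized writes bitsandbytes state into
# the checkpoint, including int8 weights and scale tensors named like
# '...down_proj.SCB', which is the tensor name convert_hf_to_gguf.py cannot map.
merged.save_pretrained("./merged-8bit", safe_serialization=True)
```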