Hmm, interesting. No other errors? Have you tried setting a specific file output type, for example --outtype f16?
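For reference, the suggestion above applied to the command from the original post would look like this (the paths are the ones from that post; adjust them to your setup):

```shell
# Force 16-bit float tensors instead of letting the script choose the type.
./convert_hf_to_gguf.py \
    --outfile /gguf/llama3.1-trained-0001.gguf \
    --outtype f16 \
    --verbose \
    /data/llama3.1-8b-instruct/
```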
-
I have some torchtune checkpoint files from a fine-tuning run of llama3. They were saved using the HuggingFace-compatible checkpointer.
I purposely moved the original model-*.safetensors files out of the model directory to be sure the conversion script was using the fine-tuned checkpoints and not the original model. I tried convert_hf_to_gguf.py, but it only produced a very small (7.5 MB) output file. Any help would be much appreciated. Thank you!
./convert_hf_to_gguf.py --outfile /gguf/llama3.1-trained-0001.gguf --verbose /data/llama3.1-8b-instruct/
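A 7.5 MB GGUF for an 8B model usually means the converter found almost no weight tensors to write. A quick sanity check, sketched here with Python's standard library only (the directory path and helper name are just illustrative), is to list which weight-bearing files actually sit in the directory the script is pointed at:

```python
from pathlib import Path

def list_checkpoint_files(model_dir):
    """Return (filename, size in MB) for weight-bearing files in model_dir.

    Only looks at extensions the HF-to-GGUF converter typically reads
    (.safetensors, .bin, .pt); config/tokenizer JSON files are ignored.
    """
    exts = {".safetensors", ".bin", ".pt"}
    files = sorted(p for p in Path(model_dir).iterdir() if p.suffix in exts)
    return [(p.name, p.stat().st_size / 1e6) for p in files]

# Illustrative usage -- an 8B model should show on the order of 16 GB
# of weight files in total; a near-empty listing would explain the
# tiny 7.5 MB output:
# for name, mb in list_checkpoint_files("/data/llama3.1-8b-instruct"):
#     print(f"{name}: {mb:.1f} MB")
```

If the listing only shows small files, the fine-tuned checkpoints may not be in the directory (or not named in a pattern the converter recognizes).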