-
Hello, I followed the README but I have exactly the same problem, though with the 7B model. Did you find out how to get params.json? I tried to work out a solution with GPT-4, but so far nothing has worked.
-
I see a bunch of .bin files, but you need .pth files. I don't think there's currently a working script for that conversion; the closest option is this script, if you modify it: https://github.com/tloen/alpaca-lora/blob/main/export_state_dict_checkpoint.py
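For context, the core of such a conversion is collecting the weights scattered across sharded `pytorch_model-*.bin` files into a single state dict before saving it as a `.pth` checkpoint. Below is a minimal, dependency-free sketch of just the merging step; the real script works with `torch.load`/`torch.save` and actual tensors, while here plain dicts stand in so the logic is visible without PyTorch installed.

```python
# Sketch: merging sharded Hugging Face checkpoint shards into one state dict.
# Real shards map parameter names to torch.Tensor objects loaded via
# torch.load(); plain strings stand in for tensors in this illustration.

def merge_shards(shards):
    """Combine shard dicts into a single state dict, rejecting duplicates."""
    merged = {}
    for shard in shards:
        for key, tensor in shard.items():
            if key in merged:
                raise ValueError(f"duplicate key across shards: {key}")
            merged[key] = tensor
    return merged

# Toy shards standing in for pytorch_model-00001-of-00002.bin, etc.
shard_a = {"model.layers.0.self_attn.q_proj.weight": "tensor_a"}
shard_b = {"model.layers.0.self_attn.k_proj.weight": "tensor_b"}

state_dict = merge_shards([shard_a, shard_b])
print(len(state_dict))  # 2
```

Note that a complete converter also has to rename the Hugging Face parameter names to Meta's original naming scheme (e.g. `model.layers.N.…` vs `layers.N.…`), which is the part the alpaca-lora export script handles and that you would need to adapt.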
-
I'm trying to convert the following model and got an error. I didn't see params.json in many model repos.
Could you please suggest what I missed?
https://huggingface.co/chavinlo/gpt4-x-alpaca
[root@qd-graphics koboldai-client]# bin/micromamba run -r runtime -n koboldai python3 llama.cpp/convert-pth-to-ggml.py models/gpt4-x-alpaca/ 1
Traceback (most recent call last):
  File "llama.cpp/convert-pth-to-ggml.py", line 274, in <module>
    main()
  File "llama.cpp/convert-pth-to-ggml.py", line 239, in main
    hparams, tokenizer = load_hparams_and_tokenizer(dir_model)
  File "llama.cpp/convert-pth-to-ggml.py", line 102, in load_hparams_and_tokenizer
    with open(fname_hparams, "r") as f:
FileNotFoundError: [Errno 2] No such file or directory: 'models/gpt4-x-alpaca//params.json'
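The error happens because convert-pth-to-ggml.py expects a Meta-format checkpoint directory, which contains a params.json, whereas Hugging Face repos like gpt4-x-alpaca ship a config.json instead. A hedged sketch of mapping one to the other is below; the HF key names are the standard LLaMA config fields, but `multiple_of` is not stored in config.json, so the value 256 (the one in Meta's 7B release) is an assumption you should verify for your model.

```python
import json

# Sketch: deriving a Meta-style params.json from a Hugging Face config.json.
# Assumption: multiple_of=256 matches Meta's 7B params.json; config.json
# does not record it, so check this value against your model.

def hf_config_to_params(config):
    return {
        "dim": config["hidden_size"],
        "n_heads": config["num_attention_heads"],
        "n_layers": config["num_hidden_layers"],
        "norm_eps": config.get("rms_norm_eps", 1e-6),
        "vocab_size": config["vocab_size"],
        "multiple_of": 256,  # assumed; not present in config.json
    }

# Values typical of a LLaMA-7B-based model; some fine-tunes add extra
# tokens, so read these from the repo's actual config.json in practice.
hf_config = {
    "hidden_size": 4096,
    "num_attention_heads": 32,
    "num_hidden_layers": 32,
    "rms_norm_eps": 1e-6,
    "vocab_size": 32000,
}
params = hf_config_to_params(hf_config)
print(json.dumps(params))
```

Writing the resulting dict to `models/gpt4-x-alpaca/params.json` would get past this particular FileNotFoundError, but the script will still expect `.pth` weight files rather than the repo's `.bin` shards, as noted in the reply above.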