Convert OpenAssistant/oasst-sft-6-llama-30b-xor #3295
Replies: 4 comments
-
You ran
-
Did you un-XOR it first? It looks like even the metadata, like the JSON files, is XORed.
-
Idk, I got it from Hugging Face.
-
Okay, then that's a "no". Please read the model card for the model you're having issues with: https://huggingface.co/OpenAssistant/oasst-sft-6-llama-30b-xor
It's essentially encrypted; you'll need to follow the instructions there to decrypt it.
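For context, the "XOR" here is plain byte-wise XOR: the `-xor` repository distributes the weights masked against other bytes, and applying the same mask again recovers the originals. The real procedure is the decoding script referenced in the model card; the idea can be sketched as:

```python
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# XOR is its own inverse: masking twice with the same bytes is a no-op.
plain = b'{"hidden_size": 6656}'   # e.g. a fragment of config.json
mask  = os.urandom(len(plain))     # stand-in for the masking bytes
xored = xor_bytes(plain, mask)     # roughly what the "-xor" repo ships
assert xor_bytes(xored, mask) == plain  # un-XOR-ing recovers the file
```

This is why even the JSON files look like garbage until you run the decode step: every byte, not just the tensors, is masked.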
-
I'm trying to convert the new OpenAssistant/oasst-sft-6-llama-30b-xor to GGUF:
python3 convert.py /Downloads/phi-1_5
Loading model file /Downloads/ooost/pytorch_model.bin
Traceback (most recent call last):
  File "/llama.cpp/convert.py", line 1196, in <module>
    main()
  File "/llama.cpp/convert.py", line 1145, in main
    params = Params.load(model_plus)
             ^^^^^^^^^^^^^^^^^^^^^^^
  File "/llama.cpp/convert.py", line 299, in load
    params = Params.loadHFTransformerJson(model_plus.model, hf_config_path)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/llama.cpp/convert.py", line 209, in loadHFTransformerJson
    n_embd = config["hidden_size"]
             ~~~~~~^^^^^^^^^^^^^^^
KeyError: 'hidden_size'
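The KeyError is consistent with the files still being XORed: convert.py parses config.json and expects HF-style keys such as hidden_size, which a masked (effectively random-byte) config won't contain. A hypothetical pre-flight check, assuming a standard Hugging Face model directory, could catch this before conversion starts:

```python
import json
from pathlib import Path

def check_hf_config(model_dir: str) -> dict:
    """Fail early with a useful message if config.json is unreadable or
    missing expected keys (e.g. because the checkpoint is still XORed)."""
    path = Path(model_dir) / "config.json"
    try:
        config = json.loads(path.read_text())
    except (json.JSONDecodeError, UnicodeDecodeError) as exc:
        raise SystemExit(
            f"{path} is not valid JSON -- the files may still be XORed: {exc}")
    if "hidden_size" not in config:
        raise SystemExit(
            f"{path} has no 'hidden_size' -- did you decode the model first?")
    return config
```

A decoded checkpoint returns the parsed config; a still-masked one exits with a hint instead of a bare KeyError deep inside the converter.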