llama3 8B instruct give error when running convert.py #7120
Unanswered
vishnuthegeek
asked this question in
Q&A
Replies: 1 comment 1 reply
The https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct/tree/main
I downloaded Llama 3 8B Instruct from the official Meta website, but when trying to convert the model (the first step for quantization) I get the error below:
INFO:convert:Loading model file /chatops/llama/llama3/Meta-Llama-3-8B-Instruct/consolidated.00.pth
INFO:convert:params = Params(n_vocab=128256, n_embd=4096, n_layer=32, n_ctx=4096, n_ff=14336, n_head=32, n_head_kv=8, n_experts=None, n_experts_used=None, f_norm_eps=1e-05, rope_scaling_type=None, f_rope_freq_base=500000.0, f_rope_scale=None, n_orig_ctx=None, rope_finetuned=None, ftype=<GGMLFileType.MostlyF16: 1>, path_model=PosixPath('/chatops/llama/llama3/Meta-Llama-3-8B-Instruct'))
Traceback (most recent call last):
File "/chatops/latest-cpp/llama.cpp/convert.py", line 1567, in <module>
main()
File "/chatops/latest-cpp/llama.cpp/convert.py", line 1535, in main
vocab, special_vocab = vocab_factory.load_vocab(vocab_types, model_parent_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/chatops/latest-cpp/llama.cpp/convert.py", line 1426, in load_vocab
vocab = self._create_vocab_by_path(vocab_types)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/chatops/latest-cpp/llama.cpp/convert.py", line 1416, in _create_vocab_by_path
raise FileNotFoundError(f"Could not find a tokenizer matching any of {vocab_types}")
FileNotFoundError: Could not find a tokenizer matching any of ['bpe']
By the way, I also tried the recommended fix of adding --vocab-type bpe to the command, but I still get the same error :(
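One workaround worth trying: convert from the Hugging Face format repo (which includes `tokenizer.json`) with llama.cpp's `convert-hf-to-gguf.py` instead of running `convert.py` on the raw `consolidated.00.pth` checkpoint. A sketch, assuming a llama.cpp checkout that ships `convert-hf-to-gguf.py` and Hugging Face access to the gated repo; the local directory and output filename here are placeholders:

```shell
# Download the HF-format weights (these include tokenizer.json,
# which the bpe vocab loader looks for).
huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct \
    --local-dir ./Meta-Llama-3-8B-Instruct-hf

# Convert with the HF-aware script from the llama.cpp repo root.
python convert-hf-to-gguf.py ./Meta-Llama-3-8B-Instruct-hf \
    --outfile Meta-Llama-3-8B-Instruct-f16.gguf --outtype f16
```

The resulting `.gguf` file can then be quantized with the usual `quantize` step.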