-
Didn't see that a pull request was already created.
-
I've been able to play with it and it seems to work. Are you seeing problems? Here are GGUFs from TheBloke: https://huggingface.co/TheBloke/Mistral-7B-v0.1-GGUF
-
How can I convert a Mistral model to GGUF? convert.py does not work for me. I don't want to use the one from TheBloke (I am fine-tuning).
-
Loading vocab file '/home/user/mistral-up/tokenizer.model', type 'spm'
-
This did not help. I am actually trying to convert this fine-tuned model that has added tokens: https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca/blob/main/added_tokens.json. If I remove added_tokens.json, it still does not work:
-
Hi, was this solved? How can I use this model with llama.cpp? Thanks.
-
Remove the first tokens (numbered 0, 1, ...) from added_tokens.json; those low IDs already belong to the base vocabulary, which is what trips up the converter.
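For anyone hitting this with a fine-tune, here's a minimal sketch of that clean-up in Python. It assumes added_tokens.json is a flat token-to-ID mapping and that the base SentencePiece vocab size is 32000 (Mistral's tokenizer.model); the example tokens below are made up for illustration, apart from the ChatML tokens that OpenOrca adds.

```python
import json

def strip_base_tokens(added_tokens: dict, base_vocab_size: int) -> dict:
    """Keep only tokens whose IDs fall outside the base vocabulary.

    Entries with IDs below base_vocab_size duplicate tokens that
    tokenizer.model already defines, so convert.py chokes on them.
    """
    return {tok: idx for tok, idx in added_tokens.items() if idx >= base_vocab_size}

if __name__ == "__main__":
    # Hypothetical added_tokens.json contents: the first three entries
    # shadow base-vocab tokens and should be dropped.
    tokens = {"<unk>": 0, "<s>": 1, "</s>": 2,
              "<|im_start|>": 32001, "<|im_end|>": 32000}
    cleaned = strip_base_tokens(tokens, 32000)
    print(json.dumps(cleaned))  # only the genuinely new tokens survive
```

To apply it for real, load your model's added_tokens.json, run it through `strip_base_tokens`, and write the result back before re-running convert.py.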
-
https://mistral.ai/news/announcing-mistral-7b/
https://huggingface.co/mistralai/Mistral-7B-v0.1
https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1
Highlights from the announcement: