Replies: 2 comments 1 reply
-
I found a workaround by using a local repository.
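The reply does not spell out the steps, but a local-repository workaround can be sketched as follows (the local directory path is a hypothetical choice; for a gated repo the download still needs a token from an account that has accepted the model's terms):

```shell
# Download the model snapshot to a local directory
# (requires the huggingface_hub CLI; authenticate first for gated repos)
huggingface-cli login
huggingface-cli download mistralai/Mistral-7B-Instruct-v0.2 \
    --local-dir ./Mistral-7B-Instruct-v0.2

# Point vLLM at the local path instead of the Hub repo ID
python -m vllm.entrypoints.openai.api_server \
    --model ./Mistral-7B-Instruct-v0.2 \
    --max-model-len 32192 --dtype float16 --gpu-memory-utilization 0.6
```

With a local path, vLLM loads the weights from disk and never queries the Hub, so the `repository not found` error cannot occur at serve time.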
-
Mistral (as well as Llama) requires accepting their terms. I fixed it by doing:
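The poster's exact commands were lost from the page, so the following is an assumption about the usual fix for gated repositories, not their literal steps:

```shell
# 1. Accept the terms on the model page (done in the browser):
#    https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2
# 2. Authenticate so huggingface_hub (and therefore vLLM) can see the repo:
huggingface-cli login
# or export a token directly (placeholder value shown):
export HF_TOKEN=hf_xxxxxxxxxxxxxxxx
```

Gated repos return "repository not found" rather than "access denied" to anonymous or unauthorized clients, which is why the error message is misleading here.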
-
Hello,
Has anyone had the same problem? I get a "repository not found" error when trying to load the v0.2 model.
python -m vllm.entrypoints.openai.api_server --model mistralai/Mistral-7B-Instruct-v0.2 --max-model-len 32192 --dtype float16 --gpu-memory-utilization 0.6
File "/var/www/vllm/lib/python3.11/site-packages/huggingface_hub/hf_file_system.py", line 789, in _raise_file_not_found
    raise FileNotFoundError(msg) from err
FileNotFoundError: mistralai/Mistral-7B-Instruct-v0.2 (repository not found)
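A quick way to tell whether this is a credentials problem rather than a wrong repo ID (this diagnostic is my suggestion, not from the thread) is to query the repo directly with `huggingface_hub`:

```python
# Check whether the current credentials can see the (gated) repository.
from huggingface_hub import HfApi
from huggingface_hub.utils import GatedRepoError, RepositoryNotFoundError

api = HfApi()
try:
    info = api.model_info("mistralai/Mistral-7B-Instruct-v0.2")
    print("repo visible:", info.id)
except GatedRepoError:
    print("repo is gated: accept the terms on the model page, then log in")
except RepositoryNotFoundError:
    print("repo not found or no access: check the repo ID and your HF token")
```

If this script raises or prints the gated/not-found messages, vLLM will fail the same way, and fixing the token or accepting the terms should resolve the original error.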