How to load llama-3.1? #9067
Unanswered
opendeluxe asked this question in Q&A
-
@opendeluxe You can get it from https://huggingface.co/QuantFactory/Meta-Llama-3.1-8B-Instruct-GGUF/tree/main and run it in conversation mode like so:
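For example, a minimal sketch (the file name and the Q4_K_M quant level are illustrative; use whichever quant from that repo you actually download):

```bash
# Fetch a single quant file from the QuantFactory repo
# (assumes the huggingface_hub CLI is installed; Q4_K_M is one of several available quants)
huggingface-cli download QuantFactory/Meta-Llama-3.1-8B-Instruct-GGUF \
  Meta-Llama-3.1-8B-Instruct.Q4_K_M.gguf --local-dir .

# Start an interactive chat with llama-cli in conversation mode (-cnv)
./llama-cli -m Meta-Llama-3.1-8B-Instruct.Q4_K_M.gguf -cnv
```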
You can also use llama-server or llama.cui: https://github.com/dspasyuk/llama.cui
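With llama-server, a minimal invocation (same illustrative file name as above) serves the model over HTTP instead of running a terminal chat:

```bash
# Serve the model over an OpenAI-compatible HTTP API on localhost:8080
./llama-server -m Meta-Llama-3.1-8B-Instruct.Q4_K_M.gguf --port 8080
```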
-
The README lists Llama 3.1 among the supported models. So they are supported, nice. How can I use these models with llama.cpp?
All the other models in that list link to a GGUF download, but LLaMA does not. Does this mean it is somehow integrated, with no further download required?