This happened when I tried to load a Llama 3.2 vision-instruct model, such as the 11B Vision-Instruct. I know multimodal is not yet supported, but does that necessarily mean we can't use the Meta Llama 3.2 11B model, since it is only available as a vision-instruct variant?
Answered by giladgd, Nov 21, 2024
Answer selected by giladgd
The `mllama` architecture is not supported by `llama.cpp`.

If you don't need the vision capabilities, then there's no advantage to using Llama 3.2 with vision over Llama 3.1 (reference).
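Since support depends on the architecture declared inside the GGUF file, one way to check up front whether a model will load is to read its `general.architecture` metadata key. Below is a minimal, illustrative sketch of such a check in Python using only the standard library; it is not part of `llama.cpp` or its bindings, and it only handles string and common scalar metadata values (arrays are skipped entirely here):

```python
import struct
import io

GGUF_MAGIC = b"GGUF"
GGUF_TYPE_STRING = 8

# Byte sizes of the fixed-width GGUF scalar value types we skip over
# (uint8/int8/bool, uint16/int16, uint32/int32/float32, uint64/int64/float64).
_SCALAR_SIZES = {0: 1, 1: 1, 2: 2, 3: 2, 4: 4, 5: 4, 6: 4, 7: 1, 10: 8, 11: 8, 12: 8}


def _read_string(f):
    # GGUF v3 strings: uint64 little-endian length, then UTF-8 bytes.
    (length,) = struct.unpack("<Q", f.read(8))
    return f.read(length).decode("utf-8")


def gguf_architecture(f):
    """Return the `general.architecture` value from a GGUF stream, or None."""
    if f.read(4) != GGUF_MAGIC:
        raise ValueError("not a GGUF file")
    version, n_tensors, n_kv = struct.unpack("<IQQ", f.read(20))
    for _ in range(n_kv):
        key = _read_string(f)
        (vtype,) = struct.unpack("<I", f.read(4))
        if vtype == GGUF_TYPE_STRING:
            value = _read_string(f)
        elif vtype in _SCALAR_SIZES:
            f.seek(_SCALAR_SIZES[vtype], 1)  # skip scalar payload
            value = None
        else:
            # Array values (type 9) need recursive parsing; out of scope
            # for this sketch, so stop scanning here.
            break
        if key == "general.architecture":
            return value
    return None
```

With a real model file you would call it as `gguf_architecture(open("model.gguf", "rb"))` and compare the result against the architectures your `llama.cpp` build supports (e.g. `"llama"` loads, `"mllama"` does not).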