Model fails to load/run in chat on Jan (v0.6.2 & v0.6.3) on Apple Silicon (M4) #5631

Answered by LazyYuuki
rodrigobhz asked this question in Get Help
Hi @rodrigobhz, I will mark this as answered now. The problem with Gemma-3n is known to us: we are currently transitioning from our old engine to fully llama.cpp. This will be corrected in v0.7.0, so please be a bit patient with us. Do let us know if you still have problems loading any other model.

Replies: 11 comments 2 replies

3 participants