Best Ollama model for local LLM? #1345
-
The answer is: it's complicated. It depends on the types of capabilities you want your model to have.

As far as your computer goes, it sounds like a powerhouse. You should be able to comfortably run medium-size models up to ≈30-40B parameters without any issues; larger than that and you might start to see performance degradation. If you're curious, you can always keep an eye on memory usage while a model is running.

Last, the choice among open-source models such as Llama, Mistral, Qwen, or Gemma generally comes down to the types of tasks you're going to ask it to perform, as well as the "writing style" you prefer in the model's output. Each has a bit of a different character, and the best way to choose is to experiment with the options until you're satisfied that the model meets the demands of your project.
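If it helps, here's a minimal sketch of how you might compare a few candidates side by side with the `ollama` Python package. The model tags below are just examples (swap in whatever fits your 64GB of RAM), and it assumes a local Ollama server is already running with those models pulled:

```python
# Rough sketch for comparing a few local models via the ollama Python package.
# Assumes `pip install ollama`, a running local server (`ollama serve`), and that
# the listed models have been pulled, e.g. `ollama pull llama3.1:8b`.
# The model tags are illustrative; pick sizes that fit your hardware.
import ollama

candidates = ["llama3.1:8b", "mistral:7b", "qwen2.5:32b", "gemma2:27b"]
prompt = "Summarize the trade-offs between model size and response latency in two sentences."

for model in candidates:
    response = ollama.chat(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    # Print each model's answer so you can compare tone and quality directly.
    print(f"--- {model} ---")
    print(response["message"]["content"])
```

Running the same prompt across several models like this is a quick way to get a feel for each one's style before settling on a default.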
-
First of all, this is an awesome tool!
What would be the best Ollama model to choose for this project? In my case, I'm running it locally on my M4 Max with 64GB of RAM.
Thanks!