Replies: 1 comment 6 replies
- Hm, qwen2.5vl:3b only takes up about 3-4 GB of RAM. I picked it because most people with 16 GB of RAM should be able to run it. Is it taking up 28 GB of RAM on your machine?
- Although a local VLM is recommended for those who care about privacy, the qwen2.5vl:3b model requires approximately 28 GB of RAM. Most people's computers are not yet powerful enough to run a local VLM. The only remaining option is Gemini, but many people don't trust Google either. Therefore, people need to be able to use their own custom endpoints for other models. This would also let us use providers such as OpenAI and OpenRouter.
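A custom-endpoint option is cheaper to support than it might sound: OpenAI, OpenRouter, and local servers such as Ollama all expose the same OpenAI-compatible chat-completions API, so a single configurable base URL plus API key covers all of them. A minimal stdlib-only sketch of what that request construction could look like (the function name and the model slug are illustrative, not from this project):

```python
import json
import urllib.request

def build_chat_request(base_url: str, api_key: str, model: str, prompt: str):
    """Build (but do not send) an OpenAI-compatible chat-completions request.

    base_url is the only provider-specific part, e.g.:
      https://api.openai.com/v1     (OpenAI)
      https://openrouter.ai/api/v1  (OpenRouter)
      http://localhost:11434/v1     (local Ollama server)
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=base_url.rstrip("/") + "/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# The same code path serves a hosted provider or a local model; only the
# config changes. Model slug below is a placeholder.
req = build_chat_request(
    "https://openrouter.ai/api/v1", "sk-...", "some/vision-model", "Describe this screen.",
)
```

Because the base URL is read from config, users with 16 GB of RAM can keep the local model while others point the same client at OpenRouter or OpenAI.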