Request for hardware support: Running Qwen models offline for quantum material research #1465
stas-creator started this conversation in Ideas
Replies: 0 comments
Hello Qwen team,
I’m an independent researcher using llama.cpp to run Qwen-1.8B and Qwen2 models entirely offline for scientific analysis of nanoscale quantum structures in SiO₂. My goal is 100% local inference: no cloud, no data leakage.

My current machine (4 GB RAM, 10+ years old) limits me to small models. To scale to Qwen2-7B, I need a modest desktop (16+ GB RAM, Linux-compatible).
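For context, a fully offline setup like the one described above is typically driven through llama.cpp's `llama-cli` against a quantized GGUF build of the model. The sketch below is illustrative only: the model filename and prompt are assumptions, and exact flags can vary between llama.cpp versions.

```shell
# Sketch of offline inference with llama.cpp (no network access needed
# once the model file is on disk). The GGUF filename is hypothetical;
# any quantized Qwen2 build that fits in RAM works the same way.
#   -m  path to the local GGUF model file
#   -c  context window in tokens
#   -t  CPU threads to use
#   -n  maximum tokens to generate
./llama-cli -m models/qwen2-7b-instruct-q4_k_m.gguf \
    -c 2048 -t 4 -n 256 \
    -p "Summarize the defect states of an oxygen vacancy in SiO2."
```

As a rough sizing note, a 4-bit (Q4_K_M) quantization of a 7B model typically needs on the order of 5–6 GB of RAM at runtime, which is why a 16 GB machine is comfortable while a 4 GB one is not.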
Since Qwen is designed for open and sovereign AI, I believe this use case aligns with your mission. If your team or Alibaba Cloud offers hardware grants or community support for offline AI research, I’d be honored to collaborate.
I’m happy to:
Thank you for your incredible work on open LLMs.
— Stanislav Kravchenko, goodluckoll123@gmail.com