Description
Prerequisites
- I am running the latest code. Mention the version if possible as well.
- I carefully followed the README.md.
- I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
- I reviewed the Discussions, and have a new and useful enhancement to share.
Feature Description
Hi and thank you for the incredible work on llama.cpp.
I’ve been trying to build and run the project on an ARMv7 Android device via Termux. While compilation begins successfully with LLAMA_PORTABLE=1, it ultimately fails due to architecture-specific issues such as NEON intrinsic redefinitions and other compatibility errors. Even when the build partially succeeds, the resulting binaries don’t run reliably.
It seems that full support for ARMv7 and Termux environments may not currently be in scope. I just wanted to raise this in case broader compatibility with mobile devices is considered in the future. Thanks again for the amazing project.
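For reference, a rough sketch of the kind of build attempt this involves on Termux (package names are the standard Termux ones; the exact steps and flags on my device may have differed slightly):

```sh
# Install a toolchain inside Termux
pkg install clang make git

# Fetch llama.cpp and build with the portable flag mentioned above
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make LLAMA_PORTABLE=1
# On armv7 compilation starts, then aborts with errors such as
# redefinitions of NEON intrinsics and other 32-bit ARM incompatibilities.
```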
Motivation
I believe adding support—or at least reducing the barrier—for Termux/ARMv7 devices could unlock a significant set of use cases, especially in regions where mobile devices are the primary computing platform. These devices are often affordable, widely available, and increasingly capable of basic on-device inference for small models like TinyLLaMA.
Having a portable, lightweight LLM that runs entirely offline on such devices would have huge implications for accessibility, privacy-preserving AI, education, and localized chatbots in low-resource settings. The ability to run even a tiny GGUF model locally on mobile without internet could be transformative.
I understand this may be outside current priorities, but I wanted to share the value this feature could bring and express appreciation for the project. Thank you again for the amazing tool you've created and continue to improve.
Possible Implementation
No response