
Removing LLAMA from the AUR's ggml-vulkan build #1305

  1. This is the ggml repository; it contains very little llama-specific code.
  2. The llama.cpp project quickly outgrew its initial focus on just Llama. It now supports Qwen, DeepSeek, Mistral, and many more, so the name "llama.cpp" is simply a remnant of its origins. There is no reason to "remove" code related to a specific model, and most models share parts of the architecture anyway.
  3. For AUR-specific questions you would have to talk to the packager(s). It is a common misconception that the creators of a project also maintain its packages for distributions. (I don't know who the packager is, and that is out of scope for this question anyway; one way to find out is sketched below.)
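A quick way to look up the packager is to query the AUR's RPC interface. This is only a minimal sketch: it assumes the package really is published on the AUR under the name `ggml-vulkan` (taken from this discussion's title; the actual package name may differ).

```python
# Look up the maintainer of an AUR package via the AUR RPC interface (v5).
import json
import urllib.request

# Assumed package name, taken from the discussion title; substitute the
# real AUR package name if it differs.
PKG = "ggml-vulkan"

url = f"https://aur.archlinux.org/rpc/?v=5&type=info&arg[]={PKG}"

with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

# "results" is empty if no package by that name exists on the AUR.
for result in data.get("results", []):
    print(result["Name"], "is maintained by", result.get("Maintainer"))
```

The maintainer listed there (also shown on the package's AUR web page) is the right contact for packaging questions.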


Answer selected by mohamadali-halwani