Backends Supported with llama.cpp #6998
Unanswered
yomultihead asked this question in Q&A
Replies: 0 comments
In addition to BLAS, llama.cpp supports various backends such as Vulkan, Kompute, and SYCL. I wanted to understand which backend is most suitable for low-precision inference on CPUs.
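For context, the backends mentioned are selected at configure time. A hedged sketch of the relevant CMake invocations is below; the exact flag names vary between llama.cpp versions (older trees used `LLAMA_*` prefixes instead of `GGML_*`), so check the build documentation in the repo you have checked out. Note that Vulkan, Kompute, and SYCL primarily target GPUs; on a CPU, low-precision inference typically runs through the default CPU backend using quantized GGUF models (Q4_0, Q8_0, etc.).

```shell
# Default CPU backend, no extra flags. Quantized GGUF models run here
# via the integer kernels built into ggml.
cmake -B build
cmake --build build --config Release

# Optional accelerated backends (flag spellings are version-dependent):
cmake -B build -DGGML_BLAS=ON     # BLAS, mainly speeds up prompt processing
cmake -B build -DGGML_VULKAN=ON   # Vulkan (GPU)
cmake -B build -DGGML_SYCL=ON     # SYCL (primarily Intel GPUs)
```

These are configuration sketches, not a benchmark; which option is fastest for a given CPU depends on the quantization format and hardware.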