Is there any way to use ggerganov's llama.cpp with support for AVX512?
Replies: 1 comment
I'm not sure how to test it in order to add simple and convenient support for it, but you can pass custom CMake build flags to the `getLlama` method in the version 3.0 beta to enable it in `llama.cpp`.

After version 3.0 officially comes out (it should happen in about a month or so), you're welcome to open a PR to add support for it. I wouldn't advise opening a PR for it right now though, as the interface of `node-llama-cpp` is going to significantly change in the next few beta versions.
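For reference, passing custom CMake flags through `getLlama` might look roughly like the sketch below. This is not a definitive recipe: the `cmakeOptions` option name and the `LLAMA_AVX512` CMake flag name are assumptions based on the v3 beta API and `llama.cpp`'s build options at the time, and both may differ in your installed versions.

```javascript
// Sketch only: assumes the v3 beta `getLlama` accepts `build` and
// `cmakeOptions`, and that llama.cpp's AVX512 toggle is `LLAMA_AVX512`.
// Verify both against the versions you have installed.
import {getLlama} from "node-llama-cpp";

const llama = await getLlama({
    build: "forceRebuild",      // rebuild llama.cpp from source with the flags below
    cmakeOptions: {
        LLAMA_AVX512: "ON"      // hypothetical flag name; check llama.cpp's CMakeLists
    }
});

console.log("llama.cpp built with custom CMake flags");
```

Note that forcing a rebuild means the prebuilt binaries that ship with the package are ignored, so the first call will take noticeably longer while `llama.cpp` compiles locally.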