Using AVX512 #174

Closed Answered by giladgd
physimo asked this question in Q&A
I'm not sure how to test AVX512 in order to add simple and convenient built-in support for it, but in the version 3.0 beta you can pass custom CMake build flags to the getLlama method to enable it in llama.cpp.
After version 3.0 officially comes out (which should happen in about a month or so), you're welcome to open a PR to add support for it.

I wouldn't advise opening a PR right now, though, as the interface of node-llama-cpp is going to change significantly over the next few beta versions.

Replies: 1 comment

Answer selected by giladgd
2 participants