a few questions #45

Closed · Answered by giladgd
hiqsociety asked this question in Q&A
  1. llama.cpp supports several parameters that let you customize its CUDA behavior.
    You can read more about them here, and then set the ones you need as environment variables both when you compile with `node-llama-cpp download --cuda` and when you run your code.
  2. Docs: gpuLayers
  3. Docs: use a model as a chatbot
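Putting points 2 and 3 together, here is a minimal sketch of loading a model with GPU offloading and chatting with it. It assumes the `getLlama`/`LlamaChatSession` style of the node-llama-cpp API; the model path and the `gpuLayers` value are placeholders you should adjust for your own setup:

```typescript
import path from "path";
import {getLlama, LlamaChatSession} from "node-llama-cpp";

const llama = await getLlama();

// gpuLayers controls how many model layers are offloaded to the GPU;
// the right number depends on your model size and available VRAM.
const model = await llama.loadModel({
    modelPath: path.join(process.cwd(), "models", "model.gguf"), // placeholder path
    gpuLayers: 33 // placeholder value
});

const context = await model.createContext();
const session = new LlamaChatSession({
    contextSequence: context.getSequence()
});

// Use the model as a chatbot: the session keeps the conversation history,
// so follow-up prompts see earlier turns.
const answer = await session.prompt("Hi there, how are you?");
console.log(answer);
```

If you run out of VRAM, lowering `gpuLayers` keeps more of the model on the CPU at the cost of speed.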

Replies: 1 comment

Answer selected by giladgd
Category: Q&A · 2 participants