GPU not detected when deployed on AWS #1331
Answered by Muxelmann
touqeerShah asked this question in Q&A
Hello, I am trying to deploy PrivateGPT on AWS. When I run it there, it does not detect the GPU in the cloud, but when I run it locally, the GPU is detected and everything works fine.
Answered by Muxelmann, Nov 30, 2023
Replies: 1 comment, 4 replies
Did you install both CUDA and llama.cpp?
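A quick way to answer that question is to probe the environment directly. This is my own diagnostic sketch, not part of the thread; it assumes nothing beyond standard NVIDIA tooling and degrades gracefully when a tool is missing:

```shell
# Check whether the NVIDIA driver is visible to the instance.
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi --query-gpu=name,driver_version --format=csv,noheader
else
    echo "NVIDIA driver not visible (nvidia-smi missing)"
fi

# Check whether the CUDA toolkit (needed to compile llama.cpp with CUBLAS)
# is installed and on PATH.
if command -v nvcc >/dev/null 2>&1; then
    nvcc --version | tail -n 1
else
    echo "CUDA toolkit not visible (nvcc missing)"
fi
```

If either check fails on the AWS instance but passes locally, that mismatch alone explains the missing GPU.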
I set up privateGPT in a VM with an Nvidia GPU passed through and got it to work. Before running `make run`, I executed the following command to build llama-cpp with CUDA support:

```shell
CMAKE_ARGS='-DLLAMA_CUBLAS=on' poetry run pip install --force-reinstall --no-cache-dir llama-cpp-python
```
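A CPU-only `llama-cpp-python` wheel can silently shadow the CUDA build, so it is worth confirming that the reinstall took effect. The following check is my own sketch (not part of the original reply); it assumes the low-level binding `llama_supports_gpu_offload` is available, and inside the project you would run it via `poetry run python` instead of `python3`:

```shell
# Ask the installed llama-cpp-python wheel whether GPU offload was compiled in.
# "True" means the CUBLAS build took effect; "False" means a CPU-only wheel
# that needs the CMAKE_ARGS reinstall shown above.
python3 -c '
try:
    from llama_cpp import llama_supports_gpu_offload
    print("GPU offload compiled in:", llama_supports_gpu_offload())
except ImportError:
    print("llama-cpp-python not installed in this environment")
'
```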
From start (fresh Ubuntu installation) to finish, these were the commands I used: