
Didn't detect GPU when deployed on AWS #1331

Answered by Muxelmann
touqeerShah asked this question in Q&A

I set up privateGPT in a VM with an Nvidia GPU passed through and got it to work. Before running make run, I executed the following command for building llama-cpp with CUDA support:

CMAKE_ARGS='-DLLAMA_CUBLAS=on' poetry run pip install --force-reinstall --no-cache-dir llama-cpp-python
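If you prefer to script that reinstall (e.g. in a provisioning step), the same command can be driven from Python. This is a minimal sketch of my own; the `build_llama_cpp_cuda` helper is hypothetical and not part of privateGPT or llama-cpp-python, but `CMAKE_ARGS` really is the environment variable the llama-cpp-python build reads to configure CMake:

```python
import os
import subprocess

def build_llama_cpp_cuda(dry_run=False):
    """Reinstall llama-cpp-python with cuBLAS support enabled at build time."""
    env = dict(os.environ)
    # CMAKE_ARGS is picked up by llama-cpp-python's build backend
    # and forwarded to CMake when the wheel is compiled.
    env["CMAKE_ARGS"] = "-DLLAMA_CUBLAS=on"
    cmd = [
        "poetry", "run", "pip", "install",
        "--force-reinstall", "--no-cache-dir",
        "llama-cpp-python",
    ]
    if dry_run:
        # Return what would be executed instead of running it.
        return cmd, env["CMAKE_ARGS"]
    return subprocess.run(cmd, env=env, check=True)
```

The `--force-reinstall --no-cache-dir` flags matter: without them, pip may reuse a previously built CPU-only wheel and the CUDA build never happens.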

From start (fresh Ubuntu installation) to finish, these were the commands I used:

# Initial update and basic dependencies
sudo apt update
sudo apt upgrade
sudo apt install git curl zlib1g-dev tk-dev libffi-dev libncurses-dev libssl-dev libreadline-dev libsqlite3-dev liblzma-dev

# Check for GPU drivers and install them automatically
sudo ubuntu-drivers
sudo ubuntu-drivers list
sudo ubuntu-drivers autoinstall

#
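After `ubuntu-drivers autoinstall` (and a reboot), it's worth confirming the driver actually loaded before rebuilding llama-cpp. A minimal check, assuming an NVIDIA card; `nvidia-smi` ships with the driver, so its absence usually means the install didn't take:

```shell
# Sanity check: is the NVIDIA driver usable?
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi
else
    echo "nvidia-smi not found - driver install may have failed or a reboot is needed"
fi
```

If `nvidia-smi` lists the GPU, the CUDA-enabled llama-cpp build above should detect it at runtime.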

Answer selected by touqeerShah