Torch not compiled with CUDA enabled (on RTX 3060 6gb card with cuda 12.1 installed) #556
Replies: 4 comments
-
From my understanding, CUDA 12.1 is not supported. Can you try installing CUDA 11.8 and the matching cuDNN for 11.8? (I don't remember which cuDNN version it was.)
-
Just a note: I personally run this on CUDA 12.1 myself. In general, I would expect "Torch not compiled with CUDA enabled" errors to come down to a bad install of torch. I would try manually installing torch from https://pytorch.org/ into your conda env. I'm not sure how reliable the pip installer is at choosing the right torch build, since I always install torch manually.
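Before reinstalling anything, it may be worth checking which torch build is actually in the environment. A minimal check, assuming the conda env is activated so that python points at the right interpreter:
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
A CPU-only wheel typically prints a version ending in +cpu, None for the CUDA version, and False for availability, which is exactly the situation that produces "Torch not compiled with CUDA enabled".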
-
Judging by this issue, I don't think installing the CUDA Toolkit and cuDNN is even necessary. So yes, maybe a reinstall of PyTorch would already fix it?
-
Hello, I'm using an RTX 3060 (laptop) with version 3.7.2, and both infer and train work correctly. For this case, follow the PyTorch installation instructions at https://pytorch.org/get-started/locally/ carefully.
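For reference, a reinstall usually looks something like the commands below. This is a sketch: the cu121 index URL is PyTorch's wheel index for CUDA 12.1 builds, but you should use the exact command that the selector on https://pytorch.org/get-started/locally/ generates for your OS and package manager.
pip uninstall torch torchaudio
pip install torch torchaudio --index-url https://download.pytorch.org/whl/cu121
Run these inside the same conda env that svc uses, then re-run the check above and confirm that torch.cuda.is_available() now returns True.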
-
Describe the bug
Torch not compiled with CUDA enabled
When I run the code on the GPU by adding -d cuda, this error appears, even though I have an RTX 3060 6 GB card with CUDA 12.1 installed.
To Reproduce
I ran the following command:
svc infer "C:\Users\user1\Documents\Voice-Clonning\Waiting_for_Rain_30sec.wav" --speaker "tokaiteio" -c "C:\Users\user1\Documents\Voice-Clonning\AllVoices\Tokai-Teio\config.json" -m "C:\Users\user1\Documents\Voice-Clonning\AllVoices\Tokai-Teio\G_531200.pth" -d cuda
Additional context
nvidia-smi result