
GPU parallelism issue when using PyTorch as backend in DeePMD-kit 3.0.0b4 #4175

Answered by njzjz
chenggoj asked this question in Q&A
When I run training with the dpa2 descriptor using PyTorch as the backend via "CUDA_VISIBLE_DEVICES=0,1,2,3 mpirun -np 4 dp --pt", I find that only one GPU is used while the other three stay idle.

MPI training is not supported in the PyTorch backend. xref: #3951 (comment)
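Since the PyTorch backend does not go through MPI, multi-GPU data-parallel training would instead be launched through PyTorch's own distributed launcher. A hedged sketch, assuming `dp --pt train` accepts a `torchrun` launch as described in the DeePMD-kit parallel-training docs (check those docs for the exact invocation on your version):

```shell
# Launch one DDP worker per GPU with torchrun instead of mpirun.
# --nproc_per_node=4 starts four workers, one per visible GPU;
# --no-python tells torchrun to execute the `dp` entry point directly.
# input.json is a placeholder for your own training configuration.
export CUDA_VISIBLE_DEVICES=0,1,2,3
torchrun --nproc_per_node=4 --no-python dp --pt train input.json
```

The key difference from the `mpirun` command in the question is that `torchrun` sets up the `torch.distributed` process group that the PyTorch backend expects, whereas MPI-spawned ranks are unaware of each other and all fall back to a single device.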

Replies: 1 comment 3 replies

njzjz (Maintainer) · Oct 2, 2024
Answer selected by chenggoj