Replies: 1 comment
-
In LAMMPS, does the discrepancy appear at the first step?
-
Dear DeepMD-kit community,
I am currently working with two different servers - one with CPU nodes and another with GPU nodes. On both servers, I have installed DeepMD-kit using the same command:
conda create -n deepmd deepmd-kit lammps horovod -c conda-forge
I've noticed an interesting behavior regarding precision settings. When I use models trained with default precision settings, the LAMMPS calculations yield identical results whether run on GPU or CPU nodes. However, when using models trained with "precision": "float32", I observe different results between GPU and CPU calculations. The difference is relatively small, approximately -0.0005 eV/atom.
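As a rough illustration of what I suspect is happening, here is a minimal NumPy sketch (not DeepMD-kit code, and the per-atom values are made up) showing that summing many float32 contributions in two different orders gives slightly different totals, while float64 agrees essentially to machine precision. Since GPU and CPU kernels generally reduce in different orders, this seems like a plausible source of a small float32-only offset:

```python
# Minimal sketch with fake per-atom energies: the same contributions summed in
# two different orders differ visibly in float32 but only at round-off level
# in float64. GPU and CPU reductions typically use different orders.
import numpy as np

rng = np.random.default_rng(0)
contrib = rng.normal(loc=-3.2, scale=0.8, size=100_000)  # fake per-atom energies (eV)
perm = rng.permutation(contrib.size)                     # a second summation order

for dtype in (np.float32, np.float64):
    x = contrib.astype(dtype)
    in_order = x.sum()           # one reduction order
    reordered = x[perm].sum()    # another reduction order
    print(dtype.__name__, "order-dependent difference:", abs(in_order - reordered))
```

On top of reduction order, GPU kernels may also use fused multiply-adds or non-deterministic atomics, which could add further single-precision differences; this is only my guess at the mechanism.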
I would greatly appreciate it if someone could help me understand:
1. Why this discrepancy occurs specifically with float32 precision
2. What potential solutions might be available to ensure consistent results across both computing environments
Thank you in advance for your time and assistance.
Best regards