Hi deepmodeling developers,

I found that PR #4726 changes the default matmul precision from bf16 to TF32 (or "fp19", as it is sometimes called) rather than fp32. TF32 is definitely better than bf16, thanks to its extra ~1 significant decimal digit. But given its ~4 significant decimal digits versus ~7 for fp32, I'm really skeptical that the meager speedup it provides is worth the large loss in precision, especially since the error accumulates over an MD trajectory. Some relevant discussions: triton-lang/triton#4574 and openmm/openmm#2706.
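For concreteness, here is a minimal sketch (assuming the PyTorch backend on an Ampere-or-newer GPU, where TF32 is available) that measures the extra matmul error TF32 introduces relative to an fp64 reference; the matrix size is arbitrary:

```python
import torch

# TF32 only applies to CUDA matmuls on Ampere-or-newer GPUs.
assert torch.cuda.is_available()

torch.manual_seed(0)
a = torch.randn(4096, 4096, device="cuda")
b = torch.randn(4096, 4096, device="cuda")
ref = a.double() @ b.double()  # fp64 reference product

# "highest" forces true fp32 matmuls; "high" allows TF32.
for mode in ("highest", "high"):
    torch.set_float32_matmul_precision(mode)
    err = (a @ b).double().sub(ref).abs().max().item()
    print(f"{mode}: max abs error vs fp64 = {err:.3e}")
```

On TF32-capable hardware the "high" error is typically orders of magnitude larger than the "highest" one, which is the accumulation concern raised above.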
Replies: 1 comment

- TF32 has been used in other machine learning potentials, as reported in https://dl.acm.org/doi/10.1145/3581784.3627041. The authors claimed that TF32 did not hurt accuracy while greatly improving speed. So far, there is no evidence that TF32 affects the results.
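For anyone who wants to verify this on their own workload, TF32 can be disabled globally in PyTorch before running inference, and the resulting energies/forces compared against a TF32 run; a minimal sketch (these are standard PyTorch flags, not DeePMD-specific settings):

```python
import torch

# Force true fp32 math for matmuls and cuDNN convolutions,
# then rerun the same evaluation and diff the outputs.
torch.backends.cuda.matmul.allow_tf32 = False
torch.backends.cudnn.allow_tf32 = False
torch.set_float32_matmul_precision("highest")
```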