v3.1.0 #4799
njzjz announced in Announcement
What's Changed
Highlights
DPA3
DPA3 is an advanced interatomic potential leveraging the message-passing architecture. Designed as a large atomic model (LAM), DPA3 is tailored to integrate and simultaneously train on datasets from various disciplines, encompassing diverse chemical and materials systems across different research domains. Its model design ensures exceptional fitting accuracy and robust generalization within and beyond the training domain. Furthermore, DPA3 maintains energy conservation and respects the physical symmetries of the potential energy surface, making it a dependable tool for a wide range of scientific applications.
Refer to `examples/water/dpa3/input_torch.json` for the training script. After training, the PyTorch model can be converted to a JAX model.
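Whichever backend the model ends up in, a trained and frozen model can be evaluated from Python through DeePMD-kit's `DeepPot` inference interface. A minimal sketch; the model filename and the single-water-molecule geometry below are illustrative, not part of this release:

```python
import numpy as np
from deepmd.infer import DeepPot

# Load a frozen model; "dpa3_water.pth" is a placeholder filename.
dp = DeepPot("dpa3_water.pth")

# One frame of a single water molecule. Coordinates are flattened to
# shape (nframes, natoms * 3); the cell is flattened to (nframes, 9).
coords = np.array([[0.00, 0.00, 0.00, 0.96, 0.00, 0.00, -0.24, 0.93, 0.00]])
cells = np.array([[10.0, 0.0, 0.0, 0.0, 10.0, 0.0, 0.0, 0.0, 10.0]])
atom_types = [0, 1, 1]  # indices into the model's type_map, e.g. [O, H, H]

energy, forces, virial = dp.eval(coords, cells, atom_types)
print(energy.shape, forces.shape, virial.shape)
```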
PaddlePaddle backend
The PaddlePaddle backend offers a Python interface similar to the PyTorch backend's, ensuring compatibility and flexibility in model development. It brings PaddlePaddle's dynamic-to-static functionality and JIT compiler (CINN) to DeePMD-kit, which together support dynamic shapes and higher-order differentiation. The dynamic-to-static functionality automatically captures the user's dynamic-graph code and converts it into a static graph; the CINN compiler then optimizes the resulting computational graph, improving the efficiency of both training and inference. In experiments with the DPA-2 model, this reduced training time by approximately 40% compared to the dynamic graph.
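The conversion is driven by Paddle's `paddle.jit.to_static` API, whose `backend` option hands the captured static graph to CINN (see #4664 in the change list below). A minimal sketch with a toy layer standing in for a real DeePMD-kit model:

```python
import paddle


class TinyNet(paddle.nn.Layer):
    """Toy stand-in for a real DeePMD-kit model."""

    def __init__(self) -> None:
        super().__init__()
        self.fc = paddle.nn.Linear(16, 16)

    def forward(self, x):
        return paddle.nn.functional.silu(self.fc(x))


net = TinyNet()
# Capture the dynamic-graph code as a static graph; backend="CINN"
# additionally routes the graph through the CINN compiler.
static_net = paddle.jit.to_static(net, backend="CINN")
out = static_net(paddle.randn([4, 16]))
print(out.shape)
```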
Breaking changes
- breaking: enable PyTorch backend for PyPI LAMMPS #4728
- breaking(wheel): bump minimal macos version to 11.0 #4704

Other new features
- feat(pt): add `trainable` to property fitting by @ChiahsinChu in #4599

All changes in v3.0.1, v3.0.2, and v3.0.3 are included.
- `--model-branch` as alias #4730
- feat(pt): add AdamW for pt training #4757
- feat(pt/dp): add dynamic sel for DPA3 #4754
- feat(pt/dp): add exponential switch function #4756
- feat(pt/dp): add distance init for DPA3 edge feat #4760
- chore: update dpa3 example #4778
- Doc: update DPA3 reference #4781
- feat(pt/pd): add size option to dp show #4783
- doc: use `DPA3` instead of `DPA-3` #4792
- `install-from-c-library.md` #4484
- docs: fix the header of the scaling test table #4507
- docs: add v3 paper citations #4619
- merge master to devel (v3.0.0) #4410
- fix: unmark `eval_pd` as `abstractmethod` #4438
- chore(tests): ensure the same result of frame 0 and 1 #4442
- fix(tf): pass `type_one_side` & `exclude_types` to `DPTabulate` in `se_r` #4446
- fix(cc): copy nloc atoms from neighbor list #4459
- fix: print dlerror if dlopen fails #4485
- fix: fix seed with multiple ranks #4479
- chore: fix spelling PRECISON -> PRECISION #4508
- fix(pt): fix clearing the list in set_eval_descriptor_hook #4534
- feat: dpmodel energy loss & consistent tests #4531
- feat(tf): support tensor fitting with hybrid descriptor #4542
- chore: test consistency of rotation matrix #4550
- docs: add `sphinx.configuration` to .readthedocs.yml #4553
- CI: switch linux_aarch64 to GitHub hosted runners #4557
- chore: improve neighbor stat log #4561
- fix: fix YAML conversion #4565
- fix(cc): remove C++ 17 usage #4570
- chore: bump pytorch to 2.6.0 #4575
- fix(pt): detach computed descriptor tensor to prevent OOM #4547
- fix(pt): throw errors for GPU tensors and the CPU OP library #4582
- CI: pin jax to 0.5.0 #4613
- fix(array-api): fix xp.where errors #4624
- feat(pd): add se_atten_v2 #4558
- fix(pt): improve OOM detection #4638
- chore(tf): throw an error if type map is missing in change_energy_bias #4636
- fix(jax): fix typo c_differentiable -> r_differentiable #4640
- feat(jax): Hessian #4649
- fix(jax): fix Hessian NaN for DPA-3 #4668
- fix: fix compatibility with CMake 4.0 #4680
- CI: use libtorch in wheels via `USE_PT_PYTHON_LIBS` #4720
- docs: update the citation of deepmd-kit v3 #4738
- fix(CI): set CMAKE_POLICY_VERSION_MINIMUM environment variable #4692
- fix(CI): upgrade setuptools to fix its compatibility with wheel #4700
- fix(tests): fix tearDownClass and release GPU memory #4702
- CI: bump PyTorch to 2.7 #4717
- fix(jax): fix NaN in sigmoid grad #4724
- fix(jax): set `default_matmul_precision` to `tensorfloat32` #4726 (see the second sketch after this list)
- feat(array-api): env mat stat #4729
- fix(tf): always use float64 for the global tensor #4735
- fix(tf): fix dplr Python inference #4753
- fix(dpmodel): fix normalize scale of initial parameters #4774
- fix(dpmodel): fix energy loss #4765
- fix(jax): workaround for "xxTracer is not a valid JAX type" #4776
- fix(jax): fix repflows JIT issues #4775
- fix(jax): make display_if_exist jit-able #4766
- docs: fix PyTorch compression command #4780
- fix(tf): fix UV resolution with TF 2.19 #4786
- fix(jax): fix DPA3 force NaN with edge_init_use_dist #4794
- `input.json/type_map` #4639
- `water/se_e2_a` #4302
- pd: skip certain UT and fix paddle ver in test_cuda.yml #4439
- pd: support dpa1 #4414
- pd: fix learning rate setting when resume #4480
- pd: fix oom error #4493
- pd: add missing `dp.eval()` in pd backend #4488
- pd: fix typo in deepmd-kit-tmp/deepmd/pd/utils/dataloader.py #4512
- pd: add CPP inference with LAMMPS #4467
- pd: add CINN compiler for dpa2, dpa1 training #4514
- pd: Ignore if branch of 0-size #4617
- feat(pd): Add dpa1 + lammps inference #4556
- [pd] Use rc whl instead of dev whl for python cpu test #4656
- pd: update paddlepaddle version to release/3.0 #4694
- pd: support dpa3 with paddle backend #4701
- pd: fix model saving in DDP mode #4715
- feat(pt): add eta message for pt backend #4725
- pd: revert einsum to matmul for paddle backend #4768
- pd: support CINN for se_e2_a inference #4770
- `num_workers` to 4 #4535
- docs: add PyTorch Profiler support details to TensorBoard documentation #4615
- feat: add new batch size rules for large systems #4659
- Perf: print summary on rank 0 #4434
- perf: optimize training loop #4426
- chore: refactor training loop #4435
- Perf: remove redundant checks on data integrity #4433
- refactor: simplify dataset construction #4437
- Perf: use fused Adam optimizer #4463
- Perf: replace unnecessary `torch.split` with indexing #4505
- Perf: load data systems on rank 0 #4478
- chore: align dataset summary output #4541
- Perf: use F.linear for MLP #4513
- fix: print summary on local_rank=0 #4597
- fix(pt): ensure proper cleanup of distributed process group #4622
- CI: remove duration flag from pytest commands in workflows #4662
- fix: set fused option for Adam optimizer based on device type #4669 (see the first sketch after this list)
- perf: change order of element-wise op in edge angle update calculations #4677
- perf: calculate grad on-the-fly for SiLUT #4678
- perf: reschedule plus op #4688
- perf: use `torch.split` in place of slicing ops in repflow #4687
- fix: remove the use of `BufferedIterator` #4737
- perf: use `torch.embedding` for type embedding #4747
- perf: use einsum to calculate virial #4746
- chore(pt): use more warmup steps #4761
- feat: add use_loc_mapping #4772
- perf: skip bincount if unnecessary #4773
- fix: set NUM_WORKERS=0 for non-fork multiprocessing start methods #4784
- perf: use torch.topk to construct nlist #4751
- perf: avoid graph break for SiLUT when inferring #4790
- feat(pt): add `trainable` to property fitting #4599
- feat: add plugin mode for data modifier #4621
- `build_strategy` with `backend` option in `to_static` API #4664
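Two of the training changes above are easy to illustrate. First, PyTorch's fused Adam/AdamW kernels (#4463, #4757) target CUDA devices, which is why #4669 gates the `fused` flag on the device type. A minimal sketch with toy parameters, not DeePMD-kit's actual trainer code:

```python
import torch

params = [torch.nn.Parameter(torch.randn(8, 8))]  # toy parameters
device = params[0].device

# The fused kernels target CUDA tensors; elsewhere, fall back to the
# default (non-fused) implementation.
optimizer = torch.optim.AdamW(
    params,
    lr=1e-3,
    fused=(device.type == "cuda"),
)
```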
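Second, the JAX fix in #4726 sets the default matrix-multiplication precision explicitly rather than relying on the backend default. The knob itself is a global JAX config flag; a sketch:

```python
import jax
import jax.numpy as jnp

# Make matrix multiplications default to TensorFloat-32 precision.
jax.config.update("jax_default_matmul_precision", "tensorfloat32")

a = jnp.ones((4, 4))
print((a @ a).sum())
```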
New Contributors
- `build_strategy` with `backend` option in `to_static` API #4664

Full Changelog: v3.0.0...v3.1.0rc0
This discussion was created from the release v3.1.0.