Abstract
Machine learning interatomic potentials (MLIPs) are revolutionizing the field of molecular dynamics (MD) simulations. Recent MLIPs have tended towards more complex architectures trained on larger datasets, and the resulting increase in computational and memory costs can prohibit their application to large-scale MD simulations. Herein, we present a teacher-student training framework in which latent knowledge from the teacher (atomic energies) is used to augment the students' training. We show that the lightweight student MLIPs achieve faster MD speeds at a fraction of the memory footprint of the teacher models. Remarkably, the students can even surpass the accuracy of the teachers, despite both being trained on the same quantum chemistry dataset. Our work highlights a practical method for reducing the resources MLIPs require for large-scale MD simulations.
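The core idea described in the abstract can be sketched as a composite training loss: the student matches the quantum-chemistry total energy while its per-atom energy partition is regularized toward the teacher's latent atomic energies. The function below is an illustrative sketch, not the paper's actual implementation; the weight `lam` and all array names are assumptions for exposition.

```python
import numpy as np

def distillation_loss(e_student_atomic, e_teacher_atomic, e_dft_total, lam=0.1):
    """Illustrative teacher-student loss for an MLIP (not the paper's exact form).

    e_student_atomic : per-atom energies predicted by the student model
    e_teacher_atomic : latent per-atom energies extracted from the teacher model
    e_dft_total      : reference total energy from the quantum chemistry dataset
    lam              : hypothetical weight balancing the distillation term
    """
    # Ground-truth term: the student's summed atomic energies
    # should reproduce the reference total energy.
    e_student_total = e_student_atomic.sum()
    loss_energy = (e_student_total - e_dft_total) ** 2

    # Distillation term: nudge the student's energy partition
    # toward the teacher's latent atomic energies.
    loss_distill = np.mean((e_student_atomic - e_teacher_atomic) ** 2)

    return loss_energy + lam * loss_distill
```

In this toy form, a student whose atomic energies sum to the correct total but partition differently from the teacher still incurs a nonzero loss, which is how the teacher's latent knowledge augments training beyond the labels in the dataset.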
| Original language | English |
|---|---|
| Pages (from-to) | 2502-2511 |
| Number of pages | 10 |
| Journal | Digital Discovery |
| Volume | 4 |
| Issue number | 9 |
| DOIs | |
| State | Published - Sep 10 2025 |
Funding
This work was supported by the United States Department of Energy (US DOE), Office of Science, Basic Energy Sciences, Chemical Sciences, Geosciences, and Biosciences Division under the Triad National Security, LLC (‘Triad’) contract grant no. 89233218CNA000001 (FWP: LANLE3F2, LANLE8AN). We acknowledge the Laboratory Directed Research and Development (LDRD) program at Los Alamos National Laboratory (LANL) for funding support. This research was performed in part at the Center for Nonlinear Studies (CNLS) at LANL. This research used resources provided by the Los Alamos National Laboratory Institutional Computing Program and the Darwin testbed at LANL, which is funded by the Computational Systems and Software Environments subprogram of LANL's Advanced Simulation and Computing program.