Over the last few years, key architectural advances have been proposed for neural network interatomic potentials (NNIPs), such as incorporating message-passing networks, equivariance, or many-body expansion terms. Although modern NNIP models exhibit only small differences in energy/force errors, improvements in accuracy are still considered the main target when developing new NNIP architectures. In this work, we show how architectural and optimization choices influence the generalization of NNIPs, revealing trends in molecular dynamics (MD) stability, data efficiency, and loss landscapes. Using the 3BPA dataset, we show that test errors in NNIPs follow a scaling relation and can be robust to noise, but cannot predict MD stability in the high-accuracy regime. To circumvent this problem, we propose the use of loss landscape visualizations and a metric of loss entropy for predicting the generalization power of NNIPs. Through a large-scale study of NequIP and MACE, we show that the loss entropy predicts out-of-distribution error and MD stability despite being computed only on the training set. Using this probe, we demonstrate how the choice of optimizers, loss function weighting, data normalization, and other architectural decisions influence the extrapolation behavior of NNIPs. Finally, we relate loss entropy to data efficiency, demonstrating that flatter landscapes also predict learning curve slopes. Our work provides a deep-learning justification for the extrapolation performance of many common NNIPs, and introduces tools beyond accuracy metrics that can be used to inform the development of next-generation models.
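The loss-landscape probe described above can be illustrated with a minimal sketch: perturb trained parameters along a random direction normalized to the parameter scale, record the training loss along that slice, and use the fraction of the slice that stays below a loss tolerance as a crude flatness proxy. This is a toy illustration on a hypothetical linear-regression stand-in, not the paper's actual NNIP models or its loss-entropy definition; the data, tolerance, and flatness measure are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task standing in for an NNIP training set (hypothetical data).
X = rng.normal(size=(64, 8))
true_w = rng.normal(size=8)
y = X @ true_w

def loss(w):
    """Mean-squared training loss for a linear model with weights w."""
    return np.mean((X @ w - y) ** 2)

# "Trained" parameters: the exact least-squares solution, so loss(w_star) ~ 0.
w_star, *_ = np.linalg.lstsq(X, y, rcond=None)

# 1D loss-landscape slice: L(alpha) = loss(w* + alpha * d) along a random
# direction d, rescaled to the norm of w* (a simple analogue of the filter
# normalization commonly used in loss-landscape visualization).
d = rng.normal(size=8)
d *= np.linalg.norm(w_star) / np.linalg.norm(d)

alphas = np.linspace(-1.0, 1.0, 51)
profile = np.array([loss(w_star + a * d) for a in alphas])

# Crude flatness proxy: the fraction of perturbed points whose loss stays
# under a tolerance -- flatter minima keep more of the slice below threshold.
flatness = np.mean(profile < 1e-1)
print(f"loss at minimum: {profile[25]:.3e}, flat fraction: {flatness:.2f}")
```

In practice one would sweep many random directions (or two directions for a 2D contour plot) and aggregate, rather than rely on a single slice.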