The rapid progress of machine learning interatomic potentials over the past couple of years has produced a number of new architectures. Particularly notable among these are the Atomic Cluster Expansion (ACE), which unified many of the earlier ideas around atom-density-based descriptors, and Neural Equivariant Interatomic Potentials (NequIP), a message-passing neural network with equivariant features that showed state-of-the-art accuracy. In this work, we construct a mathematical framework that unifies these models: ACE is generalised so that it can be recast as one layer of a multi-layer architecture, while, from another point of view, the linearised version of NequIP is understood as a particular sparsification of a much larger polynomial model. Our framework also provides a practical tool for systematically probing different choices in this unified design space. We demonstrate this with an ablation study of NequIP, via a set of experiments looking at in- and out-of-domain accuracy and smooth extrapolation very far from the training data, and shed some light on which design choices are critical for achieving high accuracy. Finally, we present BOTNet (Body-Ordered-Tensor-Network), a much-simplified version of NequIP, which has an interpretable architecture and maintains accuracy on benchmark datasets.