Neural network (NN)-based interatomic potentials provide fast prediction of potential energy surfaces with the accuracy of electronic structure methods. However, NN predictions are only reliable within well-learned training domains, and their behavior when extrapolating is unknown. Uncertainty quantification through NN committees identifies domains with low prediction confidence, but thoroughly exploring the configuration space for training NN potentials often requires slow atomistic simulations. Here, we employ adversarial attacks with a differentiable uncertainty metric to sample new molecular geometries and bootstrap NN potentials. In combination with an active learning loop, the extrapolation power of NN potentials is improved beyond the original training data with few additional samples. The framework is demonstrated on multiple examples, leading to better sampling of kinetic barriers and collective variables without extensive prior data on the relevant geometries. Adversarial attacks offer a new way to simultaneously sample the phase space and bootstrap NN potentials, increasing their robustness and enabling faster, accurate prediction of potential energy landscapes.
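The core idea can be sketched in a few lines of code: a committee of NN potentials predicts energies for the same geometry, the variance of the committee predictions serves as a differentiable uncertainty, and gradient ascent on a coordinate perturbation seeks geometries where the models disagree most. The sketch below is a minimal illustration under these assumptions, not the authors' implementation; the toy `EnergyNet` architecture, committee size, step count, and learning rate are all hypothetical placeholders.

```python
import torch
import torch.nn as nn

class EnergyNet(nn.Module):
    """Toy NN potential mapping flattened Cartesian coordinates to a scalar energy."""
    def __init__(self, n_atoms: int, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 * n_atoms, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        return self.mlp(coords.flatten(start_dim=-2)).squeeze(-1)

def committee_uncertainty(models, coords):
    """Variance of committee energy predictions: a differentiable uncertainty metric."""
    energies = torch.stack([m(coords) for m in models])  # shape: (n_models,)
    return energies.var(dim=0, unbiased=False)

def adversarial_attack(models, coords, n_steps=50, lr=1e-2):
    """Gradient ascent on a displacement delta to maximize committee disagreement."""
    delta = torch.zeros_like(coords, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        loss = -committee_uncertainty(models, coords + delta)  # maximize variance
        loss.backward()
        opt.step()
    return (coords + delta).detach()

# Usage: attack a seed geometry with a 4-member committee of toy potentials.
n_atoms = 5
committee = [EnergyNet(n_atoms) for _ in range(4)]
seed = torch.randn(n_atoms, 3)
new_geometry = adversarial_attack(committee, seed)
```

In an active learning loop, geometries produced this way would be labeled with an electronic structure method and added to the training set, so that successive committees become more confident in the regions where they previously disagreed.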