Recently, it has been shown that neural networks not only approximate the ground-state wave function of a single molecular system well but can also generalize across multiple geometries. While such generalization significantly speeds up training, each energy evaluation still requires Monte Carlo integration, which limits evaluation to only a few geometries. In this work, we address these inference shortcomings by proposing the Potential learning from ab-initio Networks (PlaNet) framework, in which we train a surrogate model alongside the neural wave function. At inference time, the surrogate avoids expensive Monte Carlo integration by estimating the energy directly, accelerating evaluation from hours to milliseconds. In this way, we can accurately model high-resolution, multi-dimensional energy surfaces for larger systems that were previously unobtainable with neural wave functions. Finally, we explore an additional inductive bias by introducing physically motivated restricted neural wave function models. We implement such a function, along with several further improvements, in the new PESNet++ model. In our experimental evaluation, PlaNet accelerates inference by 7 orders of magnitude for larger molecules such as ethanol while preserving accuracy. Compared to previous energy surface networks, PESNet++ reduces energy errors by up to 74%.
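The core idea of the surrogate can be sketched in a toy setting: fit a cheap model to noisy Monte Carlo energy estimates collected during training, so that evaluating the energy surface at inference time is a direct function call rather than an expensive integration. Everything below is an illustrative assumption, not the paper's actual architecture: the 1-D "geometry" coordinate, the quadratic toy surface, and the polynomial surrogate all stand in for the real neural wave function and PlaNet surrogate network.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_energy_estimate(r, n_samples=256):
    """Stand-in for a Monte Carlo energy estimate at geometry r.

    In PlaNet the targets come from sampling the neural wave function;
    here we mimic that with a noisy evaluation of a toy energy well.
    """
    true_energy = (r - 1.0) ** 2 - 0.5           # toy potential well
    noise = rng.normal(0.0, 0.05, n_samples)     # Monte Carlo sampling noise
    return true_energy + noise.mean()

# "Training": collect MC estimates at a handful of geometries.
train_r = np.linspace(0.5, 2.0, 20)
train_E = np.array([mc_energy_estimate(r) for r in train_r])

def features(r):
    # Polynomial features as a minimal surrogate model class.
    return np.stack([np.ones_like(r), r, r ** 2], axis=-1)

# Fit the surrogate by least squares on the noisy MC targets.
coef, *_ = np.linalg.lstsq(features(train_r), train_E, rcond=None)

def surrogate_energy(r):
    """Inference: direct evaluation, no Monte Carlo sampling needed."""
    return features(np.asarray(r, dtype=float)) @ coef

# A dense scan of the energy surface is now essentially free.
grid = np.linspace(0.5, 2.0, 1000)
surface = surrogate_energy(grid)
```

The speedup in the abstract comes from exactly this asymmetry: each training target costs a full Monte Carlo integration, but once the surrogate is fit, querying thousands of geometries is a single cheap matrix product.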