Physics-Informed Neural Networks (PINNs) have recently emerged as a promising application of deep neural networks to the numerical solution of nonlinear partial differential equations (PDEs). However, it has been recognized that adaptive procedures are needed to force the neural network to accurately fit the stubborn spots in the solution of "stiff" PDEs. In this paper, we propose a fundamentally new way to train PINNs adaptively, where the adaptation weights are fully trainable and applied to each training point individually, so the neural network learns autonomously which regions of the solution are difficult and is forced to focus on them. The self-adaptation weights specify a soft multiplicative attention mask, reminiscent of similar mechanisms used in computer vision. The basic idea behind these Self-Adaptive PINNs (SA-PINNs) is to make the weights increase as the corresponding losses increase, which is accomplished by training the network to simultaneously minimize the losses and maximize the weights. We show how to build a continuous map of self-adaptive weights using Gaussian Process regression, which allows the use of stochastic gradient descent in problems where conventional gradient descent is not enough to produce accurate solutions. Finally, we derive the Neural Tangent Kernel (NTK) matrix for SA-PINNs and use it to obtain a heuristic understanding of the effect of the self-adaptive weights on the dynamics of training in the limiting case of infinitely wide PINNs, which suggests that SA-PINNs work by producing a smooth equalization of the eigenvalues of the NTK matrix corresponding to the different loss terms. In numerical experiments with several linear and nonlinear benchmark problems, the SA-PINN outperformed other state-of-the-art PINN algorithms in L2 error, while using fewer training epochs.
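The minimax idea described above — descend the weighted loss in the network parameters while ascending it in the per-point weights — can be sketched on a toy scalar problem. This is a minimal illustrative sketch, not the paper's implementation: the variable names, the toy residuals, and the learning rates are all assumptions chosen for clarity.

```python
import numpy as np

# Toy sketch of the SA-PINN minimax update (illustrative, not the
# authors' code). The weighted loss is L(theta, lam) = sum_i lam_i * r_i^2,
# where r_i are per-point residuals. theta takes gradient DESCENT steps
# on L while each lam_i takes gradient ASCENT steps, so the weights on
# hard-to-fit points grow and focus training on them.

targets = np.array([0.0, 0.1, 2.0])  # the point at 2.0 is the "stubborn" one
theta = 0.0                          # scalar stand-in for the network parameters
lam = np.ones_like(targets)          # one trainable self-adaptive weight per point
lr_theta, lr_lam = 0.05, 0.01        # separate rates for descent and ascent

for _ in range(100):
    r = theta - targets              # per-point residuals
    grad_theta = np.sum(2 * lam * r) # dL/dtheta: descend
    grad_lam = r ** 2                # dL/dlam_i = r_i^2 >= 0: ascend
    theta -= lr_theta * grad_theta
    lam += lr_lam * grad_lam

# The weight attached to the point with the largest residual grows fastest,
# which is the soft attention mask effect described in the abstract.
print(lam)
```

Because dL/dlam_i = r_i^2 is nonnegative, the ascent step can only increase the weights, and it increases them fastest exactly where the residual is largest; in a full SA-PINN the same update is applied to the weights on the PDE residual, boundary, and initial-condition points.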