Physics-Informed Neural Networks (PINNs) have become a prominent application of deep learning in scientific computing, as they are powerful approximators of solutions to nonlinear partial differential equations (PDEs). There have been numerous attempts to facilitate the training of PINNs by adjusting the weight of each component of the loss function; these are called adaptive loss balancing algorithms. In this paper, we propose an Augmented Lagrangian relaxation method for PINNs (AL-PINNs). We treat the initial and boundary conditions as constraints on the optimization problem for the PDE residual. By employing Augmented Lagrangian relaxation, the constrained optimization problem becomes a sequential max-min problem in which the learnable multipliers $\lambda$ adaptively balance each loss component. Our theoretical analysis shows that the sequence of minimizers of the proposed loss functions converges to an actual solution for the Helmholtz, viscous Burgers, and Klein--Gordon equations. We demonstrate through various numerical experiments that AL-PINNs yield a much smaller relative error than state-of-the-art adaptive loss balancing algorithms.
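To make the max-min structure concrete, the sketch below illustrates one plausible form of the training loop on a toy 1D Poisson problem $u''(x) = -\pi^2 \sin(\pi x)$ with $u(0)=u(1)=0$: the network parameters are updated by gradient descent on the augmented Lagrangian $\mathcal{L}_{\text{res}}(\theta) + \langle \lambda, C(\theta)\rangle + \tfrac{\mu}{2}\lVert C(\theta)\rVert^2$, while the multipliers $\lambda$ are updated by gradient ascent. The network size, penalty coefficient `mu`, learning rate, and update schedule are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch of an Augmented-Lagrangian PINN training loop (assumed setup,
# not the authors' reference implementation).
import math
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

x_res = torch.rand(128, 1, requires_grad=True)  # interior collocation points
x_bc = torch.tensor([[0.0], [1.0]])             # boundary points
lam = torch.zeros(len(x_bc))                    # one Lagrange multiplier per constraint
mu = 1.0                                        # penalty coefficient (assumed fixed)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def pde_residual(x):
    # Residual of u'' = -pi^2 sin(pi x), computed via automatic differentiation.
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    f = -math.pi**2 * torch.sin(math.pi * x)
    return d2u - f

for step in range(5000):
    opt.zero_grad()
    res = pde_residual(x_res)
    bc = net(x_bc).squeeze(1)  # constraint values u(0), u(1); target is 0
    # Augmented Lagrangian: residual loss + multiplier term + quadratic penalty.
    loss = (res**2).mean() + (lam * bc).sum() + 0.5 * mu * (bc**2).sum()
    loss.backward()
    opt.step()                 # descent step in the network parameters
    with torch.no_grad():      # ascent step in the multipliers
        lam += mu * net(x_bc).squeeze(1)
```

In this sketch the quadratic penalty keeps the constraints from being ignored early in training, while the multiplier ascent step lets each $\lambda$ grow until its boundary constraint is satisfied, which is how the multipliers act as adaptive loss weights.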