The Physics-Informed Neural Network (PINN) approach is a promising new way to solve partial differential equations using deep learning, and the $L^2$ Physics-Informed Loss is the de facto standard for training Physics-Informed Neural Networks. In this paper, we challenge this common practice by investigating the relationship between the loss function and the approximation quality of the learned solution. In particular, we leverage the concept of stability from the partial differential equation literature to study the asymptotic behavior of the learned solution as the loss approaches zero. With this concept, we study an important class of high-dimensional non-linear PDEs in optimal control, the Hamilton-Jacobi-Bellman (HJB) equations, and prove that for the general $L^p$ Physics-Informed Loss, a wide class of HJB equations is stable only if $p$ is sufficiently large. Therefore, the commonly used $L^2$ loss is not suitable for training PINNs on those equations, whereas the $L^{\infty}$ loss is a better choice. Based on this theoretical insight, we develop a novel PINN training algorithm that minimizes the $L^{\infty}$ loss for HJB equations in a spirit similar to adversarial training. The effectiveness of the proposed algorithm is empirically demonstrated through experiments.
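To make the adversarial-training idea concrete, the snippet below is a minimal PyTorch sketch, not the authors' exact algorithm: an inner projected-gradient ascent searches for collocation points where the PDE residual is largest, and an outer step updates the network to minimize that worst-case residual, i.e., an empirical $L^{\infty}$ loss. The MLP, the toy one-dimensional HJB-like residual $u'' - (u')^2 + 1 = 0$, and all hyperparameters are illustrative assumptions (the paper targets high-dimensional HJB equations).

```python
import torch

torch.manual_seed(0)

# Hypothetical PINN: a small MLP approximating the solution u(x).
net = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1)
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
lo, hi = -1.0, 1.0  # assumed 1-D spatial domain

def residual(x):
    # |PDE residual| of a toy stationary HJB-like equation
    # u'' - (u')^2 + 1 = 0 (illustrative stand-in for the real HJB residual).
    x = x.requires_grad_(True)
    u = net(x)
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    return (u_xx - u_x ** 2 + 1.0).abs()

for step in range(2000):
    # Inner maximization: PGD-style ascent moves random collocation
    # points toward the largest residual (the "adversarial" points).
    x = lo + (hi - lo) * torch.rand(256, 1)
    for _ in range(10):
        x = x.detach()
        r = residual(x)
        g = torch.autograd.grad(r.sum(), x)[0]
        x = (x + 0.01 * g.sign()).clamp(lo, hi)
    # Outer minimization: shrink the worst-case (L^inf) residual.
    opt.zero_grad()
    residual(x.detach()).max().backward()
    opt.step()
```

Taking the maximum over adversarially refined points, rather than the mean of squared residuals over random ones, is what distinguishes this empirical $L^{\infty}$ objective from the standard $L^2$ Physics-Informed Loss.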