The Physics-Informed Neural Network (PINN) approach is a new and promising way to solve partial differential equations using deep learning. The $L^2$ Physics-Informed Loss is the de facto standard for training Physics-Informed Neural Networks. In this paper, we challenge this common practice by investigating the relationship between the loss function and the approximation quality of the learned solution. In particular, we leverage the concept of stability from the partial differential equation literature to study the asymptotic behavior of the learned solution as the loss approaches zero. With this concept, we study an important class of high-dimensional non-linear PDEs in optimal control, the Hamilton-Jacobi-Bellman (HJB) equation, and prove that for the general $L^p$ Physics-Informed Loss, a wide class of HJB equations is stable only if $p$ is sufficiently large. Therefore, the commonly used $L^2$ loss is not suitable for training PINNs on those equations, while the $L^{\infty}$ loss is a better choice. Based on this theoretical insight, we develop a novel PINN training algorithm that minimizes the $L^{\infty}$ loss for HJB equations, in a similar spirit to adversarial training. The effectiveness of the proposed algorithm is empirically demonstrated through experiments. Our code is released at https://github.com/LithiumDA/L_inf-PINN.
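The abstract describes the adversarial-style $L^{\infty}$ training procedure only at a high level. The sketch below is a minimal illustration of the general idea, not the released implementation: an inner loop moves collocation points uphill on the squared PDE residual (approximating the worst-case, i.e. $L^{\infty}$, loss), and an outer step minimizes the residual at those adversarial points. The toy 1D equation $u'' = \sin(x)$, the network size, the step sizes, and the number of ascent steps are all illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Small fully-connected network representing the PDE solution u(x).
net = nn.Sequential(
    nn.Linear(1, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def residual(x):
    """PDE residual r(x) = u''(x) - sin(x) for the toy problem u'' = sin(x)."""
    x = x.requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    return d2u - torch.sin(x)

for step in range(1000):
    # Sample fresh collocation points in the domain [0, 1].
    x = torch.rand(256, 1)
    # Inner maximization (adversarial step): move points uphill on the
    # squared residual so they approximate the worst-case locations.
    for _ in range(5):
        r = residual(x)
        g = torch.autograd.grad((r ** 2).sum(), x)[0]
        x = (x + 0.01 * g.sign()).clamp(0.0, 1.0).detach()
    # Outer minimization: train on the largest residual over the batch,
    # a finite-sample surrogate for the L^infty physics-informed loss.
    # (Boundary-condition terms are omitted for brevity.)
    loss = (residual(x) ** 2).max()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In contrast, a standard $L^2$ PINN would simply minimize the mean squared residual over uniformly sampled points; the inner ascent loop is what targets the worst-case error emphasized in the paper.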