There have been extensive studies on solving differential equations with physics-informed neural networks. While this method has proven advantageous in many cases, a major criticism is its lack of analytical error bounds, which makes it less credible than traditional counterparts such as the finite difference method. This paper shows that one can mathematically derive explicit error bounds for physics-informed neural networks trained on a class of linear systems of differential equations. More importantly, evaluating such a bound requires only evaluating the infinity norm of the differential-equation residual over the domain of interest. Our work thus establishes a link between the network residual, which is known and used as the loss function, and the absolute error of the solution, which is generally unknown. Our approach is semi-phenomenological and independent of knowledge of the actual solution and of the complexity or architecture of the network. Using the method of manufactured solutions on linear ODEs and systems of linear ODEs, we empirically verify the error-evaluation algorithm and demonstrate that the actual error strictly lies within our derived bound.
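To make the residual-based evaluation concrete, the following is a minimal sketch assuming PyTorch-style automatic differentiation (the paper does not prescribe a framework). The ODE u' + u = f, the manufactured solution u*(x) = sin x, the network architecture, and the grid resolution are all illustrative placeholders, not the paper's test cases; training is omitted, and the mapping from the residual infinity norm to the explicit error bound depends on the system and is not reproduced here.

```python
import torch

# Manufactured solution u*(x) = sin(x) for the linear ODE
# u'(x) + u(x) = f(x), so f(x) = cos(x) + sin(x). Both the ODE
# and u* are illustrative choices, not the paper's test cases.
u_star = torch.sin
f = lambda x: torch.cos(x) + torch.sin(x)

# Hypothetical stand-in for a trained PINN; a real experiment
# would first train this network on the residual loss.
torch.manual_seed(0)
u_theta = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def residual(x):
    """Pointwise ODE residual r(x) = u_theta'(x) + u_theta(x) - f(x)."""
    x = x.detach().clone().requires_grad_(True)
    u = u_theta(x)
    du, = torch.autograd.grad(u.sum(), x)
    return du + u - f(x)

# Dense grid over the domain of interest [0, 1].
xs = torch.linspace(0.0, 1.0, 10_001).unsqueeze(1)

# ||r||_inf: the only quantity the bound evaluation needs.
r_inf = residual(xs).abs().max().item()
# True absolute error, computable here only because u* is manufactured.
err_inf = (u_theta(xs) - u_star(xs)).abs().max().item()

print(f"||r||_inf  = {r_inf:.3e}")
print(f"max|u-u*|  = {err_inf:.3e}")
```

In an experiment of the kind the abstract describes, the derived bound would be computed from the measured residual infinity norm alone, and the manufactured solution lets one check directly that the true error stays below it.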