Physics-informed neural networks (PINNs) are one popular approach to incorporating a priori knowledge about physical systems into the learning framework. PINNs are known to be robust for smaller training sets, to exhibit better generalization properties, and to be faster to train. In this paper, we show that using PINNs rather than purely data-driven neural networks is not only favorable for training performance but also allows us to extract significant information on the quality of the approximated solution. Assuming that the underlying differential equation in the PINN training is an ordinary differential equation, we derive a rigorous upper bound on the PINN prediction error. This bound is applicable even to input data not included in the training phase and requires no prior knowledge of the true solution. Therefore, our a posteriori error estimation is an essential step towards certifying the PINN. We apply our error estimator to two academic toy problems, one of which falls into the category of model-predictive control and thereby demonstrates the practical use of the derived results.
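To illustrate the PINN idea referenced above, the following minimal PyTorch sketch trains a network on the hypothetical toy ODE u'(t) = -u(t), u(0) = 1 (an assumed example for illustration, not one of the paper's test cases): the ODE residual is added to the loss alongside the initial condition, which is what distinguishes a PINN from a purely data-driven network.

```python
import torch
import torch.nn as nn

# Minimal PINN sketch for the toy ODE u'(t) = -u(t), u(0) = 1.
# The physics residual u' + u enters the loss together with the
# initial-condition term; this is how a priori knowledge of the
# differential equation is incorporated into training.

torch.manual_seed(0)

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

t_collocation = torch.linspace(0.0, 2.0, 100).reshape(-1, 1)
t0 = torch.zeros(1, 1)  # initial time
u0 = torch.ones(1, 1)   # initial value u(0) = 1

for step in range(2000):
    optimizer.zero_grad()

    t = t_collocation.clone().requires_grad_(True)
    u = net(t)
    # du/dt via automatic differentiation
    du_dt = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]

    residual = du_dt + u                    # ODE residual u' + u = 0
    loss_physics = (residual ** 2).mean()   # physics-informed term
    loss_ic = ((net(t0) - u0) ** 2).mean()  # initial-condition term

    loss = loss_physics + loss_ic
    loss.backward()
    optimizer.step()

# After training, net(t) approximates exp(-t); the residual evaluated on
# unseen inputs is the kind of quantity an a posteriori error bound can
# be built from, since it requires no knowledge of the true solution.
```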