We revisit the original approach of solving differential equations with deep neural networks by incorporating knowledge of the equation itself. This is done by adding a dedicated term to the loss function during the optimization procedure in the training process. The resulting physics-informed neural networks (PINNs) are tested on a variety of academic ordinary differential equations in order to highlight the benefits and drawbacks of this approach with respect to standard integration methods. We focus on using as little training data as possible. The principles of PINNs for solving differential equations by enforcing physical laws via penalty terms are reviewed. A tutorial on a simple model equation illustrates how to put the method into practice for ordinary differential equations. Benchmark tests show that a very small amount of training data is sufficient to predict the solution when the nonlinearity of the problem is weak. This is not the case, however, for strongly nonlinear problems, where a priori knowledge of training data over part of or the whole time integration interval is necessary.
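To make the loss structure described above concrete, the following toy sketch minimizes a physics-informed loss for the simple ODE u'(t) = -u(t) with u(0) = 1. For brevity it replaces the neural network with a hypothetical one-parameter trial function u(t; a) = exp(a t) and uses plain gradient descent with a numerical gradient; all names and the collocation grid are illustrative assumptions, not the paper's actual setup.

```python
import math

# Toy PINN-style loss for the ODE u'(t) = -u(t), u(0) = 1.
# A one-parameter trial function u(t; a) = exp(a * t) stands in for the
# neural network; the exact solution corresponds to a = -1.

def u(t, a):
    return math.exp(a * t)

def du_dt(t, a):
    return a * math.exp(a * t)

def loss(a, ts):
    # Physics term: penalize violations of the residual u' + u = 0
    # at the collocation points ts.
    physics = sum((du_dt(t, a) + u(t, a)) ** 2 for t in ts) / len(ts)
    # Data term: enforce the initial condition u(0) = 1.
    boundary = (u(0.0, a) - 1.0) ** 2
    return physics + boundary

# Minimize the combined loss by gradient descent with a central-difference gradient.
ts = [i / 10 for i in range(11)]   # collocation points in [0, 1] (illustrative)
a, lr, eps = 0.5, 0.05, 1e-6
for _ in range(2000):
    grad = (loss(a + eps, ts) - loss(a - eps, ts)) / (2 * eps)
    a -= lr * grad

print(round(a, 3))   # a is driven close to -1, the exact decay rate
```

In a real PINN the scalar parameter a becomes the network weights, the derivative u' is obtained by automatic differentiation of the network output with respect to its input, and the same two-term loss (equation residual plus data/boundary misfit) is minimized by a stochastic optimizer.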