Physics-informed neural networks (PINNs) approximate solutions of PDEs by minimizing pointwise residuals. We derive rigorous bounds on the error incurred by PINNs in approximating the solutions of a large class of linear parabolic PDEs, namely Kolmogorov equations, which include the heat equation and the Black-Scholes equation of option pricing as examples. We construct neural networks whose PINN residual (generalization error) can be made as small as desired. We also prove that the total $L^2$-error can be bounded by the generalization error, which in turn is bounded in terms of the training error, provided that a sufficiently large number of randomly chosen training (collocation) points is used. Moreover, we prove that the size of the PINNs and the number of training samples grow only polynomially in the underlying dimension, enabling PINNs to overcome the curse of dimensionality in this context. These results enable us to provide a comprehensive error analysis for PINNs approximating Kolmogorov PDEs.
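For concreteness, here is a minimal sketch of the objects involved, in notation that is ours rather than taken from the paper, for the simplest Kolmogorov equation mentioned above, the heat equation $u_t = \Delta u$. Given a neural network $u_\theta$ with parameters $\theta$, the pointwise PINN residual and the training error over $N$ randomly chosen collocation points $(x_n, t_n)$ read
$$
\mathcal{R}_\theta(x,t) = \partial_t u_\theta(x,t) - \Delta u_\theta(x,t),
\qquad
\mathcal{E}_T(\theta)^2 = \frac{1}{N}\sum_{n=1}^{N} \big|\mathcal{R}_\theta(x_n, t_n)\big|^2 .
$$
(This sketch omits the analogous initial- and boundary-condition residuals that a full PINN loss also contains.) Schematically, the error analysis above then takes the form of a chain: the total error $\|u - u_\theta\|_{L^2}$ is controlled by the generalization error, which is in turn controlled by the training error $\mathcal{E}_T(\theta)$ plus a quadrature term that vanishes as $N$ grows.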