Recently, neural networks have been widely applied to solve partial differential equations (PDEs). Although such methods have proven remarkably successful on practical engineering problems, they have not been shown, theoretically or empirically, to converge to the underlying PDE solution with arbitrarily high accuracy. The primary difficulty lies in solving the highly non-convex optimization problems resulting from the neural network discretization, which are difficult to treat both theoretically and practically. It is our goal in this work to take a step toward remedying this. For this purpose, we develop a novel greedy training algorithm for shallow neural networks. Our method is applicable both to the variational formulation of the PDE and to the residual minimization formulation pioneered by physics-informed neural networks (PINNs). We analyze the method and obtain a priori error bounds when the PDE solution lies in the function class defined by shallow networks, which rigorously establishes the convergence of the method as the network size increases. Finally, we test the algorithm on several benchmark examples, including high-dimensional PDEs, to confirm the theoretical convergence rate. Although the method is expensive relative to traditional approaches such as finite element methods, we view this work as a proof of concept for neural network-based methods, demonstrating that numerical methods based upon neural networks can be rigorously proven to converge.
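To make the idea of greedy training concrete, the following is a minimal sketch only, not the algorithm developed in this work: an orthogonal greedy iteration for residual minimization of the model problem -u'' = f on (0,1) with exact solution sin(pi*x), using a randomly sampled dictionary of ReLU^2 neurons on a collocation grid. The model problem, the sampled (rather than optimized) candidate dictionary, and the omission of boundary-condition terms are all simplifying assumptions made for brevity.

```python
# Illustrative sketch of an orthogonal greedy iteration for residual
# minimization (NOT the exact algorithm of the paper). Model problem:
# -u'' = f on (0,1); boundary terms of the loss are omitted for brevity.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 2001)          # collocation grid for the residual
f = np.pi**2 * np.sin(np.pi * x)         # exact solution is sin(pi*x)

def neuron_dd(w, b):
    """Second derivative of the ReLU^2 neuron max(w*x+b, 0)**2 on the grid,
    i.e. 2*w**2 * 1{w*x + b > 0}, well defined almost everywhere."""
    return 2.0 * w**2 * ((w * x + b) > 0)

# Randomly sampled candidate dictionary; a practical solver would optimize
# over (w, b) in each greedy step instead of sampling once.
candidates = list(zip(rng.uniform(-5, 5, 500), rng.uniform(-5, 5, 500)))

selected, r = [], f.copy()               # residual of -u'' = f starts at f
for n in range(30):
    # Greedy selection: neuron whose normalized image -g'' under the
    # differential operator best correlates with the current residual.
    scores = []
    for (w, b) in candidates:
        g = -neuron_dd(w, b)
        nrm = np.linalg.norm(g)
        scores.append(abs(g @ r) / nrm if nrm > 0 else 0.0)
    selected.append(candidates[int(np.argmax(scores))])

    # Orthogonal projection step: re-solve the least-squares problem over
    # the span of all selected neurons (the "orthogonal" greedy update).
    A = np.column_stack([-neuron_dd(w, b) for (w, b) in selected])
    coef, *_ = np.linalg.lstsq(A, f, rcond=None)
    r = f - A @ coef
    print(f"n={n+1:2d}  residual norm = {np.linalg.norm(r):.3e}")

# The approximate solution is the shallow network sum_i coef[i] * g_i(x)
# with g_i(x) = max(w_i*x + b_i, 0)**2 over the selected (w_i, b_i).
```

Each iteration adds a single neuron, so the printed residual norms trace the convergence of the method as the network size increases, which is the quantity the a priori error bounds control.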