Recently, neural networks have been widely applied to solving partial differential equations (PDEs). Although such methods have proven remarkably successful on practical engineering problems, they have not been shown, theoretically or empirically, to converge to the underlying PDE solution with arbitrarily high accuracy. The primary difficulty lies in solving the highly non-convex optimization problems resulting from the neural network discretization, which are difficult to treat both theoretically and practically. It is our goal in this work to take a step toward remedying this. For this purpose, we develop a novel greedy training algorithm for shallow neural networks. Our method applies both to the variational formulation of the PDE and to the residual minimization formulation pioneered by physics-informed neural networks (PINNs). We analyze the method and obtain a priori error bounds when the PDE solution lies in the function class defined by shallow networks, which rigorously establishes the convergence of the method as the network size increases. Finally, we test the algorithm on several benchmark examples, including high-dimensional PDEs, to confirm the theoretical convergence rate. Although the method is expensive relative to traditional approaches such as finite element methods, we view this work as a proof of concept showing that numerical methods based upon neural networks can be made to converge rigorously.
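To make the greedy training idea concrete, the following is a minimal sketch, in Python/NumPy, of one possible orthogonal-greedy loop for the residual minimization formulation. Everything problem-specific here is an illustrative assumption rather than the paper's actual implementation or experiments: the 1D model problem -u'' + u = f on [0, 1] with exact solution u(x) = sin(pi x), the ReLU^2 dictionary, the random candidate search, and the boundary-penalty weight.

```python
import numpy as np

rng = np.random.default_rng(0)

# Collocation grid and right-hand side for the (assumed) test problem
#   -u'' + u = f on [0, 1], u(0) = u(1) = 0, exact solution u*(x) = sin(pi x).
x = np.linspace(0.0, 1.0, 201)
f = (np.pi**2 + 1.0) * np.sin(np.pi * x)

relu2 = lambda t: np.maximum(t, 0.0) ** 2   # ReLU^2 activation (C^1, piecewise smooth)
relu2_dd = lambda t: 2.0 * (t > 0.0)        # its second derivative

def column(w, b):
    """Discretized action of the operator (-d^2/dx^2 + I) on the dictionary
    element sigma(w*x + b), plus penalty rows enforcing u(0) = u(1) = 0."""
    t = w * x + b
    interior = -(w**2) * relu2_dd(t) + relu2(t)
    beta = 10.0                             # boundary penalty weight (assumed)
    return np.concatenate([interior, beta * relu2(np.array([b, w + b]))])

rhs = np.concatenate([f, np.zeros(2)])
cols, params, coef = [], [], np.zeros(0)

for k in range(50):                         # each greedy step adds one neuron
    r = rhs if not cols else rhs - np.column_stack(cols) @ coef
    # Approximate argmax over the dictionary: sample random (w, b) candidates
    # and keep the one best correlated with the current residual.
    best, best_val = None, -1.0
    for _ in range(2000):
        w, b = rng.choice([-1.0, 1.0]), rng.uniform(-2.0, 2.0)
        g = column(w, b)
        nrm = np.linalg.norm(g)
        if nrm > 1e-12 and abs(g @ r) / nrm > best_val:
            best, best_val = (w, b), abs(g @ r) / nrm
    params.append(best)
    cols.append(column(*best))
    # Orthogonal projection step: refit all outer coefficients by least squares.
    coef, *_ = np.linalg.lstsq(np.column_stack(cols), rhs, rcond=None)

u = np.column_stack([relu2(w * x + b) for (w, b) in params]) @ coef
print("max error vs. sin(pi x):", np.abs(u - np.sin(np.pi * x)).max())
```

The projection step is what distinguishes an orthogonal greedy method from end-to-end gradient training: the only non-convex subproblem is the low-dimensional search over the dictionary parameters (w, b), which this sketch crudely approximates by random sampling.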