Recently, neural networks have been widely applied to the solution of partial differential equations (PDEs). However, with current training algorithms, numerical convergence of neural network solvers for PDEs has not been observed empirically. The primary difficulty lies in solving the highly non-convex optimization problems that result from the neural network discretization. The optimization process is difficult to analyze theoretically, and in practice extensive hyperparameter tuning is required to achieve acceptable results. To overcome this challenge, in this paper we develop a novel greedy training algorithm for shallow neural networks. We also analyze the resulting method and obtain a priori error bounds when solving PDEs whose solutions belong to the function class defined by shallow networks. This rigorously establishes the convergence of the method as the network size increases. Finally, we test the algorithm on several benchmark examples, including high-dimensional PDEs, to confirm the theoretical convergence rate and to demonstrate its efficiency and robustness. An advantage of this method is its straightforward applicability to high-order equations on general domains.
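The abstract names a greedy training algorithm for shallow networks but does not spell it out. Below is a minimal sketch of one standard greedy variant in this spirit: an orthogonal greedy loop that grows a shallow ReLU²-neuron network one neuron at a time for an energy-minimization formulation of a simple 1D model problem. The model problem, parameter ranges, and helper names (`neuron`, `energy_inner`) are illustrative assumptions, not the paper's actual method or experiments.

```python
import numpy as np

# Midpoint quadrature grid on (0, 1).
M = 2000
x = (np.arange(M) + 0.5) / M
h = 1.0 / M

# Illustrative manufactured problem: -u'' + u = f with exact solution
# u(x) = cos(pi x), which satisfies the natural BCs u'(0) = u'(1) = 0,
# so the weak form needs no boundary penalty.
u_exact = np.cos(np.pi * x)
f = (np.pi**2 + 1.0) * np.cos(np.pi * x)

def neuron(w, b):
    """A ReLU^2 neuron and its derivative (C^1, hence in H^1)."""
    z = np.maximum(w * x + b, 0.0)
    return z**2, 2.0 * w * z

def energy_inner(gu, gdu, hu, hdu):
    """a(u, v) = integral of u'v' + uv, via the midpoint rule."""
    return h * np.sum(gdu * hdu + gu * hu)

rng = np.random.default_rng(0)
basis_vals, basis_ders = [], []
coeffs = np.zeros(0)

for n in range(40):  # number of neurons = network size
    # Current approximation u_n and its derivative.
    if basis_vals:
        u = coeffs @ np.array(basis_vals)
        du = coeffs @ np.array(basis_ders)
    else:
        u = du = np.zeros(M)

    # Greedy step: among randomly sampled candidate neurons, pick the one
    # maximizing the normalized residual correlation
    # |a(u_n, g) - (f, g)| / ||g||_a.
    best, best_score = None, -1.0
    for _ in range(500):
        w = rng.uniform(-10.0, 10.0)
        b = rng.uniform(-10.0, 10.0)
        g, dg = neuron(w, b)
        norm2 = energy_inner(g, dg, g, dg)
        if norm2 < 1e-12:  # neuron vanishes on (0, 1); skip it
            continue
        score = abs(energy_inner(u, du, g, dg) - h * np.sum(f * g))
        score /= np.sqrt(norm2)
        if score > best_score:
            best_score, best = score, (g, dg)

    basis_vals.append(best[0])
    basis_ders.append(best[1])

    # Orthogonal projection step: re-solve the Galerkin system over the
    # span of all neurons selected so far (lstsq guards against a
    # nearly singular Gram matrix).
    k = len(basis_vals)
    G = np.empty((k, k))
    F = np.empty(k)
    for i in range(k):
        F[i] = h * np.sum(f * basis_vals[i])
        for j in range(k):
            G[i, j] = energy_inner(basis_vals[i], basis_ders[i],
                                   basis_vals[j], basis_ders[j])
    coeffs = np.linalg.lstsq(G, F, rcond=None)[0]

u = coeffs @ np.array(basis_vals)
print("L2 error:", np.sqrt(h * np.sum((u - u_exact) ** 2)))
```

Note that each iteration solves only a low-dimensional convex subproblem (the Galerkin solve) plus a one-neuron search, which is how greedy training sidesteps the non-convex optimization over all network parameters at once.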