This paper analyzes the convergence rate of a deep Galerkin method for the weak solution (DGMW) of second-order elliptic partial differential equations on $\mathbb{R}^d$ with Dirichlet, Neumann, and Robin boundary conditions. In DGMW, one deep neural network parametrizes the PDE solution and a second network parametrizes the test function in the traditional Galerkin formulation. By properly choosing the depth and width of these two networks in terms of the number of training samples $n$, it is shown that the convergence rate of DGMW is $\mathcal{O}(n^{-1/d})$, which is the first convergence result for weak solutions. The main idea of the proof is to decompose the error of DGMW into an approximation error and a statistical error. We derive an upper bound on the approximation error in the $H^{1}$ norm and bound the statistical error via Rademacher complexity.
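To make the two-network construction and the error decomposition concrete, the following is a minimal sketch in our own notation (not taken from the paper) for the model problem $-\Delta u^{*}=f$ in a bounded domain $\Omega\subset\mathbb{R}^{d}$ with $u^{*}=0$ on $\partial\Omega$: the solution network $u_{\theta}$ and the test-function network $v_{\eta}$ enter a min-max objective built from the weak form, and the $H^{1}$ error of the trained solution $u_{\hat\theta}$ splits into the two terms bounded in the paper; the precise bilinear form, boundary terms, and constants in the actual analysis may differ.
\[
\min_{\theta}\,\max_{\eta}\;
L(\theta,\eta):=
\frac{\bigl|\int_{\Omega}\nabla u_{\theta}\cdot\nabla v_{\eta}\,dx-\int_{\Omega} f\,v_{\eta}\,dx\bigr|^{2}}{\|v_{\eta}\|_{H^{1}(\Omega)}^{2}},
\qquad
\|u_{\hat\theta}-u^{*}\|_{H^{1}(\Omega)}
\;\lesssim\;
\underbrace{\inf_{\theta}\,\|u_{\theta}-u^{*}\|_{H^{1}(\Omega)}}_{\text{approximation error}}
\;+\;
\underbrace{\sup_{\theta,\eta}\,\bigl|L(\theta,\eta)-\widehat{L}_{n}(\theta,\eta)\bigr|}_{\text{statistical error}},
\]
where $\widehat{L}_{n}$ denotes the Monte Carlo estimate of the objective from the $n$ training samples; the statistical term is the one controlled via the Rademacher complexity of the two network classes.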