Deep Ritz methods (DRM) have been shown numerically to be efficient in solving partial differential equations. In this paper, we present an $H^{1}$-norm convergence rate for the deep Ritz method applied to Laplace equations with Dirichlet boundary conditions, where the error depends explicitly on the depth and width of the deep neural networks and on the number of samples. Furthermore, the depth and width of the networks can be chosen appropriately in terms of the number of training samples. The main idea of the proof is to decompose the total error of DRM into three parts: the approximation error, the statistical error, and the error caused by the boundary penalty. We bound the approximation error in the $H^{1}$ norm with $\mathrm{ReLU}^{2}$ networks and control the statistical error via Rademacher complexity. In particular, we derive a bound on the Rademacher complexity of the non-Lipschitz composition of the gradient norm with a $\mathrm{ReLU}^{2}$ network, which is of independent interest. We also analyze the error induced by the boundary penalty method and give an a priori rule for tuning the penalty parameter.
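For concreteness, a minimal sketch of the setting under the standard boundary penalty formulation (the symbols $\Omega$, $f$, $g$, $\lambda$, $N$, $M$ below are illustrative notation, not necessarily the paper's): for the Laplace equation $-\Delta u^{*}=f$ in $\Omega$ with $u^{*}=g$ on $\partial\Omega$, the penalized Ritz energy and its empirical (Monte Carlo) counterpart read
\[
\mathcal{L}(u)=\int_{\Omega}\Big(\tfrac{1}{2}|\nabla u|^{2}-fu\Big)\,dx+\lambda\int_{\partial\Omega}(u-g)^{2}\,ds,
\]
\[
\widehat{\mathcal{L}}(u_{\theta})=\frac{|\Omega|}{N}\sum_{i=1}^{N}\Big(\tfrac{1}{2}|\nabla u_{\theta}(X_{i})|^{2}-f(X_{i})\,u_{\theta}(X_{i})\Big)+\lambda\,\frac{|\partial\Omega|}{M}\sum_{j=1}^{M}\big(u_{\theta}(Y_{j})-g(Y_{j})\big)^{2},
\]
with i.i.d. uniform samples $\{X_{i}\}_{i=1}^{N}\subset\Omega$ and $\{Y_{j}\}_{j=1}^{M}\subset\partial\Omega$, and $u_{\theta}$ ranging over a $\mathrm{ReLU}^{2}$ network class.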
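The three-part error decomposition mentioned above then takes, schematically, the following form, where $u_{\hat{\theta}}$ denotes the minimizer of $\widehat{\mathcal{L}}$ over the network class (this displays only the structure of the bound, not the paper's precise statement or rates):
\[
\|u_{\hat{\theta}}-u^{*}\|_{H^{1}(\Omega)}\;\lesssim\;\underbrace{\mathcal{E}_{\mathrm{app}}}_{\text{network approximation}}\;+\;\underbrace{\mathcal{E}_{\mathrm{sta}}}_{\text{statistical (Rademacher)}}\;+\;\underbrace{\mathcal{E}_{\mathrm{pen}}}_{\text{boundary penalty}},
\]
with $\mathcal{E}_{\mathrm{app}}$ controlled by the depth and width of the network, $\mathcal{E}_{\mathrm{sta}}$ by the numbers of samples $N$ and $M$, and $\mathcal{E}_{\mathrm{pen}}$ by the penalty parameter $\lambda$.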