Methods for solving PDEs using neural networks have recently become an important research topic. We provide an error analysis for such methods based on an a priori constraint on the $\mathcal{K}_1(\mathbb{D})$-norm of the numerical solution. We show that the resulting constrained optimization problem can be efficiently solved using a greedy algorithm, which replaces stochastic gradient descent. Following this, we show that the error arising from discretizing the energy integrals is bounded both in the deterministic case, i.e. when using numerical quadrature, and in the stochastic case, i.e. when sampling points to approximate the integrals. In the latter case, we use a Rademacher complexity analysis, and in the former we use standard numerical quadrature bounds. This extends existing results to methods which use a general dictionary of functions to learn solutions to PDEs and, importantly, gives a consistent analysis which incorporates the optimization, approximation, and generalization aspects of the problem. In addition, the Rademacher complexity analysis is simplified and generalized, which enables application to a wide range of problems.
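The abstract does not specify the greedy iteration; the following is a minimal sketch, assuming a relaxed (Frank-Wolfe style) greedy algorithm over a dictionary of ReLU ridge functions, with the dictionary argmax approximated by a random candidate pool. All function names, the budget $M$, and the step size here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Sketch (illustrative only): relaxed greedy algorithm for fitting a target
# on sample points with a dictionary of ReLU ridge functions
# g(x) = max(w.x + b, 0). The budget M caps the K_1-norm of the iterate,
# mimicking an a priori constraint on the K_1(D)-norm of the solution.

rng = np.random.default_rng(0)

def dictionary_element(x, w, b):
    """Single ReLU neuron evaluated at sample points x of shape (n, d)."""
    return np.maximum(x @ w + b, 0.0)

def relaxed_greedy(x, f, M=5.0, steps=50, pool=500):
    """Iterate u_k = (1 - a_k) u_{k-1} + a_k * M * g_k over the dictionary."""
    n, d = x.shape
    u = np.zeros(n)
    for k in range(1, steps + 1):
        r = f - u  # current residual
        # Approximate argmax over the dictionary with a random candidate
        # pool; any near-best selection oracle could be substituted here.
        ws = rng.standard_normal((pool, d))
        ws /= np.linalg.norm(ws, axis=1, keepdims=True)
        bs = rng.uniform(-1, 1, pool)
        best, best_score = None, -np.inf
        for w, b in zip(ws, bs):
            g = dictionary_element(x, w, b)
            score = abs(g @ r) / (np.linalg.norm(g) + 1e-12)
            if score > best_score:
                best_score, best = score, g * np.sign(g @ r)
        a = 2.0 / (k + 1)  # standard relaxed-greedy step size
        u = (1 - a) * u + a * M * best
    return u

# Usage: fit a smooth target on random points in the unit square.
x = rng.uniform(0, 1, (200, 2))
f = np.sin(np.pi * x[:, 0]) * np.cos(np.pi * x[:, 1])
u = relaxed_greedy(x, f)
print("relative L2 error:", np.linalg.norm(u - f) / np.linalg.norm(f))
```

Note the design choice: rather than updating all parameters by stochastic gradient descent, each step adds a single dictionary element and takes a convex combination, so the $\mathcal{K}_1$-norm of the iterate never exceeds the budget $M$.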