In this work we analyze how Gaussian and Newton-Cotes quadrature rules of varying precision, together with piecewise polynomial test functions of varying degree, affect the convergence rate of Variational Physics-Informed Neural Networks (VPINNs) under mesh refinement when solving elliptic boundary-value problems. Within a Petrov-Galerkin framework based on an inf-sup condition, we derive an a priori error estimate in the energy norm between the exact solution and a suitable high-order piecewise polynomial interpolant of the computed neural network. Numerical experiments confirm the theoretical predictions and further indicate that the error decays at the same rate when the neural network itself, rather than its interpolant, is considered. Our results suggest, somewhat counterintuitively, that for smooth solutions the best strategy to achieve a high error decay rate is to choose test functions of the lowest polynomial degree while using quadrature formulas of suitably high precision.
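To make the variational setup concrete, the following is a minimal illustrative sketch (not the paper's code) of the two ingredients the abstract combines: piecewise linear hat test functions, which are the lowest-degree choice, and element-wise Gauss-Legendre quadrature of selectable precision. The function name `vpinn_residuals` and the default parameters are assumptions for illustration; the "network" is stood in for by any callable `u` with derivative `du`.

```python
import numpy as np

def vpinn_residuals(u, du, f, n_elems=10, n_quad=5):
    """Variational residuals r_i = int u' v_i' dx - int f v_i dx for
    -u'' = f on (0,1) with u(0)=u(1)=0, tested against piecewise linear
    hat functions v_i (the lowest-degree test space) and integrated
    element-wise with n_quad-point Gauss-Legendre quadrature.
    Illustrative sketch only; names and defaults are assumptions."""
    nodes = np.linspace(0.0, 1.0, n_elems + 1)
    h = nodes[1] - nodes[0]
    xi, wi = np.polynomial.legendre.leggauss(n_quad)  # nodes/weights on [-1, 1]
    res = np.zeros(n_elems - 1)                        # one residual per interior node
    for k in range(n_elems):                           # loop over mesh elements
        a, b = nodes[k], nodes[k + 1]
        x = 0.5 * (b - a) * xi + 0.5 * (a + b)         # map quadrature nodes to [a, b]
        w = 0.5 * (b - a) * wi
        # On [a, b] the hat centered at the left node (index k) decreases
        # with slope -1/h, the hat at the right node (index k+1) increases
        # with slope +1/h; only interior nodes 1..n_elems-1 carry a residual.
        if k > 0:                # v_k = (b - x)/h,  v_k' = -1/h
            res[k - 1] += np.sum(w * (du(x) * (-1.0 / h) - f(x) * (b - x) / h))
        if k < n_elems - 1:      # v_{k+1} = (x - a)/h,  v_{k+1}' = +1/h
            res[k] += np.sum(w * (du(x) * (1.0 / h) - f(x) * (x - a) / h))
    return res
```

In a VPINN, `u` would be the neural network and these residuals would be driven to zero by the training loss; here they simply show how quadrature precision (`n_quad`) and test-function degree enter the scheme. For the exact solution of a smooth problem the residuals vanish up to quadrature error, which is the error contribution the paper's analysis tracks.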