Physics-informed neural networks (PINNs) and their variants have recently emerged as alternatives to traditional partial differential equation (PDE) solvers, but little work has focused on devising accurate numerical integration methods for neural networks (NNs), which is essential for obtaining accurate solutions. In this work, we propose adaptive quadratures for the accurate integration of neural networks and apply them to loss functions appearing in low-dimensional PDE discretisations. We show that, at opposite ends of the spectrum, continuous piecewise linear (CPWL) activation functions enable one to bound the integration error, while smooth activations ease the convergence of the optimisation problem. We strike a balance by considering a CPWL approximation of a smooth activation function. The CPWL activation is used to obtain an adaptive decomposition of the domain into regions where the network is almost linear, and we derive an adaptive global quadrature from the resulting mesh. The loss function is then obtained by evaluating the smooth network (together with other quantities, e.g., the forcing term) at the quadrature points. We propose a method to approximate a class of smooth activations by CPWL functions and show that it has a quadratic convergence rate. We then derive an upper bound on the overall integration error of our proposed adaptive quadrature. The benefits of our quadrature are evaluated on strong and weak formulations of the Poisson equation in one and two dimensions. Our numerical experiments suggest that, compared to Monte Carlo integration, our adaptive quadrature makes the convergence of NNs faster and more robust to parameter initialisation, while requiring significantly fewer integration points and keeping similar training times.
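To make the construction concrete, below is a minimal one-dimensional sketch of the idea, not the paper's implementation: the knots of a CPWL interpolant of tanh are pulled back through each neuron of a one-hidden-layer network to mesh the domain into nearly linear cells, and the smooth network is then integrated with a composite two-point Gauss rule on that mesh. The uniform knot placement, the architecture, and the function names (`cpwl_breakpoints`, `network_breakpoints`, `adaptive_quadrature`) are illustrative assumptions.

```python
import numpy as np

# Hypothetical 1D sketch of an adaptive quadrature built from a CPWL
# approximation of a smooth activation; the uniform knots and the
# one-hidden-layer network below are illustrative choices only.

def cpwl_breakpoints(lo=-4.0, hi=4.0, n=17):
    """Knots of a CPWL interpolant of tanh on [lo, hi] (uniform for brevity).

    Outside this range tanh is nearly constant, so no knots are needed there.
    """
    return np.linspace(lo, hi, n)

def network_breakpoints(W, b, domain, knots):
    """Pull each knot t back through every neuron: w*x + b = t  =>  x = (t - b)/w."""
    a, c = domain
    xs = [(t - bi) / wi for wi, bi in zip(W, b) if wi != 0.0 for t in knots]
    xs = [x for x in xs if a < x < c]
    return np.array(sorted({a, c, *xs}))

def adaptive_quadrature(f, mesh):
    """Composite 2-point Gauss-Legendre rule on the CPWL-induced mesh."""
    g = 1.0 / np.sqrt(3.0)                     # reference nodes +-g on [-1, 1]
    total = 0.0
    for a, c in zip(mesh[:-1], mesh[1:]):
        h = 0.5 * (c - a)                      # cell half-width
        for xi in (-g, g):
            # both reference weights equal 1, so each node contributes h*f(x)
            total += h * f(a + h * (1.0 + xi))
    return total

# One-hidden-layer smooth network u(x) = sum_i v_i * tanh(w_i * x + b_i)
rng = np.random.default_rng(0)
W, b, v = rng.normal(size=5), rng.normal(size=5), rng.normal(size=5)
u = lambda x: np.tanh(W * x + b) @ v

mesh = network_breakpoints(W, b, (0.0, 1.0), cpwl_breakpoints())
print(f"{len(mesh) - 1} cells, integral of u over [0, 1] ~ {adaptive_quadrature(u, mesh):.6f}")
```

In higher dimensions the same preimages are hyperplanes rather than points, so the nearly linear cells come from the induced hyperplane arrangement instead of a sorted list of breakpoints.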