Physics-informed neural networks (PINNs) and their variants have recently emerged as alternatives to traditional partial differential equation (PDE) solvers, but little attention has been paid to devising accurate numerical integration methods for neural networks (NNs), which is essential for obtaining accurate solutions. In this work, we propose adaptive quadratures for the accurate integration of neural networks and apply them to loss functions appearing in low-dimensional PDE discretisations. We show that, at opposite ends of the spectrum, continuous piecewise linear (CPWL) activation functions enable one to bound the integration error, while smooth activations ease the convergence of the optimisation problem. We strike a balance by considering a CPWL approximation of a smooth activation function. The CPWL activation is used to obtain an adaptive decomposition of the domain into regions where the network is almost linear, and we derive an adaptive global quadrature from this mesh. The loss function is then obtained by evaluating the smooth network (together with other quantities, e.g., the forcing term) at the quadrature points. We propose a method to approximate a class of smooth activations by CPWL functions and show that it has a quadratic convergence rate. We then derive an upper bound on the overall integration error of the proposed adaptive quadrature. The benefits of our quadrature are evaluated on strong and weak formulations of the Poisson equation in one and two dimensions. Our numerical experiments suggest that, compared to Monte Carlo integration, our adaptive quadrature makes the convergence of NNs faster and more robust to parameter initialisation, while requiring significantly fewer integration points and keeping training times comparable.
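To make the pipeline described above concrete, the following is a minimal one-dimensional sketch, not the authors' implementation, under simplifying assumptions of our own: a single-hidden-layer tanh network, a uniform-knot CPWL surrogate of tanh (a stand-in for the paper's quadratically convergent approximation), and a two-point Gauss rule on each near-linear region. All identifiers (`u`, `knots`, `bps`, `xq`, `wq`) are hypothetical.

```python
# Minimal 1D sketch of the adaptive-quadrature idea, assuming a
# single-hidden-layer tanh network.  All names and choices below
# (uniform CPWL knots, a two-point Gauss rule per region) are
# illustrative, not the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)

# Toy smooth network: u(x) = sum_i c_i * tanh(w_i * x + b_i).
n_hidden = 8
w = rng.normal(size=n_hidden)   # nonzero with probability 1
b = rng.normal(size=n_hidden)
c = rng.normal(size=n_hidden)

def u(x):
    """Evaluate the smooth network at the points x (shape (N,))."""
    return np.tanh(np.outer(x, w) + b) @ c

# CPWL surrogate of tanh: uniform knots on [-T, T].
T, n_knots = 4.0, 33
knots = np.linspace(-T, T, n_knots)

# Breakpoints of the CPWL *network* in x: neuron i maps activation knot
# t to x = (t - b_i) / w_i.  Between consecutive breakpoints the CPWL
# network is exactly linear, so these intervals define the adaptive mesh.
a_dom, b_dom = 0.0, 1.0
bps = ((knots[None, :] - b[:, None]) / w[:, None]).ravel()
bps = np.concatenate([bps, [a_dom, b_dom]])
bps = np.unique(bps[(bps >= a_dom) & (bps <= b_dom)])  # sorted, in-domain

# Two-point Gauss-Legendre rule on each near-linear region (exact for
# cubics, hence exact for squares of linear functions).
gx = np.array([-1.0, 1.0]) / np.sqrt(3.0)  # reference nodes on [-1, 1]
gw = np.array([1.0, 1.0])                  # reference weights

mid = 0.5 * (bps[1:] + bps[:-1])
half = 0.5 * (bps[1:] - bps[:-1])
xq = (mid[:, None] + half[:, None] * gx).ravel()  # global nodes
wq = (half[:, None] * gw).ravel()                 # global weights

# A loss term is assembled by evaluating the *smooth* network at the
# adaptive quadrature points, e.g. the integral of u(x)^2 over [0, 1]:
integral = wq @ u(xq) ** 2
print(f"{xq.size} quadrature points, integral of u^2 ≈ {integral:.6f}")
```

In this sketch the per-region Gauss rule integrates the square of the CPWL surrogate exactly, so the remaining integration error stems only from the CPWL approximation of the activation, which is the quantity the paper's upper bound controls.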