Physics-Informed Neural Networks (PINNs) are a class of deep neural networks that are trained, using automatic differentiation, to compute the response of systems governed by partial differential equations (PDEs). The training of PINNs is simulation-free and does not require any training dataset obtained from numerical PDE solvers. Instead, it requires only the physical problem description, including the governing laws of physics, the domain geometry, the initial/boundary conditions, and the material properties. This training usually involves solving a non-convex optimization problem using variants of the stochastic gradient descent method, with the gradient of the loss function approximated on a batch of collocation points selected randomly in each iteration according to a uniform distribution. Despite the success of PINNs in accurately solving a wide variety of PDEs, the method still needs improvement in terms of computational efficiency. To this end, in this paper, we study the performance of an importance sampling approach for the efficient training of PINNs. Using numerical examples together with theoretical evidence, we show that sampling the collocation points in each training iteration according to a distribution proportional to the loss function improves the convergence behavior of PINN training. Additionally, we show that providing a piecewise-constant approximation to the loss function, for faster importance sampling, can further improve the training efficiency. This importance sampling approach is straightforward and easy to implement in existing PINN codes, and it does not introduce any new hyperparameters to calibrate. The numerical examples include elasticity, diffusion, and plane-stress problems, through which we numerically verify the accuracy and efficiency of the importance sampling approach compared to the predominant uniform sampling approach.
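The sampling scheme described above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it assumes a toy pointwise loss in place of the true squared PDE residual (which a real PINN would evaluate via automatic differentiation), and it shows both direct loss-proportional sampling from a candidate pool and the piecewise-constant variant, where the loss is averaged over a coarse grid, a cell is drawn proportionally to its mean loss, and a point is then drawn uniformly inside that cell. All names (`pointwise_loss`, `n_cells`, the pool size) are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Candidate pool of collocation points in the unit square (toy domain).
pool = rng.uniform(0.0, 1.0, size=(4096, 2))

def pointwise_loss(x):
    # Stand-in for the squared PDE residual at each point; in an actual
    # PINN this would come from automatic differentiation of the network.
    return np.exp(-20.0 * np.sum((x - 0.5) ** 2, axis=1)) + 1e-3

loss = pointwise_loss(pool)

# Importance sampling: pick the batch with probability proportional to loss.
probs = loss / loss.sum()
batch_idx = rng.choice(pool.shape[0], size=128, replace=True, p=probs)
batch = pool[batch_idx]

# Piecewise-constant variant: average the loss on an n_cells x n_cells grid,
# sample a cell proportional to its mean loss, then draw uniformly within it.
n_cells = 8
cell_idx = np.minimum((pool * n_cells).astype(int), n_cells - 1)
flat = cell_idx[:, 0] * n_cells + cell_idx[:, 1]
counts = np.maximum(np.bincount(flat, minlength=n_cells**2), 1)
cell_loss = np.bincount(flat, weights=loss, minlength=n_cells**2) / counts
cell_probs = cell_loss / cell_loss.sum()
chosen = rng.choice(n_cells**2, size=128, p=cell_probs)
cells = np.stack([chosen // n_cells, chosen % n_cells], axis=1)
batch_pc = (cells + rng.uniform(size=(128, 2))) / n_cells
```

The piecewise-constant route avoids evaluating the loss on the full pool at every iteration once the cell averages are cached, which is the source of the additional speedup claimed in the abstract.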