We present a new technique for the accelerated training of physics-informed neural networks (PINNs): discretely-trained PINNs (DT-PINNs). The repeated computation of partial derivative terms in the PINN loss functions via automatic differentiation during training is known to be computationally expensive, especially for higher-order derivatives. DT-PINNs are trained by replacing these exact spatial derivatives with high-order accurate numerical discretizations computed using meshless radial basis function-finite differences (RBF-FD) and applied via sparse matrix-vector multiplication. The use of RBF-FD allows DT-PINNs to be trained even on point-cloud samples placed on irregular domain geometries. Additionally, though traditional PINNs (vanilla-PINNs) are typically stored and trained in 32-bit floating-point (fp32) on the GPU, we show that for DT-PINNs, using fp64 on the GPU leads to significantly faster training times than fp32 vanilla-PINNs with comparable accuracy. We demonstrate the efficiency and accuracy of DT-PINNs via a series of experiments. First, we explore the effect of network depth on both numerical and automatic differentiation of a neural network with random weights and show that RBF-FD approximations of third-order accuracy and above are more efficient while being sufficiently accurate. We then compare DT-PINNs to vanilla-PINNs on both linear and nonlinear Poisson equations and show that DT-PINNs achieve similar losses with 2-4x faster training times on a consumer GPU. Finally, we demonstrate that similar results can be obtained for the PINN solution to the heat equation (a space-time problem) by discretizing the spatial derivatives using RBF-FD and using automatic differentiation for the temporal derivative. Our results show that fp64 DT-PINNs offer a superior cost-accuracy profile to fp32 vanilla-PINNs.
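To make the core idea concrete, the following is a minimal PyTorch sketch (not the authors' implementation) of a DT-PINN-style loss for a Poisson problem $-\Delta u = f$ with Dirichlet data $g$: a precomputed sparse fp64 RBF-FD differentiation matrix is applied to the network outputs by sparse matrix-vector multiplication in place of autodiff. All names here (`dt_pinn_loss`, `model`, `L`, `f`, `g`) are illustrative assumptions.

```python
import torch

def dt_pinn_loss(model, X_interior, L, f, X_boundary, g):
    """Hypothetical DT-PINN-style loss sketch.

    X_interior: (N, d) interior collocation points (fp64)
    L:          (N, N) sparse fp64 RBF-FD Laplacian matrix (precomputed offline)
    f:          (N,)   source term sampled at X_interior
    X_boundary: (M, d) boundary points; g: (M,) Dirichlet values
    """
    u = model(X_interior).squeeze(-1)                          # network solution at interior points
    # Sparse matrix-vector product replaces autodiff for the spatial derivatives.
    lap_u = torch.sparse.mm(L, u.unsqueeze(-1)).squeeze(-1)    # approximate Laplacian of u
    pde_residual = lap_u + f                                   # residual of -Δu = f
    bc_residual = model(X_boundary).squeeze(-1) - g            # boundary-condition residual
    return pde_residual.pow(2).mean() + bc_residual.pow(2).mean()
```

Under this sketch, the model and all tensors would be kept in fp64 (e.g. `model.double()`), matching the paper's observation that fp64 DT-PINN training on the GPU can still be faster than fp32 vanilla-PINN training.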