Our recent intensive study has shown that physics-informed neural networks (PINNs) tend to behave as local approximators after training. This observation motivates the novel physics-informed radial basis network (PIRBN), which maintains this local property throughout the entire training process. In contrast to deep neural networks, a PIRBN comprises only one hidden layer with a radial basis "activation" function. Under appropriate conditions, we demonstrate that training PIRBNs with gradient descent methods can converge to Gaussian processes. We also study the training dynamics of PIRBNs via neural tangent kernel (NTK) theory and conduct comprehensive investigations into initialisation strategies for PIRBNs. Numerical examples demonstrate that PIRBN is more effective and efficient than PINN in solving PDEs with high-frequency features and ill-posed computational domains. Moreover, existing PINN numerical techniques, such as adaptive learning, decomposition and different types of loss functions, are also applicable to PIRBN. The programs that can reproduce all numerical results are available at https://github.com/JinshuaiBai/PIRBN.
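As a minimal sketch of the architecture described above, the following illustrates a single-hidden-layer network with Gaussian radial basis "activations", u(x) = Σᵢ aᵢ·exp(−b²(x − cᵢ)²). All names, sizes, and initial values here are illustrative assumptions for exposition, not the authors' implementation (see the linked repository for that):

```python
import numpy as np

# Illustrative sketch of a PIRBN-style network: one hidden layer whose
# "activations" are Gaussian radial basis functions. Centres, the shape
# parameter, and weights below are assumed values, not the paper's choices.

rng = np.random.default_rng(0)

n_neurons = 50
c = np.linspace(0.0, 1.0, n_neurons)   # RBF centres spread over the domain
b = 10.0                               # shape parameter; initialisation of b matters
a = rng.normal(0.0, 1.0, n_neurons)    # output-layer weights (trainable in practice)

def pirbn(x):
    """Evaluate the single-hidden-layer RBF network at 1-D points x."""
    # phi has shape (len(x), n_neurons): each column is one localised basis
    phi = np.exp(-(b ** 2) * (x[:, None] - c[None, :]) ** 2)
    return phi @ a

x = np.linspace(0.0, 1.0, 101)
u = pirbn(x)
print(u.shape)  # (101,)
```

Because each Gaussian basis decays rapidly away from its centre cᵢ, every hidden neuron influences only a local neighbourhood of the input, which is the local property the abstract refers to; in a PINN setting the network output would additionally be differentiated and penalised against the governing PDE residual during training.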