Our recent study found that physics-informed neural networks (PINNs) tend to become local approximators after training. This observation motivates the physics-informed radial basis network (PIRBN), a novel architecture that maintains this local property throughout the entire training process. Unlike deep neural networks, a PIRBN comprises only one hidden layer with a radial basis "activation" function. We demonstrate that, under appropriate conditions, training a PIRBN with gradient descent methods converges to a Gaussian process. We also study the training dynamics of PIRBNs via neural tangent kernel (NTK) theory and conduct a comprehensive investigation of initialisation strategies. Numerical examples demonstrate that PIRBNs are more effective and efficient than PINNs in solving PDEs with high-frequency features and ill-posed computational domains. Moreover, existing PINN numerical techniques, such as adaptive learning, decomposition, and different types of loss functions, remain applicable to PIRBNs. Programs that reproduce all numerical results can be found at https://github.com/JinshuaiBai/PIRBN.
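To make the architecture concrete, the sketch below shows a single-hidden-layer radial basis network of the kind the abstract describes: each hidden neuron applies a Gaussian radial basis function centred at a fixed point, so it responds only to nearby inputs (the local property). All names, the 1D setting, and the hyper-parameters are illustrative assumptions for this sketch, not the authors' released code; see the linked repository for the actual implementation.

```python
import numpy as np

# Minimal sketch of a one-hidden-layer radial basis network (assumed 1D input).
class RBNSketch:
    def __init__(self, n_neurons, x_min, x_max, b=10.0):
        # Centres spread uniformly over the problem domain; each neuron
        # responds only to inputs near its own centre (local property).
        self.centres = np.linspace(x_min, x_max, n_neurons)  # shape (n,)
        self.b = b                                           # RBF width parameter
        self.w = np.zeros(n_neurons)                         # trainable output weights

    def basis(self, x):
        # Gaussian radial basis "activation": phi_i(x) = exp(-(b * (x - c_i))^2)
        x = np.atleast_1d(x)
        return np.exp(-((self.b * (x[:, None] - self.centres[None, :])) ** 2))

    def __call__(self, x):
        # Network output is a weighted sum of the local basis responses.
        return self.basis(x) @ self.w

# Usage: evaluate the network at a few points. In a physics-informed setting,
# the PDE residual and boundary terms would be assembled into a loss and the
# weights w trained by gradient descent, as in a standard PINN workflow.
net = RBNSketch(n_neurons=61, x_min=0.0, x_max=1.0)
print(net(np.array([0.25, 0.5, 0.75])))  # array of 3 outputs
```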