Approximating the numerical solutions of Partial Differential Equations (PDEs) using neural networks is a promising application of deep learning. The smooth architecture of a fully connected neural network is well suited to finding the solutions of PDEs; the corresponding loss function can also be designed intuitively and guarantees convergence for various kinds of PDEs. However, the rate of convergence has been considered a weakness of this approach. This paper introduces a novel loss function for training neural networks to find the solutions of PDEs, making the training substantially more efficient. Inspired by recent studies that incorporate derivative information into the training of neural networks, we develop a loss function that guides a neural network to reduce the error in the corresponding Sobolev space. Surprisingly, a simple modification of the loss function can make the training process similar to Sobolev Training, even though solving PDEs with neural networks is not a fully supervised learning task. We provide several theoretical justifications for this approach for the viscous Burgers equation and the kinetic Fokker--Planck equation. We also present several simulation results showing that, compared with the traditional $L^2$ loss function, the proposed loss function guides the neural network to significantly faster convergence. Moreover, we provide empirical evidence that the proposed loss function, together with iterative sampling techniques, performs better in solving high-dimensional PDEs.
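As a rough illustration of the idea (the paper's precise formulation may differ), a first-order Sobolev-type objective augments the usual $L^2$ residual loss with the $L^2$ norm of the residual's spatial derivatives. Writing $R[u_\theta]$ for the PDE residual of the network $u_\theta$ (e.g., $R[u_\theta] = \partial_t u_\theta + u_\theta\,\partial_x u_\theta - \nu\,\partial_x^2 u_\theta$ for the viscous Burgers equation), such a loss reads

$$ \mathcal{L}_{H^1}(\theta) \;=\; \big\| R[u_\theta] \big\|_{L^2(\Omega)}^2 \;+\; \big\| \nabla_x R[u_\theta] \big\|_{L^2(\Omega)}^2, $$

so that training penalizes the mismatch of derivatives as well, mirroring Sobolev Training; the symbols $\mathcal{L}_{H^1}$, $R$, and $\Omega$ are our own illustrative notation and are not taken from the paper itself.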