Physics-Informed Neural Networks (PINNs) are a promising application of deep learning. The smooth architecture of a fully connected neural network is well suited to approximating solutions of PDEs, and the corresponding loss function can be designed intuitively and guarantees convergence for various kinds of PDEs. However, the rate of convergence has been regarded as a weakness of this approach. This paper proposes Sobolev-PINNs, a novel loss function for training PINNs that makes the training substantially more efficient. Inspired by recent studies that incorporate derivative information into the training of neural networks, we develop a loss function that guides a neural network to reduce the error in the corresponding Sobolev space. Surprisingly, a simple modification of the loss function makes the training process similar to \textit{Sobolev Training}, even though PINN training is not a fully supervised learning task. We provide several theoretical justifications that the proposed loss functions upper bound the error in the corresponding Sobolev spaces for the viscous Burgers equation and the kinetic Fokker--Planck equation. We also present several simulation results showing that, compared with the traditional $L^2$ loss function, the proposed loss function guides the neural network to significantly faster convergence. Moreover, we provide empirical evidence that the proposed loss function, combined with iterative sampling techniques, performs better in solving high-dimensional PDEs.
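To make the idea concrete, one illustrative form of such a loss (a sketch for exposition; the paper's actual loss functions are equation-specific) augments the standard $L^2$ residual loss with a derivative term, so that minimizing it controls the residual in an $H^1$-type Sobolev norm rather than only in $L^2$:
\begin{equation*}
\mathcal{L}_{\mathrm{Sobolev}}(\theta)
\;=\;
\big\| \mathcal{R}[u_\theta] \big\|_{L^2(\Omega)}^2
\;+\;
\big\| \nabla_x \mathcal{R}[u_\theta] \big\|_{L^2(\Omega)}^2,
\end{equation*}
where $u_\theta$ denotes the neural network approximation and $\mathcal{R}[u_\theta]$ the PDE residual; the additional gradient term is what distinguishes a Sobolev-type loss from the traditional $L^2$ loss.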