Physics-informed neural networks (PINNs) [4, 10] are an approach for solving boundary value problems based on partial differential equations (PDEs). The key idea of PINNs is to approximate the PDE solution with a neural network and to incorporate the PDE residual as well as the boundary conditions into the loss function used to train it. This yields a simple, mesh-free approach for solving PDE-based problems. A key limitation of PINNs, however, is their loss of accuracy and efficiency on problems with larger domains and more complex, multi-scale solutions. Finite basis physics-informed neural networks (FBPINNs) [8] are a more recent approach that uses ideas from domain decomposition to accelerate the training of PINNs and to improve their accuracy. In this work, we show how Schwarz-like additive, multiplicative, and hybrid iteration methods for training FBPINNs can be developed. We present numerical experiments on the influence of these different training strategies on convergence and accuracy. Furthermore, we propose and evaluate a preliminary implementation of coarse space correction for FBPINNs.
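The composite loss described above can be sketched on a toy model problem. This is a minimal illustration, not the paper's method: the 1D Poisson problem u''(x) = f(x) on (0, 1) with u(0) = u(1) = 0 and f(x) = -pi^2 sin(pi x), the collocation count, and the finite-difference step h are all illustrative choices, and a real PINN would obtain the derivatives by automatic differentiation rather than finite differences.

```python
import math

# Illustrative right-hand side: u''(x) = f(x) with exact solution sin(pi*x).
def f(x):
    return -math.pi ** 2 * math.sin(math.pi * x)

def pinn_loss(u, n_collocation=50, h=1e-3):
    """PINN-style composite loss: mean squared PDE residual at interior
    collocation points plus the squared boundary-condition mismatch.
    The second derivative is approximated here by central finite
    differences purely for self-containedness; an actual PINN would
    differentiate the network with autodiff."""
    xs = [(i + 0.5) / n_collocation for i in range(n_collocation)]
    residual = sum(
        ((u(x + h) - 2.0 * u(x) + u(x - h)) / h ** 2 - f(x)) ** 2 for x in xs
    ) / n_collocation
    boundary = u(0.0) ** 2 + u(1.0) ** 2
    return residual + boundary

exact = lambda x: math.sin(math.pi * x)  # satisfies both the PDE and the BCs
wrong = lambda x: x * (1.0 - x)          # satisfies the BCs but not the PDE
```

Evaluating `pinn_loss(exact)` gives a value near zero (only finite-difference truncation error remains), while `pinn_loss(wrong)` is large; training a PINN means driving this loss toward zero over the network's parameters.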
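The domain-decomposition idea behind FBPINNs can likewise be sketched in one dimension: each overlapping subdomain carries its own small network, modulated by a smooth window function, and the windows are normalised to form a partition of unity so that the global solution is a window-weighted sum of the subdomain networks. The subdomain centers, window width, and cosine window shape below are illustrative assumptions, not the paper's configuration.

```python
import math

def window(x, center, width):
    """Smooth bump supported around a subdomain center (the exact shape,
    e.g. cosine- or sigmoid-based, is a free modelling choice)."""
    t = (x - center) / width
    return math.cos(math.pi / 2.0 * max(-1.0, min(1.0, t))) ** 2

# Five overlapping subdomains on [0, 1]; width exceeds the spacing so
# that neighbouring windows overlap and every point is covered.
centers = [0.0, 0.25, 0.5, 0.75, 1.0]
width = 0.35

def partition_of_unity(x):
    """Normalised window weights at x; they sum to one by construction."""
    ws = [window(x, c, width) for c in centers]
    total = sum(ws)
    return [w / total for w in ws]
```

In an FBPINN each weight would multiply the output of the corresponding subdomain network, so each network only has to learn the solution locally, which is what enables the Schwarz-like subdomain-wise training iterations studied in this work.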