Physics-informed neural networks (PINNs) have recently become a powerful tool for solving partial differential equations (PDEs). However, finding a set of neural network parameters that satisfies a given PDE can be challenging, and the solution is non-unique, owing to the complexity of the loss landscape that must be traversed. Although a variety of multi-task learning and transfer learning approaches have been proposed to overcome these issues, there is no incremental training procedure for PINNs that can effectively mitigate such training challenges. We propose incremental PINNs (iPINNs), which learn multiple tasks (equations) sequentially without additional parameters for new tasks and improve performance for every equation in the sequence. Our approach learns multiple PDEs starting from the simplest one by creating a dedicated subnetwork for each PDE and allowing these subnetworks to overlap with those learned previously. We demonstrate that previously learned subnetworks provide a good initialization for a new equation when the PDEs share similarities. We also show that iPINNs achieve lower prediction error than regular PINNs in two different scenarios: (1) learning a family of equations (e.g., the 1-D convection PDE); and (2) learning PDEs that result from a combination of processes (e.g., the 1-D reaction-diffusion PDE). The ability to learn all problems with a single network, together with better generalization on more complex PDEs than regular PINNs, opens new avenues in this field.
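To make the two ingredients of this description concrete, the following is a minimal, illustrative JAX sketch, not the authors' implementation: it composes a PINN residual loss for a 1-D convection equation u_t + beta * u_x = 0 with a per-task binary mask that selects a subnetwork of shared weights. All names, network sizes, and the masking scheme (mlp, convection_residual, pinn_loss, apply_mask, beta) are assumptions made for illustration only.

```python
# Illustrative sketch (assumed, not the paper's code): PINN residual loss for
# the 1-D convection equation u_t + beta * u_x = 0, plus a per-task binary mask
# that selects an overlapping subnetwork of shared weights.
import jax
import jax.numpy as jnp


def mlp(params, x, t):
    """Tiny fully connected network u_theta(x, t); returns a scalar prediction."""
    h = jnp.array([x, t])
    for W, b in params[:-1]:
        h = jnp.tanh(W @ h + b)
    W, b = params[-1]
    return (W @ h + b)[0]


def convection_residual(params, x, t, beta=1.0):
    """PDE residual u_t + beta * u_x at one collocation point, via autodiff."""
    u_x = jax.grad(mlp, argnums=1)(params, x, t)
    u_t = jax.grad(mlp, argnums=2)(params, x, t)
    return u_t + beta * u_x


def pinn_loss(params, xs, ts, beta=1.0):
    """Mean squared PDE residual (initial/boundary terms omitted for brevity)."""
    res = jax.vmap(lambda x, t: convection_residual(params, x, t, beta))(xs, ts)
    return jnp.mean(res ** 2)


def apply_mask(params, masks):
    """Zero out weights that are not part of the current task's subnetwork.
    In an incremental setting, a new task's mask may overlap with earlier masks."""
    return [(W * m, b) for (W, b), m in zip(params, masks)]


# Hypothetical usage: initialize shared weights, pick a mask per task,
# and train only the masked subnetwork on that task's collocation points.
key = jax.random.PRNGKey(0)
sizes = [2, 16, 16, 1]
params = []
for n_in, n_out in zip(sizes[:-1], sizes[1:]):
    key, sub = jax.random.split(key)
    params.append((0.1 * jax.random.normal(sub, (n_out, n_in)), jnp.zeros(n_out)))
masks = [jnp.ones_like(W) for W, _ in params]  # placeholder: keeps all connections
xs = jnp.linspace(0.0, 1.0, 64)
ts = jnp.linspace(0.0, 1.0, 64)
print(pinn_loss(apply_mask(params, masks), xs, ts, beta=1.0))
```

The sketch only shows how a masked subnetwork composes with the PDE residual loss; in the approach summarized above, the per-equation subnetworks are obtained during sequential training and may share connections with earlier ones, rather than being fixed all-ones masks as in this placeholder.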