Physics-informed neural networks (PINNs) have emerged as a powerful tool for solving partial differential equations (PDEs) in a variety of domains. While previous research in PINNs has mainly focused on constructing and balancing loss functions during training to avoid poor minima, the effect of sampling collocation points on the performance of PINNs has largely been overlooked. In this work, we find that the performance of PINNs can vary significantly with different sampling strategies, and that using a fixed set of collocation points can be quite detrimental to the convergence of PINNs to the correct solution. In particular, (1) we hypothesize that the training of PINNs relies on successful "propagation" of the solution from initial and/or boundary condition points to interior points, and that PINNs with poor sampling strategies can get stuck at trivial solutions if there are \textit{propagation failures}. (2) We demonstrate that propagation failures are characterized by highly imbalanced PDE residual fields, where very high residuals are observed over very narrow regions. (3) To mitigate propagation failures, we propose a novel \textit{evolutionary sampling} (Evo) method that can incrementally accumulate collocation points in regions of high PDE residuals. We further provide an extension of Evo that respects the principle of causality when solving time-dependent PDEs. We empirically demonstrate the efficacy and efficiency of our proposed methods on a variety of PDE problems.
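To make the core idea of evolutionary sampling concrete, the following is a minimal sketch of one resampling step: collocation points whose PDE residual is above a threshold are retained, and the remainder are refilled by uniform sampling over the domain. The function name `evolutionary_sample`, the mean-residual threshold, and the dummy residual function are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def evolutionary_sample(points, residual_fn, n_points, domain, rng):
    """One illustrative Evo-style step: keep high-residual collocation
    points and refill the rest uniformly over the domain.

    points      -- current collocation points, shape (n_points, dim)
    residual_fn -- maps points to nonnegative PDE residual magnitudes
    domain      -- (low, high) bounds for uniform resampling
    """
    r = residual_fn(points)
    # Retain points with above-average residual (one plausible threshold choice).
    retained = points[r > r.mean()]
    # Refill the population back to n_points with fresh uniform samples.
    n_new = n_points - len(retained)
    lo, hi = domain
    fresh = rng.uniform(lo, hi, size=(n_new, points.shape[1]))
    return np.concatenate([retained, fresh], axis=0)
```

Iterating this step concentrates collocation points in high-residual regions while the uniform refill keeps exploring the rest of the domain, which matches the "incremental accumulation" behavior described above.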