This paper empirically studies commonly observed training difficulties of Physics-Informed Neural Networks (PINNs) on dynamical systems. Our results indicate that fixed points, which are inherent to these systems, play a key role in the optimization of the physics loss function embedded in PINNs. We observe that the loss landscape exhibits local optima that are shaped by the presence of fixed points. We find that these local optima contribute to the complexity of the physics loss optimization, which can explain common training difficulties and the resulting nonphysical predictions. Under certain settings, e.g., initial conditions close to fixed points or long simulation times, we show that those local optima can even become better than the one corresponding to the desired solution.