Learning the solution of partial differential equations (PDEs) with a neural network (known in the literature as a physics-informed neural network, PINN) is an attractive alternative to traditional solvers due to its elegance, greater flexibility, and the ease of incorporating observed data. However, training PINNs is notoriously difficult in practice. One problem is the existence of multiple simple (but wrong) solutions that attract PINNs when the solution interval is too large. In this paper, we propose to expand the solution interval gradually so that the PINN converges to the correct solution. To find a good schedule for the solution-interval expansion, we train an ensemble of PINNs. The idea is that all ensemble members converge to the same solution in the vicinity of observed data (e.g., initial conditions), while they may be pulled towards different wrong solutions farther away from the observations. Therefore, we use the ensemble agreement as the criterion for including new points in the loss derived from the PDE. We show experimentally that the proposed method can improve the accuracy of the found solution.
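The point-selection criterion described above can be sketched as follows. This is a minimal illustration under our own assumptions (the spread measure, the tolerance `delta`, and the toy "ensemble predictions" are all hypothetical choices for illustration, not the paper's exact implementation): a candidate collocation point is admitted into the PDE loss only when the ensemble members agree on the solution value there.

```python
import numpy as np

def agreement_mask(ensemble_preds, delta):
    """Return a boolean mask over candidate points.

    ensemble_preds : array of shape (n_members, n_points), each row the
        predictions of one PINN in the ensemble at the candidate points.
    delta : tolerance; a point is included when the ensemble spread
        (max minus min prediction) is below delta.
    """
    spread = ensemble_preds.max(axis=0) - ensemble_preds.min(axis=0)
    return spread < delta

# Toy example: three "members" that agree near the observed data at x = 0
# and diverge for larger x, mimicking pulls towards different wrong solutions.
x = np.linspace(0.0, 2.0, 9)
preds = np.stack([
    np.exp(-x),                 # member 1
    np.exp(-x) + 0.3 * x**2,    # member 2, drifting upward far from data
    np.exp(-x) - 0.2 * x**2,    # member 3, drifting downward far from data
])
mask = agreement_mask(preds, delta=0.5)
print(x[mask])  # only points close to the data are included at this stage
```

As training proceeds and the members come to agree farther from the observations, the mask admits more distant points, which realizes the gradual expansion of the solution interval.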