Physics-informed neural networks (PINNs) have emerged as an effective technique for solving PDEs across a wide range of domains. It has been observed, however, that the performance of PINNs can vary dramatically with the sampling procedure: a fixed set of a priori chosen training points may fail to capture the effective solution region, especially for problems with singularities. To overcome this issue, we present in this work an adaptive strategy, termed failure-informed PINNs (FI-PINNs), which is inspired by the viewpoint of reliability analysis. The key idea is to define an effective failure probability based on the residual; then, with the aim of placing more samples in the failure region, FI-PINNs employ a failure-informed enrichment technique to adaptively add new collocation points to the training set, which dramatically improves the numerical accuracy. In short, much like adaptive finite element methods, the proposed FI-PINNs adopt the failure probability as a posterior error indicator to generate new training points. We prove rigorous error bounds for FI-PINNs and illustrate their performance on several problems.
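The enrichment loop described above can be sketched as follows. This is a minimal, hedged illustration, not the authors' implementation: the trained PINN's PDE residual is mimicked by a toy function that is large near a singularity at x = 0, and all names (`residual`, `tol`, the failure-probability estimate) are illustrative assumptions.

```python
import numpy as np

# Stand-in for the PDE residual |r(x)| of a trained PINN; in a real
# FI-PINN this comes from the network, and retraining after each
# enrichment round would shrink the failure region.
def residual(x):
    return 1.0 / (np.abs(x) + 0.05)  # large near the singularity at x = 0

rng = np.random.default_rng(0)
tol = 2.0                        # residual tolerance defining "failure"
train = rng.uniform(-1, 1, 50)   # initial (a priori chosen) collocation points

for it in range(3):
    # Monte Carlo estimate of the failure probability P(|r(x)| > tol),
    # used here as the posterior error indicator.
    candidates = rng.uniform(-1, 1, 1000)
    fail = candidates[residual(candidates) > tol]
    p_fail = fail.size / candidates.size
    if p_fail < 0.01:            # stopping criterion on the indicator
        break
    # Failure-informed enrichment: add points from the failure region
    # to the training set (a real run would now retrain the PINN).
    train = np.concatenate([train, fail[:20]])

print(len(train), round(p_fail, 3))
```

Because the toy residual is fixed rather than retrained, the failure probability stays roughly constant here; the point is only to show how the residual-based indicator drives where new collocation points are placed.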