Physics-informed neural networks (PINNs) have emerged as an effective technique for solving PDEs in a wide range of domains. Recent research has shown, however, that the performance of PINNs can vary dramatically with the sampling procedure, and that relying on a fixed set of training points can be detrimental to the convergence of PINNs to the correct solution. In this paper, we present an adaptive approach termed failure-informed PINNs (FI-PINNs), which is inspired by the viewpoint of reliability analysis. The basic idea is to define a failure probability, based on the residual, that represents the reliability of the PINNs. With the aim of placing more samples in the failure region and fewer samples in the safe region, FI-PINNs employ a failure-informed enrichment technique to incrementally and adaptively add new collocation points to the training set; the enriched training set then improves the accuracy of the PINNs model. The failure probability, much like the error indicators in classical adaptive finite element methods, guides the refinement of the training set. Compared with the conventional PINNs method and the residual-based adaptive refinement method, the proposed algorithm significantly improves accuracy, especially for low-regularity and high-dimensional problems. We prove rigorous bounds on the error incurred by the proposed FI-PINNs and illustrate its performance on several problems.
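To make the failure-informed enrichment idea concrete, the following is a minimal sketch of residual-driven adaptive collocation for a toy 1D Poisson problem, written in PyTorch. It is not the authors' exact algorithm: the PDE, the network, the residual tolerance `eps_r`, the candidate pool size, and the stopping tolerance `tol_P` are all illustrative assumptions. It only shows the general pattern the abstract describes: estimate a failure probability from the residual and add new collocation points where the residual is large.

```python
# A minimal sketch (not the authors' exact algorithm) of failure-informed
# enrichment of collocation points, assuming the toy problem
#   -u''(x) = f(x) on (0, 1),  u(0) = u(1) = 0,
# a small fully connected network, and a residual tolerance eps_r.
# All names (eps_r, tol_P, x_cand, ...) are illustrative assumptions.
import math
import torch

torch.manual_seed(0)

# Manufactured right-hand side for u(x) = sin(pi x): f(x) = pi^2 sin(pi x).
def f(x):
    return (math.pi ** 2) * torch.sin(math.pi * x)

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def residual(x):
    """PDE residual r(x) = -u_xx(x) - f(x), computed with autograd."""
    x = x.clone().requires_grad_(True)
    u = net(x)
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    return -u_xx - f(x)

def train(x_col, steps=2000, lr=1e-3):
    """Standard PINN training: residual loss on collocation points + boundary loss."""
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    xb = torch.tensor([[0.0], [1.0]])  # homogeneous Dirichlet boundary
    for _ in range(steps):
        opt.zero_grad()
        loss = residual(x_col).pow(2).mean() + net(xb).pow(2).mean()
        loss.backward()
        opt.step()

# Failure-informed enrichment loop: a crude Monte Carlo estimate of the
# failure probability over a candidate pool, then enrichment of the
# training set with the points that fall in the failure region.
x_col = torch.rand(200, 1)        # initial collocation points
eps_r, tol_P = 0.1, 0.05          # residual tolerance / stopping tolerance
for it in range(5):
    train(x_col)
    x_cand = torch.rand(2000, 1)  # candidate points in the domain
    r = residual(x_cand).abs().detach().squeeze()
    failed = r > eps_r
    P_fail = failed.float().mean().item()  # estimated failure probability
    print(f"iteration {it}: estimated failure probability = {P_fail:.3f}")
    if P_fail < tol_P:
        break
    # place new collocation points only in the failure region
    x_col = torch.cat([x_col, x_cand[failed]], dim=0)
```

In this sketch the failure probability plays the role of the error indicator mentioned in the abstract: training stops refining once the estimated probability drops below the prescribed tolerance, and otherwise the training set grows only where the residual indicates the current model is unreliable.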