Multifidelity simulation methodologies are often used to judiciously combine low-fidelity and high-fidelity simulation results, increasing accuracy while reducing cost. Candidates for this approach are simulation methodologies whose fidelity differences are tied to significant differences in computational cost. Physics-informed Neural Networks (PINNs) are natural candidates for such an approach because of the large differences in training time that arise when different fidelities (expressed in terms of architecture width and depth as well as optimization criteria) are employed. In this paper, we propose a particular multifidelity approach applied to PINNs that exploits low-rank structure. We demonstrate that width, depth, and optimization criteria can serve as parameters related to model fidelity, and provide numerical evidence of the training-cost differences induced by these fidelity parameter choices. We test our multifidelity scheme on various canonical forward PDE models that have been presented in the emerging PINNs literature.
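To make the notion of fidelity parameters concrete, the following is a minimal sketch (not the authors' implementation) of how network width and depth can define low- and high-fidelity PINNs sharing the same physics-informed residual loss; the helper `make_pinn`, the chosen widths/depths, and the 1D heat-equation residual are illustrative assumptions only.

```python
# Sketch: width and depth as fidelity parameters for a PINN (assumed example).
import torch
import torch.nn as nn

def make_pinn(width: int, depth: int) -> nn.Sequential:
    """Fully connected tanh network mapping (x, t) -> u(x, t)."""
    layers = [nn.Linear(2, width), nn.Tanh()]
    for _ in range(depth - 1):
        layers += [nn.Linear(width, width), nn.Tanh()]
    layers += [nn.Linear(width, 1)]
    return nn.Sequential(*layers)

# Low-fidelity: narrow and shallow, cheap to train (illustrative sizes).
low_fidelity = make_pinn(width=10, depth=2)
# High-fidelity: wider and deeper, more expensive to train (illustrative sizes).
high_fidelity = make_pinn(width=50, depth=6)

def residual_loss(model: nn.Sequential, xt: torch.Tensor) -> torch.Tensor:
    """PDE-residual loss for the 1D heat equation u_t = u_xx at collocation points."""
    xt = xt.clone().requires_grad_(True)
    u = model(xt)
    grads = torch.autograd.grad(u, xt, torch.ones_like(u), create_graph=True)[0]
    u_x, u_t = grads[:, 0:1], grads[:, 1:2]
    u_xx = torch.autograd.grad(u_x, xt, torch.ones_like(u_x), create_graph=True)[0][:, 0:1]
    return ((u_t - u_xx) ** 2).mean()

xt = torch.rand(128, 2)  # random collocation points in (x, t)
print(residual_loss(low_fidelity, xt).item())
print(residual_loss(high_fidelity, xt).item())
```

The same physics-informed loss applies at either fidelity; only the architecture (and hence training cost) changes, which is the sense in which width and depth act as fidelity parameters here.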