Many important problems in science and engineering require solving so-called parametric partial differential equations (PDEs), i.e., PDEs with varying physical parameters, boundary conditions, shapes of the computational domain, etc. Typical reduced-order modeling techniques accelerate the solution of parametric PDEs by projecting them onto a linear trial manifold constructed in an offline stage. These methods often require a predefined mesh as well as a series of precomputed solution snapshots, and may struggle to balance efficiency and accuracy due to the limitation of the linear ansatz. Utilizing the nonlinear representation capability of neural networks, we propose Meta-Auto-Decoder (MAD) to construct a nonlinear trial manifold, whose best possible performance is measured theoretically by the decoder width. Based on the meta-learning concept, the trial manifold can be learned in a mesh-free and unsupervised way during the pre-training stage. Fast adaptation to new (possibly heterogeneous) PDE parameters is enabled by searching on this trial manifold, and optionally fine-tuning the trial manifold at the same time. Extensive numerical experiments show that the MAD method exhibits faster convergence than other deep learning-based methods without losing accuracy.
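To make the pre-train/adapt workflow concrete, the following is a minimal sketch of a MAD-style setup on a toy parametric problem, not the authors' implementation. It assumes a decoder network that maps a spatial coordinate and a per-instance latent code to a solution value, an unsupervised physics-informed residual loss, and the hypothetical names `Decoder` and `pde_residual_loss`; the toy PDE (-u'' = mu*sin(pi x) with zero boundary values) is chosen only for illustration.

```python
# Minimal MAD-style sketch (assumed structure, not the paper's code).
# Toy problem: -u''(x) = mu*sin(pi*x) on (0,1), u(0)=u(1)=0,
# trained with a mesh-free, unsupervised PDE-residual loss.
import math
import torch
import torch.nn as nn

class Decoder(nn.Module):
    """Maps (x, z) to a solution value; z is a per-PDE-instance latent code."""
    def __init__(self, latent_dim=8, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1 + latent_dim, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 1))
    def forward(self, x, z):
        return self.net(torch.cat([x, z.expand(x.shape[0], -1)], dim=1))

def pde_residual_loss(decoder, z, mu, n=128):
    """Residual of -u'' = mu*sin(pi x) at random collocation points + boundary penalty."""
    x = torch.rand(n, 1, requires_grad=True)
    u = decoder(x, z)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    res = (-d2u - mu * torch.sin(math.pi * x)).pow(2).mean()
    xb = torch.tensor([[0.0], [1.0]])
    bc = decoder(xb, z).pow(2).mean()
    return res + 10.0 * bc

latent_dim = 8
decoder = Decoder(latent_dim)
train_mus = [0.5, 1.0, 1.5, 2.0]   # pre-training PDE parameters (illustrative)
codes = [torch.zeros(1, latent_dim, requires_grad=True) for _ in train_mus]

# Pre-training: jointly learn the decoder (nonlinear trial manifold)
# and one latent code per training PDE instance.
opt = torch.optim.Adam(list(decoder.parameters()) + codes, lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    loss = sum(pde_residual_loss(decoder, z, mu) for z, mu in zip(codes, train_mus))
    loss.backward()
    opt.step()

# Adaptation to a new parameter: search the trial manifold by optimizing a fresh
# latent code only (optionally the decoder weights could also be fine-tuned).
mu_new = 1.25
z_new = torch.zeros(1, latent_dim, requires_grad=True)
opt_ad = torch.optim.Adam([z_new], lr=1e-2)
for step in range(500):
    opt_ad.zero_grad()
    loss = pde_residual_loss(decoder, z_new, mu_new)
    loss.backward()
    opt_ad.step()
```

In this sketch the adaptation stage touches far fewer parameters than retraining a network from scratch, which is where the faster convergence reported in the abstract would come from; fine-tuning the decoder alongside the latent code is the optional variant mentioned above.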