The efficient approximation of parametric PDEs is of tremendous importance in science and engineering. In this paper, we show how one can train Galerkin discretizations to efficiently learn quantities of interest of solutions to a parametric PDE. The central component in our approach is an efficient neural-network-weighted Minimal-Residual formulation, which, after training, provides Galerkin-based approximations in standard discrete spaces that have accurate quantities of interest, regardless of the coarseness of the discrete space.