We present a dimension-reduced KRnet map approach (DR-KRnet) for high-dimensional inverse problems, based on an explicit construction of a map that pushes forward the prior measure to the posterior measure in a latent space. Our approach consists of two main components: a data-driven VAE prior and a density approximation of the posterior of the latent variable. In practice, it may not be trivial to specify a prior distribution consistent with the available prior data; in other words, complex prior information is often beyond simple hand-crafted priors. We employ a variational autoencoder (VAE) to approximate the underlying distribution of the prior dataset, achieved through a latent variable and a decoder. Using the decoder provided by the VAE prior, we reformulate the problem in a low-dimensional latent space. In particular, we seek an invertible transport map given by KRnet to approximate the posterior distribution of the latent variable. Moreover, an efficient physics-constrained surrogate model, trained without any labeled data, is constructed to reduce the computational cost of solving both the forward and adjoint problems involved in likelihood computation. Numerical experiments are presented to demonstrate the validity, accuracy, and efficiency of DR-KRnet.
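The invertible transport map at the core of this approach relies on the change-of-variables formula: an invertible layer maps latent samples while tracking the log-determinant of its Jacobian, so the pushed-forward density can be evaluated exactly. As a minimal, hypothetical sketch (not the paper's implementation), the snippet below shows one affine coupling layer, the building block of flow models such as KRnet, with illustrative fixed linear scale/shift networks:

```python
import numpy as np

rng = np.random.default_rng(0)

d, h = 4, 2  # latent dimension and split size (illustrative choices)
# Hypothetical scale/shift "networks": fixed random linear maps for the sketch.
W_s = 0.1 * rng.standard_normal((h, h))
W_t = 0.1 * rng.standard_normal((h, h))

def coupling_forward(z):
    """Affine coupling layer: y1 = z1, y2 = z2 * exp(s(z1)) + t(z1)."""
    z1, z2 = z[:h], z[h:]
    s = np.tanh(W_s @ z1)          # bounded scale for numerical stability
    t = W_t @ z1
    y = np.concatenate([z1, z2 * np.exp(s) + t])
    log_det = s.sum()              # log |det Jacobian| of the transform
    return y, log_det

def coupling_inverse(y):
    """Exact inverse: recovers z from y using the same s, t computed from y1."""
    y1, y2 = y[:h], y[h:]
    s = np.tanh(W_s @ y1)
    t = W_t @ y1
    return np.concatenate([y1, (y2 - t) * np.exp(-s)])

z = rng.standard_normal(d)
y, log_det = coupling_forward(z)
z_rec = coupling_inverse(y)        # round-trips back to z
```

Stacking many such layers (with the roles of the two blocks alternating) yields an expressive invertible map whose density is tractable, which is what allows the posterior of the latent variable to be approximated and sampled efficiently.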