We present a dimension-reduced KRnet map approach (DR-KRnet) for high-dimensional Bayesian inverse problems, which is based on an explicit construction of a map that pushes forward the prior measure to the posterior measure in the latent space. Our approach consists of two main components: a data-driven VAE prior and a density approximation of the posterior of the latent variable. In practice, it may not be trivial to specify a prior distribution that is consistent with the available prior data; in other words, the complex prior information often lies beyond simple hand-crafted priors. We employ a variational autoencoder (VAE) to approximate the underlying distribution of the prior dataset, which is achieved through a latent variable and a decoder. Using the decoder provided by the VAE prior, we reformulate the problem in a low-dimensional latent space. In particular, we seek an invertible transport map, given by KRnet, to approximate the posterior distribution of the latent variable. Moreover, an efficient physics-constrained surrogate model requiring no labeled data is constructed to reduce the computational cost of solving the forward and adjoint problems involved in the likelihood computation. With numerical experiments, we demonstrate the accuracy and efficiency of DR-KRnet for high-dimensional Bayesian inverse problems.
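For concreteness, a minimal sketch of the latent-space formulation described above, with notation introduced here only for illustration (the decoder symbol $f_{\mathrm{dec}}$, latent dimension $r$, forward map $\mathcal{F}$, and noise model are assumptions, not fixed by the abstract):
% Latent-space reformulation (notation assumed for illustration): the VAE
% decoder f_dec maps a low-dimensional latent variable z to the unknown
% parameter u, the latent prior is standard Gaussian, and the likelihood is
% evaluated through the forward (observation) map F with additive noise.
\begin{align}
  u &= f_{\mathrm{dec}}(z), \qquad z \in \mathbb{R}^{r}, \quad r \ll \dim(u),\\
  p(z) &= \mathcal{N}(z;\, 0,\, I_r),\\
  d &= \mathcal{F}\bigl(f_{\mathrm{dec}}(z)\bigr) + \varepsilon,\\
  p(z \mid d) &\propto p\bigl(d \mid f_{\mathrm{dec}}(z)\bigr)\, p(z).
\end{align}
% KRnet supplies an invertible map T whose pushforward of a reference
% density p_Z (e.g., standard Gaussian) approximates the latent posterior:
\begin{equation}
  p_{T}(z) \;=\; p_{Z}\bigl(T^{-1}(z)\bigr)\,
  \bigl|\det \nabla_{z} T^{-1}(z)\bigr|
  \;\approx\; p(z \mid d).
\end{equation}
Samples of the parameter $u$ distributed according to the approximate posterior are then obtained by pushing reference samples through $T$ and the decoder, $u = f_{\mathrm{dec}}(T(z_0))$ with $z_0 \sim p_Z$.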