Markov chain Monte Carlo (MCMC) methods form one of the algorithmic foundations of high-dimensional Bayesian inverse problems. The recent development of likelihood-informed subspace (LIS) methods offers a viable route to designing efficient MCMC methods for exploring high-dimensional posterior distributions by exploiting the intrinsic low-dimensional structure of the underlying inverse problem. However, existing LIS methods and the associated performance analyses often assume that the prior distribution is Gaussian. This assumption is limiting for inverse problems that aim to promote sparsity in the parameter estimate, where heavy-tailed priors, e.g., the Laplace distribution or the elastic net commonly used in the Bayesian LASSO, are often needed. To overcome this limitation, we consider a prior normalization technique that transforms any non-Gaussian (e.g., heavy-tailed) prior into a standard Gaussian distribution, making it possible to implement LIS methods to accelerate MCMC sampling via such transformations. We also rigorously investigate the integration of such transformations with several MCMC methods for high-dimensional problems. Finally, we demonstrate various aspects of our theoretical claims on two nonlinear inverse problems.
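For a product-form prior, the normalization described above can be realized componentwise by composing the prior's quantile function with the standard Gaussian CDF. The sketch below, which is an illustration rather than the paper's implementation, assumes an i.i.d. standard Laplace(0, 1) prior on each coordinate and shows the transport `T = F_Laplace^{-1} ∘ Φ` mapping a Gaussian reference variable to a prior sample, together with its inverse:

```python
import math
from statistics import NormalDist

_std_normal = NormalDist()  # standard Gaussian reference, Phi and Phi^{-1}

def laplace_ppf(u: float) -> float:
    """Quantile function F^{-1} of the standard Laplace(0, 1) distribution."""
    return math.log(2.0 * u) if u < 0.5 else -math.log(2.0 * (1.0 - u))

def laplace_cdf(x: float) -> float:
    """CDF F of the standard Laplace(0, 1) distribution."""
    return 0.5 * math.exp(x) if x < 0.0 else 1.0 - 0.5 * math.exp(-x)

def to_prior(z: float) -> float:
    """Push a Gaussian reference sample through T = F^{-1} o Phi."""
    return laplace_ppf(_std_normal.cdf(z))

def to_reference(x: float) -> float:
    """Inverse transport T^{-1} = Phi^{-1} o F, back to the Gaussian reference."""
    return _std_normal.inv_cdf(laplace_cdf(x))
```

With this pair of maps, an MCMC chain can be run entirely in the Gaussian reference coordinates (where LIS constructions apply), pushing each state through `to_prior` whenever the likelihood must be evaluated at the original parameter.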