Proximal Markov chain Monte Carlo is a recent construct lying at the intersection of Bayesian computation and convex optimization, and it has helped popularize the use of nondifferentiable priors in Bayesian statistics. Existing formulations of proximal MCMC, however, require hyperparameters and regularization parameters to be prespecified. In this work, we extend the proximal MCMC paradigm by introducing a new class of nondifferentiable priors called epigraph priors. As a proof of concept, we place trend filtering, originally a nonparametric regression problem, in a parametric setting to provide a posterior median fit along with credible intervals as measures of uncertainty. The key idea is to replace the nonsmooth term in the posterior density with its Moreau-Yosida envelope, which enables the application of the gradient-based MCMC sampler Hamiltonian Monte Carlo. The proposed method identifies the appropriate amount of smoothing in a data-driven way, thereby automating regularization parameter selection. Compared with conventional proximal MCMC methods, our method is mostly tuning-free, achieving simultaneous calibration of the mean, scale, and regularization parameters in a fully Bayesian framework.
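For context, the smoothing step rests on the standard Moreau-Yosida envelope; the following is a generic sketch of that construction rather than this paper's specific epigraph-prior formulation. For a nonsmooth term \(g\) and smoothing parameter \(\lambda > 0\),
\[
g^{\lambda}(x) \;=\; \min_{y}\,\Bigl\{ g(y) + \tfrac{1}{2\lambda}\lVert x - y \rVert_2^2 \Bigr\},
\qquad
\nabla g^{\lambda}(x) \;=\; \tfrac{1}{\lambda}\bigl(x - \operatorname{prox}_{\lambda g}(x)\bigr).
\]
Because \(g^{\lambda}\) is continuously differentiable with a \(1/\lambda\)-Lipschitz gradient and \(g^{\lambda} \to g\) as \(\lambda \to 0\), substituting it for the nonsmooth term in the log posterior yields a smooth surrogate whose gradients Hamiltonian Monte Carlo can exploit.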