Regularization is a common tool in variational inverse problems for imposing assumptions on the parameters of the problem. One such assumption is sparsity, which is commonly promoted using lasso and total variation-like regularization. Although the solutions to many such regularized inverse problems can be interpreted as modes (maximum a posteriori estimates) of well-chosen posterior distributions, samples from these distributions are generally not sparse. In this paper, we present a framework for implicitly defining a probability distribution that combines the effects of sparsity-imposing regularization with Gaussian distributions. Unlike continuous distributions, these implicit distributions can assign positive probability to sparse vectors. We study these regularized distributions for various regularization functions, including total variation regularization and piecewise linear convex functions. We apply the developed theory to uncertainty quantification for Bayesian linear inverse problems and derive a Gibbs sampler for a Bayesian hierarchical model. To illustrate the difference between our sparsity-inducing framework and continuous distributions, we apply it to small-scale deblurring and computed tomography examples.
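The following is a minimal sketch of the core idea, assuming the implicit distribution is realized as the pushforward of a Gaussian through the solution map of a regularized least-squares problem; the lasso penalty, the value lam = 1.0, and the dimensions are illustrative choices, not the paper's specific setup. Because the lasso proximal map (soft-thresholding) sends a set of positive Gaussian measure to exactly zero, the resulting samples are sparse with positive probability, unlike samples from any continuous distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def prox_l1(z, lam):
    """Proximal operator of lam * ||.||_1, i.e. soft-thresholding:
    argmin_u 0.5 * ||u - z||^2 + lam * ||u||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

# Draw Gaussian samples and push them through the regularized
# least-squares solution map (here, the lasso prox).
n_samples, dim, lam = 10_000, 50, 1.0
z = rng.standard_normal((n_samples, dim))  # z ~ N(0, I)
x = prox_l1(z, lam)                        # samples from the implicit distribution

# Each coordinate with |z_i| <= lam is mapped to exactly zero,
# so the pushforward assigns positive probability to sparse vectors.
print("mean fraction of exact zeros per sample:", np.mean(x == 0.0))
```

Replacing the lasso prox with the proximal operator of another convex regularizer (e.g., total variation) yields an analogous implicit distribution for that regularizer.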