Maximum a posteriori (MAP) estimation, like all Bayesian methods, depends on prior assumptions. These assumptions are often chosen to promote specific features in the recovered estimate. The form of the chosen prior determines the shape of the posterior distribution, and thus the behavior of the estimator and the complexity of the associated optimization problem. Here, we consider a family of Gaussian hierarchical models with generalized gamma hyperpriors designed to promote sparsity in linear inverse problems. By varying the hyperparameters, we move continuously between priors that act as smoothed $\ell_p$ penalties with flexible $p$, smoothing, and scale. We then introduce a predictor-corrector method that tracks MAP solution paths as the hyperparameters vary. Path following allows a user to explore the space of possible MAP solutions and to test the sensitivity of solutions to changes in the prior assumptions. By tracing paths from a convex region to a non-convex region, the user can find local minimizers in strongly sparsity-promoting regimes that are consistent with a convex relaxation derived using related prior assumptions. We show experimentally that these solutions are less error-prone than direct optimization of the non-convex problem.
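To make the path-following idea concrete, the following is a minimal sketch, not the authors' algorithm: it uses a zeroth-order predictor (warm start) and an L-BFGS corrector to track a local minimizer of a smoothed $\ell_p$-penalized least-squares objective, $J(x; p) = \tfrac{1}{2}\|Ax - b\|^2 + \lambda \sum_i (x_i^2 + \epsilon)^{p/2}$, as $p$ decreases from the convex $p = 1$ regime into the non-convex $p < 1$ regime. The problem sizes, penalty form, and all parameter values (`lam`, `eps`, the grid of `p` values) are illustrative assumptions, not the paper's generalized gamma hyperprior parameterization.

```python
# Hedged sketch: continuation from a convex to a non-convex regime for
#   J(x; p) = 0.5*||A x - b||^2 + lam * sum_i (x_i^2 + eps)^(p/2).
# All data and parameters below are synthetic/illustrative.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
m, n = 40, 100
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = 1.0      # sparse ground truth
b = A @ x_true + 0.01 * rng.standard_normal(m)     # noisy measurements
lam, eps = 0.05, 1e-3                              # penalty weight, smoothing

def J(x, p):
    r = A @ x - b
    return 0.5 * r @ r + lam * np.sum((x**2 + eps) ** (p / 2))

def grad_J(x, p):
    # d/dx_i (x_i^2 + eps)^(p/2) = p * x_i * (x_i^2 + eps)^(p/2 - 1)
    return A.T @ (A @ x - b) + lam * p * x * (x**2 + eps) ** (p / 2 - 1)

# Path following: solve the (convex) p = 1 problem first, then use each
# solution as the predictor for the next, slightly smaller p; the corrector
# is a local minimization started from that warm start.
x = np.zeros(n)
for p in np.linspace(1.0, 0.5, 11):
    res = minimize(J, x, args=(p,), jac=grad_J, method="L-BFGS-B")
    x = res.x
    print(f"p = {p:.2f}  J = {res.fun:.4f}  nnz ~ {np.sum(np.abs(x) > 1e-2)}")
```

The warm start is what distinguishes this from direct optimization at small $p$: each local minimizer is continued from the previous one, so the final non-convex solution stays consistent with the convex starting point, which is the behavior the abstract attributes to path following.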