Modern statistical learning algorithms are capable of amazing flexibility, but struggle with interpretability. One possible solution is sparsity: performing inference such that many of the parameters are estimated as identically 0, which may be imposed through the use of nonsmooth penalties such as the $\ell_1$ penalty. However, the $\ell_1$ penalty introduces significant bias when high sparsity is desired. In this article, we retain the $\ell_1$ penalty, but define learnable penalty weights $\lambda_p$ endowed with hyperpriors. We begin by investigating the optimization problem this poses, developing a proximal operator associated with the $\ell_1$ norm. We then study the theoretical properties of this variable-coefficient $\ell_1$ penalty in the context of penalized likelihood. Next, we investigate the application of this penalty to Variational Bayes, developing a model we call the Sparse Bayesian Lasso, which allows behavior qualitatively similar to Lasso regression to be applied to arbitrary variational models. In simulation studies, this gives us the uncertainty quantification and low-bias properties of simulation-based approaches at an order of magnitude less computational cost. Finally, we apply our methodology to a Bayesian lagged spatiotemporal regression model of internal displacement during the Iraqi Civil War of 2013--2017.
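For context, the proximal operator of a fixed-weight $\ell_1$ penalty is the classical element-wise soft-thresholding map,
$$\operatorname{prox}_{\lambda_p |\cdot|}(v_p) = \operatorname{sign}(v_p)\,\max\{|v_p| - \lambda_p,\ 0\},$$
which sets $v_p$ exactly to $0$ whenever $\lambda_p \geq |v_p|$. This fixed-$\lambda_p$ form is standard background, not the article's contribution; the operator developed here instead treats the weights $\lambda_p$ as learnable.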