Reciprocal LASSO (rLASSO) regularization employs a penalty function that decreases with coefficient magnitude, in contrast to conventional penalization methods whose penalties increase with the coefficients, leading to stronger parsimony and superior model selection relative to traditional shrinkage methods. Here we consider a fully Bayesian formulation of the rLASSO problem, based on the observation that the rLASSO estimate of the linear regression parameters can be interpreted as a Bayesian posterior mode estimate when the regression parameters are assigned independent inverse Laplace priors. Bayesian inference from this posterior is possible using an expanded hierarchy motivated by a scale mixture of double Pareto or truncated normal distributions. On simulated and real datasets, we show that the Bayesian formulation outperforms its classical cousin in estimation, prediction, and variable selection across a wide range of scenarios, while offering the advantage of posterior inference. Finally, we discuss other variants of this new approach and provide a unified framework for variable selection using flexible reciprocal penalties. All methods described in this paper are publicly available as an R package at https://github.com/himelmallick/BayesRecipe.
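For concreteness, the rLASSO estimate referenced above is typically defined through a penalized least-squares problem of the following form (a sketch in standard notation, with y the response vector, X the design matrix, β the p-dimensional coefficient vector, and λ > 0 a tuning parameter; none of these symbols are defined in the abstract itself):

\[
\hat{\beta}_{\mathrm{rLASSO}} = \arg\min_{\beta} \left\{ \lVert y - X\beta \rVert_2^{2} + \sum_{j=1}^{p} \frac{\lambda}{\lvert \beta_j \rvert}\, \mathbb{1}(\beta_j \neq 0) \right\},
\]

so the penalty on each coefficient decreases as its magnitude grows, which is the sense in which the penalty function is decreasing. The posterior-mode interpretation follows because an independent inverse Laplace prior, \(\pi(\beta_j \mid \lambda) = \frac{\lambda}{2\beta_j^{2}} \exp(-\lambda/\lvert\beta_j\rvert)\), contributes the reciprocal term \(\lambda/\lvert\beta_j\rvert\) to the negative log-posterior; the precise correspondence in the paper may differ in constants and lower-order terms.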
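To make the contrast with an increasing penalty tangible, the following short R snippet (illustrative only; it is not part of the BayesRecipe package API, which this abstract does not describe) plots the reciprocal penalty against the standard LASSO penalty on a grid of coefficient magnitudes:

# Illustrative sketch: reciprocal (decreasing) vs. LASSO (increasing) penalty.
# lambda is an arbitrary tuning value chosen for display purposes.
lambda <- 1
beta   <- seq(0.1, 5, by = 0.1)

rlasso_penalty <- lambda / abs(beta)   # decreases as |beta| grows
lasso_penalty  <- lambda * abs(beta)   # increases as |beta| grows

plot(beta, rlasso_penalty, type = "l", xlab = "|beta|", ylab = "penalty")
lines(beta, lasso_penalty, lty = 2)
legend("topright", legend = c("rLASSO", "LASSO"), lty = c(1, 2))

Large coefficients are penalized less under the reciprocal penalty, which encourages a few strong signals while pushing small coefficients toward exact zero.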