Reciprocal LASSO (rLASSO) regularization employs a decreasing penalty function, in contrast to conventional penalization approaches whose penalties increase with coefficient magnitude, leading to stronger parsimony and superior model selection relative to traditional shrinkage methods. Here we consider a fully Bayesian formulation of the rLASSO problem, which is based on the observation that the rLASSO estimate for linear regression parameters can be interpreted as a Bayesian posterior mode estimate when the regression parameters are assigned independent inverse Laplace priors. Bayesian inference from this posterior is possible using an expanded hierarchy motivated by a scale mixture of double Pareto or truncated normal distributions. On simulated and real datasets, we show that the Bayesian formulation outperforms its classical cousin in estimation, prediction, and variable selection across a wide range of scenarios while offering the advantage of posterior inference. Finally, we discuss other variants of this new approach and provide a unified framework for variable selection using flexible reciprocal penalties. All methods described in this paper are publicly available as an R package at: https://github.com/himelmallick/BayesRecipe.
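To make the posterior-mode interpretation concrete, the following minimal R sketch (not part of the paper or the BayesRecipe API; the simulated data, the choices lambda = 1 and unit error variance, and the function names rlasso_objective and neg_log_posterior are all illustrative assumptions) checks that the classical rLASSO objective coincides, up to an additive constant, with the negative log posterior when each coefficient receives an independent prior with kernel exp(-lambda / |beta_j|):

```r
## Minimal sketch: rLASSO objective vs. negative log posterior under
## independent priors with kernel exp(-lambda / |beta_j|).
## All names and settings here are illustrative assumptions.
set.seed(1)
n <- 50; p <- 3
X <- matrix(rnorm(n * p), n, p)
y <- X %*% c(2, -3, 0) + rnorm(n)   # simulated response, one truly null coefficient
lambda <- 1

## Classical rLASSO objective: residual sum of squares plus a penalty that
## DECREASES in |beta_j| (the reciprocal penalty), evaluated at nonzero beta.
rlasso_objective <- function(beta) {
  sum((y - X %*% beta)^2) / 2 + lambda * sum(1 / abs(beta))
}

## Negative log posterior (up to an additive constant) under Gaussian errors
## and independent priors proportional to exp(-lambda / |beta_j|): the prior
## contributes exactly the reciprocal penalty, so minimizing this is the same
## as minimizing the rLASSO objective, i.e. the posterior mode is the rLASSO
## estimate.
neg_log_posterior <- function(beta) {
  sum((y - X %*% beta)^2) / 2 + sum(lambda / abs(beta))
}

beta <- c(1.5, -2.5, 0.7)
all.equal(rlasso_objective(beta), neg_log_posterior(beta))  # TRUE
```

Because the penalty grows without bound as a nonzero coefficient shrinks toward zero, small-but-nonzero estimates are heavily discouraged, which is the source of the stronger parsimony claimed above.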