The Bayesian Lasso is constructed in the linear regression framework and applies Gibbs sampling to estimate the regression parameters. This paper develops a new sparse learning model, named the Bayesian Lasso Sparse (BLS) model, which adopts the hierarchical model formulation of the Bayesian Lasso. The main difference from the original Bayesian Lasso lies in the estimation procedure: the BLS method uses a learning algorithm based on the type-II maximum likelihood procedure. In contrast to the Bayesian Lasso, the BLS provides sparse estimates of the regression parameters. The BLS method is also extended to nonlinear supervised learning problems by introducing kernel functions. We compare the BLS model to the well-known Relevance Vector Machine, the Fast Laplace method, the Bayesian Lasso, and the Lasso on both simulated and real data. The numerical results show that the BLS is sparse and precise, especially when dealing with noisy and irregular datasets.
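For context, the hierarchical formulation of the Bayesian Lasso referred to above is the standard scale-mixture-of-normals representation of the Laplace prior; a minimal sketch, assuming a response $y \in \mathbb{R}^n$, design matrix $X$, coefficients $\beta \in \mathbb{R}^p$, and shrinkage parameter $\lambda$, is:
\begin{align*}
  y \mid X, \beta, \sigma^2 &\sim \mathcal{N}\!\left(X\beta,\ \sigma^2 I_n\right), \\
  \beta \mid \sigma^2, \tau_1^2, \dots, \tau_p^2 &\sim \mathcal{N}\!\left(0,\ \sigma^2 D_\tau\right), \qquad D_\tau = \mathrm{diag}(\tau_1^2, \dots, \tau_p^2), \\
  \tau_j^2 &\sim \mathrm{Exp}\!\left(\lambda^2/2\right), \qquad j = 1, \dots, p, \\
  \sigma^2 &\sim \pi(\sigma^2) \propto 1/\sigma^2 .
\end{align*}
Marginalizing over the $\tau_j^2$ recovers independent Laplace priors on the $\beta_j$, which is the link to the Lasso penalty; the original Bayesian Lasso samples this hierarchy with Gibbs steps, whereas the BLS instead estimates the hyperparameters by type-II maximum likelihood.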