Sparse Bayesian Learning (SBL) is a powerful framework for attaining sparsity in probabilistic models. Herein, we propose a coordinate ascent algorithm for SBL termed Relevance Matching Pursuit (RMP) and show that, as its noise variance parameter goes to zero, RMP exhibits a surprising connection to Stepwise Regression. Further, we derive novel guarantees for Stepwise Regression algorithms, which also shed light on RMP. Our guarantees for Forward Regression improve on deterministic and probabilistic results for Orthogonal Matching Pursuit with noise. Our analysis of Backward Regression on determined systems culminates in a bound on the residual of the optimal solution to the subset selection problem which, when satisfied, guarantees the optimality of the result. To our knowledge, this bound is the first that can be computed in polynomial time and depends chiefly on the smallest singular value of the matrix. We report numerical experiments with a variety of feature selection algorithms. Notably, RMP and its limiting variant are both efficient and maintain strong performance with correlated features.