Best subset selection (BSS) is widely regarded as the holy grail of high-dimensional variable selection. Nevertheless, the notorious NP-hardness of BSS substantially restricts its practical application and, to some extent, discourages its theoretical development, particularly in the current era of big data. In this paper, we investigate the variable selection properties of BSS when its target sparsity is greater than or equal to the true sparsity. Our main message is that BSS is robust against design dependence in terms of achieving model consistency and sure screening, and, more importantly, that such robustness can be propagated to near best subsets that are computationally tangible. Specifically, we introduce an identifiability margin condition that is free of restricted eigenvalues and show that it is sufficient and nearly necessary for BSS to exactly recover the true model. A relaxed version of this condition is also sufficient for BSS to achieve the sure screening property. Moreover, taking optimization error into account, we find that all the established statistical properties of the exact best subset carry over to any near best subset whose residual sum of squares is sufficiently close to that of the best one. In particular, a two-stage fully corrective iterative hard thresholding (IHT) algorithm provably finds a sparse sure screening subset within logarithmically many iterations; another round of exact BSS within this subset then recovers the true model. Simulation studies and real data examples show that IHT yields lower false discovery rates and higher true positive rates than competing approaches, including the LASSO, SCAD, and sure independence screening (SIS), especially under highly correlated designs.
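To make the screening stage concrete, below is a minimal numpy sketch of fully corrective IHT for sparse linear regression: each iteration takes a gradient step on the least-squares loss, hard-thresholds to the target sparsity k, and then refits ordinary least squares on the selected support (the "fully corrective" step). The function name, step-size rule, and stopping criterion are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def fully_corrective_iht(X, y, k, step=None, max_iter=100, tol=1e-8):
    """Fully corrective IHT sketch: gradient step, top-k hard thresholding,
    then a least-squares refit restricted to the selected support."""
    n, p = X.shape
    if step is None:
        # conservative step size: inverse Lipschitz constant of the gradient
        # of the least-squares loss (1/2n)||y - X beta||^2
        step = n / (np.linalg.norm(X, 2) ** 2)
    beta = np.zeros(p)
    for _ in range(max_iter):
        grad = X.T @ (X @ beta - y) / n
        z = beta - step * grad
        support = np.argsort(np.abs(z))[-k:]  # keep the k largest coordinates
        beta_new = np.zeros(p)
        # fully corrective step: exact least squares on the chosen support
        beta_new[support] = np.linalg.lstsq(X[:, support], y, rcond=None)[0]
        if np.linalg.norm(beta_new - beta) <= tol * max(np.linalg.norm(beta), 1.0):
            beta = beta_new
            break
        beta = beta_new
    return beta, np.flatnonzero(beta)

# Illustrative usage with a synthetic design (all quantities are hypothetical):
rng = np.random.default_rng(0)
n, p, s = 200, 1000, 5
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:s] = 2.0
y = X @ beta_true + 0.5 * rng.standard_normal(n)
beta_hat, screened = fully_corrective_iht(X, y, k=10)  # target sparsity k >= true sparsity s
```

In the two-stage procedure described above, the support `screened` returned by this stage would then be passed to an exact BSS solve restricted to those k candidate variables.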