Ordered Weighted $L_{1}$ (OWL) regularized regression is a new approach to high-dimensional sparse learning. Proximal gradient methods are the standard approach to solving OWL regression. However, solving OWL regression remains challenging, owing to its considerable computational cost and memory usage when the feature or sample size is large. In this paper, we propose the first safe screening rule for OWL regression, which overcomes the difficulty posed by the non-separable regularizer by iteratively exploring the unknown order structure of the primal solution. The rule avoids updating parameters whose coefficients are guaranteed to be zero during the learning process. More importantly, the proposed screening rule can be readily applied to both standard and stochastic proximal gradient methods. Moreover, we prove that algorithms equipped with our screening rule are guaranteed to produce results identical to those of the original algorithms. Experimental results on a variety of datasets show that our screening rule leads to significant computational gains without any loss of accuracy, compared with existing competitive algorithms.
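For concreteness, the OWL regularizer is $\Omega_w(\beta) = \sum_{i=1}^{p} w_i\,|\beta|_{(i)}$, where $w_1 \ge \cdots \ge w_p \ge 0$ are fixed weights and $|\beta|_{(1)} \ge \cdots \ge |\beta|_{(p)}$ are the coefficient magnitudes sorted in non-increasing order. The sketch below illustrates the standard proximal operator of this norm (sorting followed by isotonic regression), the building block of the proximal gradient baselines mentioned above; it is a generic illustration rather than the paper's screening algorithm, and the function name `prox_owl` is ours.

```python
import numpy as np
from sklearn.isotonic import isotonic_regression


def prox_owl(v, w):
    """Proximal operator of the OWL norm:
        argmin_x 0.5 * ||x - v||^2 + sum_i w_i * |x|_(i).

    A standard sort-plus-isotonic-regression construction (illustrative,
    not the paper's code). `v` is the input vector; `w` holds non-negative
    weights sorted in non-increasing order.
    """
    # Sort |v| into non-increasing order, remembering the permutation.
    order = np.argsort(-np.abs(v))
    u = np.abs(v)[order]
    # Shift by the weights, then project onto the non-increasing cone (PAVA).
    x = isotonic_regression(u - w, increasing=False)
    # Magnitudes cannot be negative; clipping preserves monotonicity.
    x = np.maximum(x, 0.0)
    # Undo the sort and restore the original signs.
    out = np.empty_like(x)
    out[order] = x
    return np.sign(v) * out
```

A proximal gradient step for the least-squares objective $\frac{1}{2}\|X\beta - y\|_2^2 + \Omega_w(\beta)$ with step size `eta` is then `beta = prox_owl(beta - eta * X.T @ (X @ beta - y), eta * w)`; the screening rule proposed above would additionally discard, before such updates, the coordinates certified to be zero.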