The lasso is the most famous sparse regression and feature selection method. One reason for its popularity is the speed at which the underlying optimization problem can be solved. Sorted L-One Penalized Estimation (SLOPE) is a generalization of the lasso with appealing statistical properties. Despite this, the method has not yet seen widespread adoption, in large part because current software packages that fit SLOPE rely on algorithms that perform poorly in high dimensions. To tackle this issue, we propose a new fast algorithm to solve the SLOPE optimization problem, which combines proximal gradient descent and proximal coordinate descent steps. We provide new results on the directional derivative of the SLOPE penalty and its related SLOPE thresholding operator, and we provide convergence guarantees for our proposed solver. In extensive benchmarks on simulated and real data, we show that our method outperforms a long list of competing algorithms.
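To make the SLOPE thresholding operator mentioned above concrete, here is a minimal sketch of the proximal operator of the sorted-L1 penalty, following the well-known stack-based pool-adjacent-violators scheme (the function name `prox_sorted_l1` and this particular implementation are illustrative, not the paper's solver):

```python
import numpy as np

def prox_sorted_l1(y, lam):
    """Evaluate argmin_x 0.5 * ||x - y||^2 + sum_i lam_i * |x|_(i),
    where lam is a nonincreasing vector of penalties and |x|_(i)
    denotes the i-th largest entry of |x| (the sorted-L1 / SLOPE prox).
    """
    sign = np.sign(y)
    ay = np.abs(y)
    order = np.argsort(-ay)        # indices sorting |y| in decreasing order
    z = ay[order] - lam            # shifted magnitudes, to be made nonincreasing

    # Pool adjacent violators: merge blocks while averages violate monotonicity.
    blocks = []                    # each block stored as [total, count]
    for v in z:
        total, count = v, 1
        while blocks and blocks[-1][0] / blocks[-1][1] <= total / count:
            t, c = blocks.pop()
            total += t
            count += c
        blocks.append([total, count])

    # Expand block averages, clip at zero, undo the sort, restore signs.
    x_sorted = np.concatenate([np.full(c, max(t / c, 0.0)) for t, c in blocks])
    x = np.empty_like(x_sorted)
    x[order] = x_sorted
    return sign * x
```

With all penalty weights equal, this reduces to ordinary soft-thresholding, which is the lasso special case; with decreasing weights it can cluster coefficients to a common magnitude, a hallmark of SLOPE.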