In linear regression, SLOPE is a new convex optimization method that generalizes the Lasso via the sorted L1 penalty: larger fitted coefficients are penalized more heavily. This magnitude-dependent regularization requires a penalty sequence $\lambda$ as input, rather than the single scalar penalty used by the Lasso, which makes designing the penalty computationally expensive. In this paper, we propose two efficient algorithms for designing the possibly high-dimensional SLOPE penalty so as to minimize the mean squared error. For Gaussian data matrices, we propose a first-order Projected Gradient Descent (PGD) method under the Approximate Message Passing (AMP) regime. For general data matrices, we present a zeroth-order Coordinate Descent (CD) method that designs a sub-class of SLOPE penalties, referred to as k-level SLOPE. Our CD allows a useful trade-off between accuracy and computation speed. We demonstrate the performance of SLOPE with our designs via extensive experiments on synthetic data and real-world datasets.
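To make the penalty concrete, below is a minimal NumPy sketch of the sorted L1 penalty, together with a 2-level penalty sequence as an instance of k-level SLOPE with k = 2. The function name `sorted_l1_penalty` and the particular numeric values are ours, chosen for illustration only.

```python
import numpy as np

def sorted_l1_penalty(beta, lam):
    # Sorted L1 (SLOPE) penalty: sum_i lam[i] * |beta|_(i),
    # where |beta|_(1) >= ... >= |beta|_(p) and lam is non-increasing,
    # so larger coefficients are matched with larger penalties.
    abs_desc = np.sort(np.abs(beta))[::-1]  # coefficient magnitudes, largest first
    return float(np.dot(lam, abs_desc))

# A 2-level penalty sequence (hypothetical values): the largest half of the
# coefficients receive penalty 1.0, the remaining half receive 0.5.
p = 6
lam = np.repeat([1.0, 0.5], p // 2)          # non-increasing: [1, 1, 1, .5, .5, .5]
beta = np.array([3.0, -0.2, 1.5, 0.0, -2.0, 0.4])
print(sorted_l1_penalty(beta, lam))          # 3 + 2 + 1.5 + 0.2 + 0.1 + 0 = 6.8
```

Note that when all entries of `lam` are equal, the penalty reduces to the ordinary Lasso L1 penalty; a k-level sequence only requires choosing k distinct penalty values rather than a full length-p sequence.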