The Sorted L-One Estimator (SLOPE) is a popular regularization method in regression that induces clustering of the estimated coefficients; that is, the estimator can have coefficients of identical magnitude. In this paper, we derive the asymptotic distribution of SLOPE for the ordinary least squares, Huber, and quantile loss functions, and use it to study the clustering behavior in the limit. This requires a stronger type of convergence, since the clustering properties do not follow from classical weak convergence alone. We establish asymptotic control of the false discovery rate under an asymptotically orthogonal design of the regressors. We also show how to extend the framework to a broader class of regularizers beyond SLOPE.
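To make the penalty concrete, the sorted-ℓ1 norm underlying SLOPE weights the k-th largest coefficient magnitude by the k-th largest tuning parameter. Below is a minimal sketch of this penalty; the function name and the specific weight sequence are illustrative choices, not from the paper.

```python
import numpy as np

def sorted_l1_norm(beta, lam):
    """SLOPE penalty: sum_k lam[k] * |beta|_(k), where |beta|_(1) >= |beta|_(2) >= ...
    are the coefficient magnitudes sorted in decreasing order, and lam is a
    nonincreasing, nonnegative weight sequence."""
    abs_sorted = np.sort(np.abs(beta))[::-1]  # magnitudes in decreasing order
    return float(np.dot(lam, abs_sorted))

# Illustrative values (hypothetical, chosen only for this sketch):
beta = np.array([3.0, -1.0, 2.0])
lam = np.array([1.5, 1.0, 0.5])  # nonincreasing weights
print(sorted_l1_norm(beta, lam))  # 3*1.5 + 2*1.0 + 1*0.5 = 7.0
```

Because larger magnitudes receive larger weights, the penalty is minimized more cheaply when coefficients share a magnitude, which is the source of the clustering behavior the abstract describes.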