The Sorted L-One Estimator (SLOPE) is a popular regularization method in regression that induces clustering of the estimated coefficients; that is, the estimator can have coefficients of identical magnitude. In this paper, we derive the asymptotic distribution of SLOPE for the ordinary least squares, Huber, and quantile loss functions, and use it to study the clustering behavior in the limit. This requires a stronger type of convergence, since the clustering properties do not follow from classical weak convergence alone. To this end, we employ the Hausdorff distance, which provides a suitable notion of convergence for the penalty subdifferentials and a bridge toward weak convergence of the clustering pattern. We establish asymptotic control of the false discovery rate under an asymptotically orthogonal design of the regressors. We also show how to extend the framework to a broader class of regularizers beyond SLOPE.
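As a minimal illustration of the clustering behavior described above, the following sketch implements the proximal operator of the sorted-L1 (SLOPE) penalty via the pool-adjacent-violators algorithm (PAVA). The function name and inputs are hypothetical, not taken from the paper; ties among the output magnitudes are precisely the clusters SLOPE induces.

```python
import numpy as np

def prox_slope(y, lam):
    """Proximal operator of the sorted-L1 (SLOPE) penalty (illustrative sketch).

    Solves  argmin_b  0.5 * ||b - y||^2 + sum_i lam_i * |b|_(i),
    where lam is nonincreasing and |b|_(1) >= |b|_(2) >= ... are the
    sorted magnitudes of b. Strategy: sort |y| in decreasing order,
    subtract lam, project onto the nonincreasing cone with PAVA,
    clip at zero, then restore the original order and signs.
    """
    sign = np.sign(y)
    abs_y = np.abs(y)
    order = np.argsort(abs_y)[::-1]          # indices sorting |y| descending
    z = abs_y[order] - lam                   # shifted sorted magnitudes
    # Pool-adjacent-violators: merge adjacent blocks that violate
    # monotonicity, replacing each block by its weighted average.
    vals, weights = [], []
    for v in z:
        vals.append(v)
        weights.append(1.0)
        while len(vals) > 1 and vals[-1] > vals[-2]:
            w = weights[-1] + weights[-2]
            vals[-2] = (vals[-1] * weights[-1] + vals[-2] * weights[-2]) / w
            weights[-2] = w
            vals.pop()
            weights.pop()
    x = np.repeat(vals, np.array(weights, dtype=int))
    x = np.maximum(x, 0.0)                   # magnitudes are nonnegative
    out = np.empty_like(x)
    out[order] = x                           # undo the sorting
    return sign * out
```

For example, with `y = [4.0, -3.5, 1.0]` and `lam = [2.0, 1.0, 0.5]` the first two coefficients are pooled to a common magnitude, yielding `[2.25, -2.25, 0.5]`: the averaging step of PAVA is exactly what produces coefficients of identical magnitude.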