The recently proposed SLOPE estimator (arXiv:1407.3824) has been shown to adaptively achieve the minimax $\ell_2$ estimation rate under high-dimensional sparse linear regression models (arXiv:1503.08393). This minimax optimality holds in the regime where the sparsity level $k$, sample size $n$, and dimension $p$ satisfy $k/p \rightarrow 0$ and $k\log p/n \rightarrow 0$. In this paper, we characterize the estimation error of SLOPE in the complementary regime where both $k$ and $n$ scale linearly with $p$, providing new insights into the performance of SLOPE estimators. We first derive a concentration inequality for the finite-sample mean squared error (MSE) of SLOPE. The quantity that the MSE concentrates around takes a complicated and implicit form. Through a careful analysis of this quantity, we prove that among all SLOPE estimators, LASSO is optimal for estimating $k$-sparse parameter vectors without tied non-zero components in the low-noise scenario. In the large-noise scenario, by contrast, the family of SLOPE estimators is sub-optimal compared with bridge regression estimators such as Ridge.
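The abstract does not define the SLOPE estimator itself; for context, it solves $\hat\beta = \arg\min_b \tfrac12\|y - Xb\|_2^2 + \sum_i \lambda_i |b|_{(i)}$ with non-increasing weights $\lambda_1 \ge \cdots \ge \lambda_p \ge 0$ applied to the sorted magnitudes of $b$, and it reduces to LASSO when all $\lambda_i$ are equal. The following is a minimal sketch (assuming NumPy; the solver choice of proximal gradient with the sorted-$\ell_1$ prox computed via pool-adjacent-violators is one standard option, not the paper's specific implementation):

```python
import numpy as np

def prox_sorted_l1(v, lam):
    """Prox of the sorted-L1 norm J(b) = sum_i lam_i * |b|_(i), lam non-increasing.

    Sort |v| in decreasing order, subtract lam, project onto the non-increasing
    cone with pool-adjacent-violators (PAVA), clip at zero, restore signs/order.
    """
    sign = np.sign(v)
    u = np.abs(v)
    order = np.argsort(-u)            # indices sorting |v| in decreasing order
    w = u[order] - lam                # sequence to project onto {x1 >= x2 >= ...}
    vals, lens = [], []               # PAVA block stack: (block mean, block length)
    for wi in w:
        vals.append(wi); lens.append(1)
        while len(vals) > 1 and vals[-2] <= vals[-1]:   # monotonicity violation
            v2, l2 = vals.pop(), lens.pop()
            vals[-1] = (vals[-1] * lens[-1] + v2 * l2) / (lens[-1] + l2)
            lens[-1] += l2
    x = np.empty(len(w))
    pos = 0
    for val, l in zip(vals, lens):    # expand blocks back to a full vector
        x[pos:pos + l] = val
        pos += l
    x = np.maximum(x, 0.0)            # clip at zero
    out = np.empty(len(w))
    out[order] = x                    # undo the sort
    return sign * out

def slope(X, y, lam, n_iter=500):
    """Proximal-gradient solver for (1/2)||y - Xb||^2 + sum_i lam_i * |b|_(i)."""
    L = np.linalg.norm(X, 2) ** 2     # Lipschitz constant of the gradient
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        g = X.T @ (X @ b - y)
        b = prox_sorted_l1(b - g / L, lam / L)
    return b
```

With a constant weight sequence the prox becomes ordinary soft-thresholding, so `slope` recovers the LASSO solution; this is the sense in which the abstract treats LASSO as a member of the SLOPE family.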