We consider a high-dimensional linear regression problem. Unlike many papers on the topic, we do not require sparsity of the regression coefficients; instead, our main structural assumption is a decay of the eigenvalues of the covariance matrix of the data. We propose a new family of estimators, called the canonical thresholding estimators, which pick the largest regression coefficients in the canonical form. The estimators admit an explicit form and can be linked to LASSO and Principal Component Regression (PCR). A theoretical analysis is provided for both the fixed design and the random design settings. The obtained bounds on the mean squared error and on the prediction error of a specific estimator from the family allow us to state clearly sufficient conditions on the decay of the eigenvalues that ensure convergence. In addition, we promote the use of relative errors, which are strongly linked to the out-of-sample $R^2$. The study of these relative errors leads to a new concept of joint effective dimension, which incorporates the covariance of the data and the regression coefficients simultaneously and describes the complexity of a linear regression problem. Some minimax lower bounds are established to showcase the optimality of our procedure. Numerical simulations confirm the good performance of the proposed estimators compared to previously developed methods.
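To fix ideas, the display below gives one plausible reading of a canonical-form thresholding rule; the notation ($\Sigma$, $\lambda_j$, $u_j$, $\theta_j$, the empirical coefficients $\widehat{\theta}_j$, and the threshold $t$) is illustrative, and the precise construction in the paper may differ.
\begin{align*}
  \Sigma &= \sum_{j=1}^{p} \lambda_j\, u_j u_j^{\top},
    && \text{eigendecomposition of the covariance, } \lambda_1 \ge \lambda_2 \ge \cdots,\\
  \theta_j &= \sqrt{\lambda_j}\, u_j^{\top}\beta,
    && \text{canonical coefficients, so that } x^{\top}\beta = \textstyle\sum_j \theta_j\, \lambda_j^{-1/2} u_j^{\top} x,\\
  \widehat{\theta}_j^{\,t} &= \widehat{\theta}_j\,\mathbf{1}\{|\widehat{\theta}_j| > t\},
    && \text{keep only the largest empirical canonical coefficients},\\
  \widehat{\beta} &= \sum_{j=1}^{p} \lambda_j^{-1/2}\,\widehat{\theta}_j^{\,t}\, u_j,
    && \text{map the thresholded coefficients back to the original coordinates.}
\end{align*}
Under this reading, retaining only the leading indices $j \le k$ would recover a PCR-type estimator, while selecting coefficients by magnitude is closer in spirit to LASSO-type thresholding, consistent with the links mentioned above.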