In this paper, we leverage over-parameterization to design regularization-free algorithms for the high-dimensional single index model and provide theoretical guarantees for the induced implicit regularization phenomenon. Specifically, we study both vector and matrix single index models where the link function is nonlinear and unknown, the signal parameter is either a sparse vector or a low-rank symmetric matrix, and the response variable can be heavy-tailed. To gain a better understanding of the role played by implicit regularization without excess technicality, we assume that the distribution of the covariates is known a priori. For both the vector and matrix settings, we construct an over-parameterized least-squares loss function by employing the score function transform and a robust truncation step designed specifically for heavy-tailed data. We propose to estimate the true parameter by applying regularization-free gradient descent to the loss function. When the initialization is close to the origin and the stepsize is sufficiently small, we prove that the obtained solution achieves minimax optimal statistical rates of convergence in both the vector and matrix cases. In addition, our experimental results support our theoretical findings and also demonstrate that our methods empirically outperform classical methods with explicit regularization in terms of both $\ell_2$-statistical rate and variable selection consistency.
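The implicit regularization phenomenon described above can be illustrated with a minimal sketch: plain gradient descent on an over-parameterized least-squares loss, started near the origin with a small stepsize, recovers a sparse vector without any explicit penalty. This toy example assumes a simplified noiseless linear model (identity link) rather than the paper's full single index model with the score function transform and truncation step; all dimensions, constants, and variable names below are illustrative choices, not the paper's actual construction.

```python
import numpy as np

# Sketch of implicit regularization via over-parameterization, under the
# simplifying assumptions stated above (noiseless sparse linear regression).
rng = np.random.default_rng(0)
n, d, s = 200, 400, 5                  # high-dimensional regime: d > n
X = rng.standard_normal((n, d))
w_star = np.zeros(d)
w_star[:s] = 1.0                       # s-sparse true signal
y = X @ w_star

# Over-parameterize the parameter as w = u*u - v*v (Hadamard products),
# then run regularization-free gradient descent on the squared loss.
alpha = 1e-3                           # initialization close to the origin
eta = 1e-2                             # sufficiently small stepsize
u = alpha * np.ones(d)
v = alpha * np.ones(d)
for _ in range(20000):
    w = u * u - v * v
    g = X.T @ (X @ w - y) / n          # gradient of (1/2n)||Xw - y||^2 wrt w
    # Chain rule through w(u, v): dL/du = 2*g*u, dL/dv = -2*g*v
    u, v = u - eta * 2 * g * u, v + eta * 2 * g * v
err = np.linalg.norm(u * u - v * v - w_star)
print(f"recovery error: {err:.4f}")
```

The small initialization is what drives the implicit bias: coordinates of `w` can only grow multiplicatively from near zero, so coordinates aligned with the signal escape first while the rest stay negligible, mimicking a sparsity-inducing penalty.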