Recent empirical and theoretical studies have shown that many learning algorithms -- from linear regression to neural networks -- can have test performance that is non-monotonic in quantities such as the sample size and model size. This striking phenomenon, often referred to as "double descent", has raised the question of whether we need to rethink our current understanding of generalization. In this work, we study whether the double-descent phenomenon can be avoided by using optimal regularization. Theoretically, we prove that for certain linear regression models with isotropic data distributions, optimally-tuned $\ell_2$ regularization achieves monotonic test performance as we grow either the sample size or the model size. We also demonstrate empirically that optimally-tuned $\ell_2$ regularization can mitigate double descent for more general models, including neural networks. Our results suggest that it may also be informative to study the test risk scalings of various algorithms in the context of appropriately tuned regularization.