This paper studies the generalization properties of random features (RF) regression in high dimensions when optimized by stochastic gradient descent (SGD). In this regime, we derive precise non-asymptotic error bounds for RF regression under both constant and adaptive step-size SGD settings, and observe the double descent phenomenon both theoretically and empirically. Our analysis shows how to cope with multiple sources of randomness, namely initialization, label noise, and data sampling (as well as stochastic gradients), in the absence of a closed-form solution, and also goes beyond the commonly used Gaussian/spherical data assumption. Our theoretical results demonstrate that, with SGD training, RF regression still generalizes well for interpolation learning, and they characterize the double descent behavior via the unimodality of the variance and the monotonic decrease of the bias. Furthermore, we prove that constant step-size SGD incurs no loss in convergence rate compared to the exact minimum-norm interpolator, which provides a theoretical justification for using SGD in practice.
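To make the setting concrete, the following is a minimal sketch (not the paper's exact model or parameter choices): random-features regression with frozen ReLU features, where only the linear output layer is trained by constant step-size SGD, compared against the minimum-norm interpolator obtained from the pseudoinverse. All names, dimensions, and the step size below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: n samples, d-dimensional inputs, N random features (N > n
# so that exact interpolation of the training data is possible).
n, d, N = 200, 50, 400
X = rng.standard_normal((n, d))
beta = rng.standard_normal(d) / np.sqrt(d)
y = X @ beta + 0.1 * rng.standard_normal(n)      # noisy linear target (assumed)

# Frozen random first-layer weights and ReLU random features.
W = rng.standard_normal((d, N)) / np.sqrt(d)
Phi = np.maximum(X @ W, 0.0) / np.sqrt(N)        # feature matrix, shape (n, N)

# Constant step-size SGD on the squared loss, single-sample updates.
theta = np.zeros(N)
step = 1.0
for epoch in range(500):
    for i in rng.permutation(n):
        residual = Phi[i] @ theta - y[i]
        theta -= step * residual * Phi[i]

# Minimum-norm interpolator of the same features, for comparison.
theta_mn = np.linalg.pinv(Phi) @ y

print("SGD      train MSE:", np.mean((Phi @ theta - y) ** 2))
print("min-norm train MSE:", np.mean((Phi @ theta_mn - y) ** 2))
```

Varying N around n in such a sketch is one informal way to observe the double descent curve of the test error that the paper analyzes theoretically.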