We study the generalization properties of random features (RF) regression in high dimensions optimized by stochastic gradient descent (SGD) in the under- and over-parameterized regimes. In this work, we derive precise non-asymptotic error bounds for RF regression under both constant and polynomial-decay step-size SGD settings, and observe the double descent phenomenon both theoretically and empirically. Our analysis shows how to cope with multiple sources of randomness, namely initialization, label noise, and data sampling (as well as stochastic gradients), in the absence of a closed-form solution, and goes beyond the commonly used Gaussian/spherical data assumption. Our theoretical results demonstrate that, with SGD training, RF regression still generalizes well in the interpolation regime, and are able to characterize the double descent behavior via the unimodality of the variance and the monotonic decrease of the bias. Besides, we also prove that constant step-size SGD incurs no loss in convergence rate when compared to the exact minimum-norm interpolator, as a theoretical justification for using SGD in practice.
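For concreteness, a minimal formalization of the setting described above (the notation $f_\theta$, $W$, $\sigma$, $\gamma_0$, and $\zeta$ is illustrative and not necessarily the paper's):
\[
  f_{\theta}(x) = \frac{1}{\sqrt{m}}\,\theta^{\top}\sigma(Wx), \qquad
  \theta_{t+1} = \theta_{t} - \gamma_{t}\,\big(f_{\theta_{t}}(x_{t}) - y_{t}\big)\,\frac{1}{\sqrt{m}}\,\sigma(Wx_{t}),
\]
where $W \in \mathbb{R}^{m \times d}$ is a fixed random feature matrix, $\sigma$ is the activation, the pair $(x_t, y_t)$ is the sample drawn at iteration $t$, and the step size is either constant, $\gamma_{t} \equiv \gamma_{0}$, or polynomially decaying, $\gamma_{t} = \gamma_{0}\,t^{-\zeta}$ with $\zeta \in (0,1)$.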