We study the generalization properties of random features (RF) regression in high dimensions optimized by stochastic gradient descent (SGD). In this regime, we derive precise non-asymptotic error bounds for RF regression under both constant and polynomial-decay step-size SGD settings, and observe the double descent phenomenon both theoretically and empirically. Our analysis shows how to cope with multiple sources of randomness (initialization, label noise, and data sampling, as well as stochastic gradients) when no closed-form solution is available, and also goes beyond the commonly used Gaussian/spherical data assumption. Our theoretical results demonstrate that, with SGD training, RF regression still generalizes well in the interpolation regime, and characterize the double descent behavior through the unimodality of the variance and the monotonic decrease of the bias. Furthermore, we prove that the constant step-size SGD setting incurs no loss in convergence rate compared to the exact minimum-norm interpolator, which provides a theoretical justification for using SGD in practice.
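To make the setting concrete, the following is a minimal sketch (not the paper's implementation) of random features regression trained by SGD with a step size gamma_t = step0 / (t+1)^decay, so decay=0 recovers the constant step-size setting and decay>0 the polynomial-decay setting. The ReLU feature map, hyperparameters, and function names are illustrative assumptions.

```python
import numpy as np

def random_features(X, W):
    """Illustrative ReLU random feature map phi(x) = max(Wx, 0) / sqrt(m)."""
    m = W.shape[0]
    return np.maximum(X @ W.T, 0.0) / np.sqrt(m)

def sgd_rf_regression(X, y, n_features=512, n_passes=5,
                      step0=0.5, decay=0.0, seed=0):
    """Train the output layer of RF regression with single-sample SGD
    under the step size gamma_t = step0 / (t + 1)**decay."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.standard_normal((n_features, d))   # random, untrained first layer
    theta = np.zeros(n_features)               # trained second-layer weights
    t = 0
    for _ in range(n_passes):
        for i in rng.permutation(n):
            phi = random_features(X[i:i + 1], W)[0]
            grad = (phi @ theta - y[i]) * phi   # stochastic squared-loss gradient
            theta -= step0 / (t + 1) ** decay * grad
            t += 1
    return W, theta

# Toy usage: noisy linear target; test MSE serves as a proxy for excess risk.
rng = np.random.default_rng(1)
Xtr, Xte = rng.standard_normal((200, 10)), rng.standard_normal((500, 10))
w_star = rng.standard_normal(10)
ytr = Xtr @ w_star + 0.1 * rng.standard_normal(200)
yte = Xte @ w_star
W, theta = sgd_rf_regression(Xtr, ytr, n_features=1024, decay=0.5)
print("test MSE:", np.mean((random_features(Xte, W) @ theta - yte) ** 2))
```

Sweeping n_features across the interpolation threshold (here, around the number of training samples) and plotting the resulting test MSE is the kind of experiment in which the double descent curve described above can be observed.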