Stochastic gradient descent (SGD) has been demonstrated to generalize well in many deep learning applications. In practice, one often runs SGD with a geometrically decaying stepsize, i.e., a constant initial stepsize followed by multiple geometric stepsize decays, and uses the last iterate as the output. This kind of SGD is known to be nearly minimax optimal for classical finite-dimensional linear regression problems (Ge et al., 2019), and provably outperforms SGD with polynomially decaying stepsize in terms of the statistical minimax rates. However, a sharp analysis for the last iterate of SGD with decaying stepsize in the overparameterized setting remains open. In this paper, we provide a problem-dependent analysis of the last iterate risk bounds of SGD with decaying stepsize for (overparameterized) linear regression problems. In particular, for SGD with geometrically decaying stepsize (or tail geometrically decaying stepsize), we prove nearly matching upper and lower bounds on the excess risk. Our results demonstrate the generalization ability of SGD for a wide class of overparameterized problems, and can recover the minimax optimal results up to logarithmic factors in the classical regime. Moreover, we provide an excess risk lower bound for SGD with polynomially decaying stepsize and illustrate the advantage of geometrically decaying stepsize in an instance-wise manner, which complements the minimax rate comparison made in previous work.
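For concreteness, below is a minimal Python/NumPy sketch of the schedule described above: a constant initial stepsize that is halved at roughly log(n) equally spaced phase boundaries, with the last iterate returned as the output. The function name, the halving factor, and the choice of about log(n) phases are illustrative assumptions rather than the paper's exact schedule; a tail geometrically decaying variant would hold the stepsize constant over an initial fraction of the steps before the decay phases begin.

```python
import numpy as np

def sgd_geometric_decay(X, y, gamma0, num_phases=None):
    """Sketch: one-pass SGD for least squares with a geometrically
    decaying (step-decay) stepsize, returning the last iterate."""
    n, d = X.shape
    if num_phases is None:
        num_phases = max(1, int(np.log2(n)))  # ~log(n) halvings (illustrative choice)
    phase_len = max(1, n // num_phases)

    w = np.zeros(d)
    gamma = gamma0
    for t in range(n):
        if t > 0 and t % phase_len == 0:
            gamma *= 0.5                      # halve the stepsize at each phase boundary
        grad = (X[t] @ w - y[t]) * X[t]       # stochastic gradient of 0.5 * (x^T w - y)^2
        w -= gamma * grad
    return w                                  # last iterate (no averaging)
```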