Recent progress has been made by the learning theory community in characterizing the generalization error of gradient methods for general convex losses. In this work, we focus on how training longer affects generalization in smooth stochastic convex optimization (SCO) problems. We first provide tight lower bounds for general non-realizable SCO problems. Furthermore, existing upper bound results suggest that sample complexity can be improved when the loss is realizable, i.e., when an optimal solution simultaneously minimizes the loss on every data point. However, this improvement is compromised when the training horizon is long, and matching lower bounds are lacking. Our paper examines this observation by providing excess risk lower bounds for gradient descent (GD) and stochastic gradient descent (SGD) in two realizable settings: (1) realizable with $T = O(n)$, and (2) realizable with $T = \Omega(n)$, where $T$ denotes the number of training iterations and $n$ is the size of the training dataset. These bounds are novel and informative in characterizing the relationship between $T$ and $n$. In the first, short-horizon case, our lower bounds almost tightly match the corresponding upper bounds and provide the first optimality certificates for them. However, in the realizable case with $T = \Omega(n)$, a gap remains between the lower and upper bounds. We conjecture that this gap can be closed by improving the upper bounds, a conjecture supported by our analyses in one-dimensional and linear regression settings.
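For concreteness, the following is a minimal sketch of the standard smooth SCO setup these bounds refer to; the notation ($f$, $\mathcal{D}$, $\mathcal{W}$, $\eta$, $w_t$) is assumed here for illustration and is not taken verbatim from the paper.

% Sketch (assumed standard notation): population risk and excess risk of an output point $\hat{w}$.
\[
  F(w) \;=\; \mathbb{E}_{z \sim \mathcal{D}}\big[ f(w; z) \big],
  \qquad
  \mathrm{ExcessRisk}(\hat{w}) \;=\; F(\hat{w}) - \min_{w \in \mathcal{W}} F(w).
\]
% GD on the empirical risk over $n$ samples $z_1,\dots,z_n$ versus one-pass SGD, run for $T$ iterations with step size $\eta$:
\[
  \text{GD:}\quad w_{t+1} = w_t - \eta\, \nabla \widehat{F}(w_t),
  \quad
  \widehat{F}(w) = \frac{1}{n}\sum_{i=1}^{n} f(w; z_i),
  \qquad
  \text{SGD:}\quad w_{t+1} = w_t - \eta\, \nabla f(w_t; z_{i_t}).
\]
% Realizability (assumed formulation): some $w^\star$ minimizes every individual loss, e.g. $\nabla f(w^\star; z) = 0$ for almost every $z$;
% this is the regime in which improved sample complexity has been shown for short horizons $T = O(n)$.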