Recently, there has been much interest in studying the convergence rates of without-replacement SGD, and proving that it is faster than with-replacement SGD in the worst case. However, known lower bounds ignore the problem's geometry, including its condition number, whereas the upper bounds explicitly depend on it. Perhaps surprisingly, we prove that when the condition number is taken into account, without-replacement SGD \emph{does not} significantly improve on with-replacement SGD in terms of worst-case bounds, unless the number of epochs (passes over the data) is larger than the condition number. Since many problems in machine learning and other areas are both ill-conditioned and involve large datasets, this indicates that without-replacement sampling does not necessarily improve over with-replacement sampling for realistic iteration budgets. We show this by providing new lower and upper bounds which are tight (up to log factors) for quadratic problems with commuting quadratic terms, precisely quantifying the dependence on the problem parameters.
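For concreteness, the sketch below (ours, not the paper's construction) contrasts the two sampling schemes on a toy one-dimensional quadratic finite-sum problem. The function `sgd`, the step size, and the problem instance are all illustrative assumptions; the actual comparison rests on the bounds stated above, not on this toy experiment.

```python
# A minimal illustrative sketch: with- vs. without-replacement SGD on a toy
# 1-d objective F(w) = (1/n) * sum_i [(a_i/2) w^2 + b_i w], with sum_i b_i = 0
# so the minimizer is w* = 0; max(a_i)/min(a_i) plays the role of the
# condition number. All names (sgd, a, b, lr) are illustrative assumptions.
import numpy as np

def sgd(a, b, w0, lr, epochs, without_replacement, rng):
    """Constant-step SGD on F(w) = mean_i [(a_i/2) w^2 + b_i w].

    without_replacement=True reshuffles the index order at every epoch
    (random reshuffling); False draws an index i.i.d. uniformly per step.
    """
    n, w = len(a), w0
    for _ in range(epochs):
        idx = rng.permutation(n) if without_replacement else rng.integers(0, n, size=n)
        for i in idx:
            w -= lr * (a[i] * w + b[i])  # gradient of (a_i/2) w^2 + b_i w
    return w

rng = np.random.default_rng(0)
n = 100
a = np.linspace(1.0, 50.0, n)   # condition-number-50 instance
b = rng.standard_normal(n)
b -= b.mean()                   # center so the overall minimizer is w* = 0
for wo in (False, True):
    errs = [abs(sgd(a, b, 1.0, 1e-3, 20, wo, np.random.default_rng(s)))
            for s in range(20)]
    label = "without-replacement" if wo else "with-replacement"
    print(f"{label}: mean |w - w*| = {np.mean(errs):.4f}")
```

Centering the b_i keeps the overall minimizer at zero while the individual gradients remain nonzero at the optimum; this is the regime where the ordering effects that separate the two sampling schemes can appear at all.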