Stochastic gradient descent (SGD) has become one of the most attractive optimization methods for training large-scale deep neural networks due to its simplicity, low per-iteration computational cost, and good empirical performance. Standard excess risk bounds suggest that SGD needs only a single pass over the training data and that additional passes should not improve performance. Empirically, however, SGD that takes multiple passes over the training data (multi-pass SGD) often achieves much better performance, in terms of excess risk, than SGD that takes only one pass (one-pass SGD), and it is not yet clear how to explain this phenomenon theoretically. In this paper, we provide theoretical evidence for why multiple passes over the training data can improve performance under certain circumstances. Specifically, we consider smooth risk minimization problems whose objective is a non-convex least squares loss. Under the Polyak-Lojasiewicz (PL) condition, we establish a faster convergence rate of the excess risk bound for multi-pass SGD than for one-pass SGD.
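For reference, and under notation not fixed by the abstract (we write $F$ for the population risk, $F^\star = \inf_w F(w)$, $w_T$ for the output iterate, and $\mu > 0$ for the PL constant), the PL condition and the excess risk are commonly stated as
\[
\frac{1}{2}\,\|\nabla F(w)\|^2 \;\ge\; \mu\,\bigl(F(w) - F^\star\bigr) \quad \text{for all } w,
\qquad
\text{excess risk}(w_T) \;=\; \mathbb{E}\bigl[F(w_T)\bigr] - F^\star .
\]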