Recent advances in the theoretical understanding of SGD led to a formula for the optimal batch size minimizing the number of effective data passes, i.e., the number of iterations times the batch size. However, this formula is of no practical value as it depends on the knowledge of the variance of the stochastic gradients evaluated at the optimum. In this paper we design a practical SGD method capable of learning the optimal batch size adaptively throughout its iterations for strongly convex and smooth functions. Our method does this provably, and in our experiments with synthetic and real data it robustly exhibits nearly optimal behaviour; that is, it works as if the optimal batch size was known a priori. Further, we generalize our method to several new batch strategies not considered in the literature before, including a sampling scheme suitable for distributed implementations.
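To make the mechanism concrete, the following is a minimal Python (NumPy) sketch of the general idea: run mini-batch SGD on a strongly convex problem, estimate the variance of the stochastic gradients on the fly, and feed the estimate into a batch-size rule. The synthetic least-squares problem, the step size, and in particular the proportionality rule `tau = round(2.0 * sigma2)` are illustrative assumptions; the paper's actual formula and adaptation scheme are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic strongly convex problem: f(x) = (1/2n)||Ax - b||^2 + (mu/2)||x||^2.
n, d, mu = 1_000, 20, 0.1
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

def per_sample_grads(x, idx):
    # Rows are the individual stochastic gradients a_i (a_i^T x - b_i) + mu x.
    return A[idx] * (A[idx] @ x - b[idx])[:, None] + mu * x

x = np.zeros(d)
tau, lr = 8, 0.05  # initial batch size and step size (illustrative choices)
for it in range(500):
    idx = rng.choice(n, size=tau, replace=False)
    g_i = per_sample_grads(x, idx)
    g = g_i.mean(axis=0)
    x -= lr * g

    # Sample variance of the stochastic gradients at the current iterate,
    # used as a proxy for the unknown variance at the optimum (the quantity
    # the theoretical formula needs but practice cannot access directly).
    sigma2 = np.sum((g_i - g) ** 2) / (tau - 1)

    # Hypothetical stand-in for an optimal-batch-size formula: the batch
    # size tracks the estimated variance; the constant 2.0 is arbitrary.
    tau = int(np.clip(np.round(2.0 * sigma2), 2, n))

print("final batch size:", tau)
```

Because the variance at the optimum is unknown, the sketch measures the variance at the current iterate instead, a proxy that becomes increasingly accurate as the iterates converge.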