Numerical evaluations have definitively shown that, for deep learning optimizers such as stochastic gradient descent, momentum, and adaptive methods, the number of steps needed to train a deep neural network halves for each doubling of the batch size, and that there is a region of diminishing returns beyond the critical batch size. In this paper, we determine the actual critical batch size as the global minimizer of the stochastic first-order oracle (SFO) complexity of the optimizer. To prove the existence of the actual critical batch size, we derive lower and upper bounds on the SFO complexity and prove that critical batch sizes exist in the sense of minimizing each of these bounds. This implies that, if the SFO complexity fits the lower and upper bounds, then the existence of these critical batch sizes guarantees the existence of the actual critical batch size. We also discuss the conditions needed for the SFO complexity to fit the lower and upper bounds, and we provide numerical results that support our theoretical findings.
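For concreteness, one standard way to formalize the quantity being minimized (sketched here with assumed notation; the paper's exact symbols and setting may differ) is to write the SFO complexity as the number of steps times the batch size and take the critical batch size to be its global minimizer:

\[
  N(b) := b\,K(b),
  \qquad
  b^{\star} := \operatorname*{arg\,min}_{b} N(b),
\]

where $K(b)$ denotes the number of steps the optimizer needs to reach a target accuracy when using batch size $b$, so that $N(b)$ counts the total number of stochastic gradient computations. In the regime where doubling $b$ roughly halves $K(b)$, $N(b)$ remains nearly constant; beyond $b^{\star}$ the halving no longer holds, $N(b)$ increases, and larger batches yield diminishing returns.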