We propose to optimize neural networks with a uniformly distributed random learning rate. The associated stochastic gradient descent algorithm can be approximated by continuous stochastic equations and analyzed within the Fokker-Planck formalism. In the small-learning-rate regime, the training process is characterized by an effective temperature that depends on the average learning rate, the mini-batch size, and the momentum of the optimization algorithm. By comparing the random learning rate protocol with cyclic and constant protocols, we suggest that the random choice is generically the best strategy in the small-learning-rate regime, yielding better regularization without extra computational cost. We provide supporting evidence through experiments on both shallow fully-connected and deep convolutional neural networks for image classification on the MNIST and CIFAR10 datasets.
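As a concrete illustration of the protocol summarized above, the sketch below implements mini-batch SGD with momentum in which a fresh learning rate is drawn uniformly at every update, so that its mean equals a target average learning rate. The interval parametrization, the hyperparameter names (`eta_mean`, `spread`), and the gradient oracle `grad_fn` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def sgd_random_lr(params, grad_fn, data, *, eta_mean=0.01, spread=1.0,
                  momentum=0.9, batch_size=32, n_steps=1000, seed=0):
    """Mini-batch SGD with momentum and a uniformly distributed random
    learning rate drawn independently at each step from
    [eta_mean * (1 - spread), eta_mean * (1 + spread)],
    so that the average learning rate is eta_mean (assumed parametrization).
    grad_fn(params, batch) must return the mini-batch gradient."""
    rng = np.random.default_rng(seed)
    velocity = np.zeros_like(params)
    for _ in range(n_steps):
        # Sample a mini-batch and a per-step random learning rate.
        batch = data[rng.choice(len(data), size=batch_size, replace=False)]
        eta = rng.uniform(eta_mean * (1 - spread), eta_mean * (1 + spread))
        # Momentum update with the randomly drawn step size.
        velocity = momentum * velocity - eta * grad_fn(params, batch)
        params = params + velocity
    return params
```

In this sketch the only change relative to a constant-learning-rate baseline is the single `rng.uniform` draw per step, which is why the random protocol incurs no extra computational cost.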