Recent results suggest that reinitializing a subset of the parameters of a neural network during training can improve generalization, particularly for small training sets. We study the impact of different reinitialization methods in several convolutional architectures across 12 benchmark image classification datasets, analyzing their potential gains and highlighting limitations. We also introduce a new layerwise reinitialization algorithm that outperforms previous methods and suggest explanations for the observed improvement in generalization. First, we show that layerwise reinitialization increases the margin on the training examples without increasing the norm of the weights, thereby improving margin-based generalization bounds for neural networks. Second, we demonstrate that it settles in flatter local minima of the loss surface. Third, it encourages learning general rules and discourages memorization by placing emphasis on the lower layers of the neural network. Our takeaway message is that the accuracy of convolutional neural networks can be improved for small datasets using bottom-up layerwise reinitialization, where the number of reinitialized layers may vary depending on the available compute budget.
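To make the bottom-up layerwise scheme concrete, the following is a minimal PyTorch-style sketch, not the authors' implementation. It assumes the network is an `nn.Sequential` of blocks and that a separate training routine (`train`, a hypothetical placeholder) re-trains the model after each reinitialization step.

```python
import torch.nn as nn


def reinitialize_above(model: nn.Sequential, keep_blocks: int) -> None:
    """Reinitialize every block above index `keep_blocks`, leaving the
    bottom `keep_blocks` blocks untouched (bottom-up layerwise scheme)."""
    for i, block in enumerate(model):
        if i < keep_blocks:
            continue  # preserve the already-trained lower blocks
        for module in block.modules():
            # Conv2d, Linear, BatchNorm2d, etc. expose reset_parameters()
            if hasattr(module, "reset_parameters"):
                module.reset_parameters()


# Hypothetical usage: after an initial training run, progressively keep
# more of the lower blocks and reinitialize (then re-train) the rest.
# The number of rounds can be chosen to fit the available compute budget.
#
# for k in range(1, num_blocks):
#     reinitialize_above(model, keep_blocks=k)
#     train(model, train_loader, epochs=num_epochs)  # placeholder loop
```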