Ensembles of neural networks are known to be much more robust and accurate than individual networks. However, training multiple deep networks for model averaging is computationally expensive. In this paper, we propose a method to obtain the seemingly contradictory goal of ensembling multiple neural networks at no additional training cost. We achieve this goal by training a single neural network, converging to several local minima along its optimization path and saving the model parameters. To obtain repeated rapid convergence, we leverage recent work on cyclic learning rate schedules. The resulting technique, which we refer to as Snapshot Ensembling, is simple, yet surprisingly effective. We show in a series of experiments that our approach is compatible with diverse network architectures and learning tasks. It consistently yields lower error rates than state-of-the-art single models at no additional training cost, and compares favorably with traditional network ensembles. On CIFAR-10 and CIFAR-100 our DenseNet Snapshot Ensembles obtain error rates of 3.4% and 17.4% respectively.
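To make the procedure concrete, below is a minimal sketch of the idea described in the abstract: a single network is trained with a cyclic cosine-annealed learning rate, a snapshot of the weights is saved at the end of each cycle (each local minimum), and test-time predictions average the softmax outputs of the saved snapshots. It assumes a standard PyTorch setup; names such as `model`, `train_loader`, `loss_fn`, and `alpha0` are illustrative placeholders, not the authors' released code.

```python
# Sketch of Snapshot Ensembling (assumed PyTorch setup, illustrative names).
import copy
import math
import torch

def snapshot_train(model, train_loader, loss_fn,
                   total_epochs=300, num_cycles=6, alpha0=0.1):
    """Train one network with a cyclic cosine learning rate and save a
    snapshot of its weights at the end of each cycle."""
    optimizer = torch.optim.SGD(model.parameters(), lr=alpha0, momentum=0.9)
    epochs_per_cycle = math.ceil(total_epochs / num_cycles)
    snapshots = []

    for epoch in range(total_epochs):
        # Cosine-annealed learning rate, restarted at the start of every cycle:
        # lr = alpha0/2 * (cos(pi * t / epochs_per_cycle) + 1), t = position in cycle.
        t = epoch % epochs_per_cycle
        lr = 0.5 * alpha0 * (math.cos(math.pi * t / epochs_per_cycle) + 1)
        for group in optimizer.param_groups:
            group["lr"] = lr

        model.train()
        for x, y in train_loader:
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()

        # End of a cycle: the learning rate has annealed to ~0, so the model
        # has converged to a local minimum; store a copy of its parameters.
        if (epoch + 1) % epochs_per_cycle == 0:
            snapshots.append(copy.deepcopy(model.state_dict()))
    return snapshots

def snapshot_predict(model, snapshots, x):
    """Ensemble prediction: average the softmax outputs of the saved snapshots."""
    probs = []
    model.eval()
    with torch.no_grad():
        for state in snapshots:
            model.load_state_dict(state)
            probs.append(torch.softmax(model(x), dim=1))
    return torch.stack(probs).mean(dim=0)
```

Because all snapshots come from one training run, the total optimization budget equals that of a single model; only the snapshot copies add (modest) storage cost.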