Ensembling deep learning models is a shortcut to deploying them in new scenarios, since it avoids tuning network architectures, losses, and training algorithms from scratch. However, it is difficult to collect enough accurate and diverse models in a single training run. This paper proposes Auto-Ensemble (AE), which collects checkpoints of a deep learning model and ensembles them automatically via an adaptive learning rate scheduling algorithm. The advantage of this method is that it makes the model converge to various local optima by scheduling the learning rate within one training run. When the number of local optimal solutions tends to saturate, all the collected checkpoints are used for the ensemble. Our method is universal and can be applied to various scenarios. Experimental results on multiple datasets and neural networks demonstrate that it is effective and competitive, especially on few-shot learning. In addition, we propose a method to measure the distance among models, which ensures the accuracy and diversity of the collected models.
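The following is a minimal PyTorch sketch of the checkpoint-collection-and-ensemble idea described above. The cyclic cosine schedule, the fixed cycle count, and all function names here are simplifying assumptions for illustration; the paper's adaptive scheduler and its saturation test for local optima are not reproduced.

```python
# Sketch: collect one checkpoint per learning-rate cycle, then ensemble them.
# The cosine restart schedule and fixed num_cycles are assumptions, not the
# paper's exact adaptive algorithm.
import copy
import math

import torch
import torch.nn.functional as F


def cyclic_lr(step, steps_per_cycle, lr_max=0.1):
    """Cosine-annealed learning rate that restarts each cycle, letting the
    model escape one local optimum and settle into another."""
    t = (step % steps_per_cycle) / steps_per_cycle
    return 0.5 * lr_max * (1 + math.cos(math.pi * t))


def collect_snapshots(model, loss_fn, loader, num_cycles=5):
    """Train with a restarting schedule (one cycle per epoch here) and keep a
    checkpoint at the end of each cycle, when the learning rate is lowest."""
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    steps_per_cycle = len(loader)
    snapshots = []
    for _ in range(num_cycles):  # a saturation test could replace this bound
        for step, (x, y) in enumerate(loader):
            for g in opt.param_groups:  # apply the scheduled learning rate
                g["lr"] = cyclic_lr(step, steps_per_cycle)
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
        snapshots.append(copy.deepcopy(model.state_dict()))
    return snapshots


def ensemble_predict(model, snapshots, x):
    """Average the softmax outputs of all collected checkpoints."""
    probs = []
    with torch.no_grad():
        for state in snapshots:
            model.load_state_dict(state)
            probs.append(F.softmax(model(x), dim=-1))
    return torch.stack(probs).mean(dim=0)
```

One snapshot per cycle is the simplest policy; averaging probabilities rather than logits is likewise one common choice among several for combining the checkpoints.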
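The abstract also mentions measuring the distance among models to verify their diversity. As a hedged illustration only, one natural choice is Euclidean distance in weight space between checkpoints; this specific metric is an assumption, not necessarily the paper's definition.

```python
# Sketch: pairwise L2 distance between checkpoints in weight space.
# Using plain Euclidean distance is an assumption for illustration.
import torch


def flatten_state(state_dict):
    """Concatenate all tensors of a checkpoint into one flat vector."""
    return torch.cat([p.detach().reshape(-1).float()
                      for p in state_dict.values()])


def pairwise_distances(snapshots):
    """L2 distances between every pair of checkpoints; larger values suggest
    the models sit in different local optima, i.e. more diversity."""
    vecs = [flatten_state(s) for s in snapshots]
    n = len(vecs)
    return [[torch.dist(vecs[i], vecs[j]).item() for j in range(n)]
            for i in range(n)]
```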