In the low-data regime, it is difficult to train good supervised models from scratch. Instead, practitioners turn to pre-trained models, leveraging transfer learning. Ensembling is an empirically and theoretically appealing way to construct powerful predictive models, but the predominant approach of training multiple deep networks with different random initialisations collides with the need for transfer via pre-trained weights. In this work, we study different ways of creating ensembles from pre-trained models. We show that the nature of pre-training itself is a performant source of diversity, and propose a practical algorithm that efficiently identifies a subset of pre-trained models for any downstream dataset. The approach is simple: Use nearest-neighbour accuracy to rank pre-trained models, fine-tune the best ones with a small hyperparameter sweep, and greedily construct an ensemble to minimise validation cross-entropy. When evaluated together with strong baselines on 19 different downstream tasks (the Visual Task Adaptation Benchmark), this achieves state-of-the-art performance at a much lower inference budget, even when selecting from over 2,000 pre-trained models. We also assess our ensembles on ImageNet variants and show improved robustness to distribution shift.
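The two selection steps named in the abstract, nearest-neighbour ranking of pre-trained models and greedy ensemble construction on validation cross-entropy, can be sketched as follows. This is a minimal illustration, not the paper's released code: it assumes you already have frozen features from each candidate model and held-out validation probabilities from each fine-tuned candidate, and the function names (`knn_rank`, `greedy_ensemble`) are hypothetical.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import log_loss


def knn_rank(feature_sets, y_train, y_val):
    """Rank candidate pre-trained models by 1-nearest-neighbour accuracy on frozen features.

    feature_sets: dict mapping model name -> (train_features, val_features).
    Returns model names sorted from best to worst kNN validation accuracy.
    """
    scores = []
    for name, (f_train, f_val) in feature_sets.items():
        knn = KNeighborsClassifier(n_neighbors=1).fit(f_train, y_train)
        scores.append((knn.score(f_val, y_val), name))
    return [name for _, name in sorted(scores, reverse=True)]


def greedy_ensemble(val_probs, y_val, max_members=5):
    """Greedily add fine-tuned models (with replacement) to minimise validation cross-entropy.

    val_probs: dict mapping model name -> array of predicted class probabilities on the
    validation set, shape (n_val, n_classes).
    """
    ensemble, current_mean = [], None
    for _ in range(max_members):
        best_name, best_loss, best_mean = None, np.inf, None
        for name, p in val_probs.items():
            # Candidate ensemble prediction: running average of member probabilities.
            mean = p if current_mean is None else (current_mean * len(ensemble) + p) / (len(ensemble) + 1)
            loss = log_loss(y_val, mean)
            if loss < best_loss:
                best_name, best_loss, best_mean = name, loss, mean
        ensemble.append(best_name)
        current_mean = best_mean
    return ensemble
```

In practice, only the top models under `knn_rank` would be fine-tuned (with a small hyperparameter sweep), and their validation probabilities then fed to `greedy_ensemble`; the exact sweep and selection budget are design choices of the paper, not fixed by this sketch.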