The impressive performance of deep learning architectures is associated with a massive increase in model complexity. Millions of parameters need to be tuned, with training and inference time scaling accordingly. But is massive fine-tuning necessary? In this paper, focusing on image classification, we consider a simple transfer learning approach that exploits pretrained convolutional features as input for a fast kernel method. We refer to this approach as top-tuning, since only the kernel classifier is trained. By performing more than 2500 training processes, we show that this top-tuning approach provides accuracy comparable to fine-tuning, with a training time that is between one and two orders of magnitude smaller. These results suggest that top-tuning is a useful alternative to fine-tuning on small/medium datasets, especially when training efficiency is crucial.
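A minimal sketch of the top-tuning idea, using scikit-learn's Nyström approximation with a logistic regression head as a stand-in for the fast kernel method; the random feature matrix below is a hypothetical placeholder for frozen pretrained convolutional features (e.g. pooled activations from a pretrained CNN), which in practice would be extracted once and cached:

```python
import numpy as np
from sklearn.kernel_approximation import Nystroem
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Placeholder for frozen pretrained convolutional features:
# in the top-tuning setting these come from a fixed, pretrained
# backbone and are never updated during training.
X = rng.normal(size=(200, 64))
y = (X[:, 0] > 0).astype(int)  # toy binary labels

# Top-tuning: only this kernel classifier is trained;
# the (hypothetical) backbone stays frozen.
clf = make_pipeline(
    Nystroem(kernel="rbf", n_components=100, random_state=0),
    LogisticRegression(max_iter=1000),
)
clf.fit(X, y)
print(clf.score(X, y))
```

Because no gradients flow through the backbone, training reduces to fitting a shallow kernel model on cached features, which is where the one-to-two-orders-of-magnitude speedup over full fine-tuning comes from.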