Fine-tuning a pre-trained model on the target data is the dominant paradigm in many deep learning applications, especially for small data sets. However, recent studies have empirically shown that, for some vision tasks, training from scratch achieves final performance no worse than this pre-training strategy once the number of training iterations is sufficiently increased. In this work, we revisit this phenomenon through the lens of generalization analysis, a standard tool in learning theory. Our result reveals that the final prediction accuracy may depend only weakly on the pre-trained model, especially when the number of training iterations is large. This observation motivates us to leverage the pre-training data itself during fine-tuning, since this data is typically still available at fine-tuning time. The generalization result obtained with pre-training data shows that the final performance on a target task can be improved when appropriate pre-training data is included in fine-tuning. Guided by this theoretical finding, we propose a novel selection strategy that chooses a subset of the pre-training data to improve generalization on the target task. Extensive experimental results on image classification tasks over 8 benchmark data sets verify the effectiveness of the proposed data-selection-based fine-tuning pipeline.
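To make the pipeline concrete, below is a minimal sketch of one possible instantiation: pre-training examples are ranked by the cosine similarity of their features to the mean target-task feature, and the top-scoring subset is mixed into the fine-tuning data. The similarity criterion, the function name select_pretraining_subset, and all parameters are illustrative assumptions only, not the paper's actual selection strategy.

```python
import torch
import torch.nn.functional as F
from torch.utils.data import ConcatDataset, Subset

def select_pretraining_subset(model, pretrain_loader, target_loader, k, device="cpu"):
    """Hypothetical selection rule: keep the k pre-training examples whose
    features are most similar (cosine) to the mean target-task feature.
    Assumes model(x) returns a feature tensor and that pretrain_loader
    iterates its dataset in order (no shuffling), so scores align with
    dataset indices."""
    model.eval().to(device)
    with torch.no_grad():
        # Mean (normalized) feature of the target-task data.
        target_feats = torch.cat([model(x.to(device)) for x, _ in target_loader])
        target_center = F.normalize(target_feats.mean(dim=0), dim=0)

        # Score every pre-training example by similarity to the target center.
        scores = []
        for x, _ in pretrain_loader:
            feats = F.normalize(model(x.to(device)), dim=1)
            scores.append(feats @ target_center)  # per-example cosine similarity
        scores = torch.cat(scores)

    return scores.topk(k).indices  # indices into the pre-training dataset


# Usage sketch: fine-tune on the target set plus the selected pre-training subset.
# idx = select_pretraining_subset(model, pretrain_loader, target_loader, k=5000)
# combined = ConcatDataset([target_set, Subset(pretrain_set, idx.tolist())])
```

The subset size k and the feature-similarity criterion are free design choices in this sketch; any scoring rule that identifies pre-training examples relevant to the target task could be substituted.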