Previous studies observed that finetuned models may be better base models than the vanilla pretrained model. Such a model, finetuned on some source dataset, may provide a better starting point for a new finetuning process on a desired target dataset. Here, we perform a systematic analysis of this \emph{intertraining} scheme, over a wide range of English classification tasks. Surprisingly, our analysis suggests that the potential intertraining gain can be analyzed \emph{independently} for the target dataset under consideration, and for the base model being considered as a starting point. This is in contrast to the current perception that the alignment between the target dataset and the source dataset used to generate the base model is a major factor in determining intertraining success. We analyze the different aspects that contribute to each. Furthermore, we leverage our analysis to propose a practical and efficient approach to determine if and how to select a base model in real-world settings. Last, we release a regularly updated ranking of the best models in the HuggingFace hub per architecture, at \url{https://ibm.github.io/model-recycling/}.