Transfer learning aims to exploit pre-trained models for more efficient follow-up training on a wide range of downstream tasks and datasets, enabling successful training even on small data. A recent line of work posits strong benefits for model generalization and transfer when model size, data size, and compute budget are all increased during pre-training. However, it remains largely unclear whether the transfer improvements observed at larger scale also hold when source and target data distributions are far apart. In this work, we conduct large-scale pre-training on large source datasets of either natural (ImageNet-21k/1k) or medical chest X-Ray images, and compare full and few-shot transfer to different target datasets from both the natural and medical imaging domains. Our observations provide evidence that, while pre-training and transfer on closely related datasets show a clear benefit from increasing model and data size during pre-training, no such benefit is clearly visible when source and target datasets are further apart. These observations hold for both full and few-shot transfer and indicate that scaling laws predicting improved generalization and transfer with increasing model and data size are incomplete: to correctly predict the effect of pre-training scale on transfer, they should be revised to take the type and proximity of the source and target data into account. Remarkably, in full-shot transfer to a large chest X-Ray imaging target (PadChest), the largest model pre-trained on ImageNet-21k slightly outperforms the best models pre-trained on large chest X-Ray data. This indicates that high-quality models for domain-specific transfer can be obtained even without access to large domain-specific data, by instead pre-training on comparably very large, generic source data.
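To make the few-shot transfer protocol concrete, below is a minimal sketch of k-shot fine-tuning in PyTorch. It uses torchvision's ImageNet-pre-trained ResNet-50 and CIFAR-10 purely as stand-ins for the pre-trained models and target datasets studied here; the model choice, dataset, k, and hyperparameters are illustrative assumptions, not the setup evaluated in this work. Full transfer corresponds to fine-tuning on the entire target training set instead of the k-shot subset.

```python
# Illustrative k-shot transfer sketch (assumes torchvision >= 0.13 for the
# weights API). Not the authors' code: ResNet-50 and CIFAR-10 stand in for
# the larger pre-trained models and the natural/medical targets of the paper.
import random
from collections import defaultdict

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Subset
from torchvision import datasets, models, transforms


def k_shot_indices(labels, k, seed=0):
    """Pick k example indices per class to form the few-shot training subset."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, label in enumerate(labels):
        by_class[label].append(idx)
    return [i for idxs in by_class.values() for i in rng.sample(idxs, k)]


def main():
    tfm = transforms.Compose([
        transforms.Resize(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])
    # Target dataset (stand-in); use the full set instead for full transfer.
    train_set = datasets.CIFAR10("data", train=True, download=True, transform=tfm)
    few_shot = Subset(train_set, k_shot_indices(train_set.targets, k=10))

    # Start from pre-trained source weights and swap in a fresh task head.
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    model.fc = nn.Linear(model.fc.in_features, 10)

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.to(device).train()
    opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()

    # Short fine-tuning schedule, chosen arbitrarily for illustration.
    loader = DataLoader(few_shot, batch_size=32, shuffle=True)
    for epoch in range(5):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()


if __name__ == "__main__":
    main()
```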