Transfer learning aims to exploit pre-trained models for more efficient follow-up training on a wide range of downstream tasks and datasets, enabling successful training even on small data. A recent line of work posits strong benefits for model generalization and transfer when model size, data size, and compute budget are increased during pre-training. However, it remains largely unclear whether the transfer improvements observed with increased scale also hold when the source and target data distributions are far apart from each other. In this work, we conduct large-scale pre-training on large source datasets of either natural (ImageNet-21k/1k) or medical chest X-Ray images and compare full and few-shot transfer using different target datasets from both the natural and medical imaging domains. Our observations provide evidence that while pre-training and transfer on closely related datasets show a clear benefit from increasing model and data size during pre-training, such benefits are not clearly visible when the source and target datasets are further apart. These observations hold across both full and few-shot transfer and indicate that scaling laws which predict improved generalization and transfer with increasing model and data size are incomplete: to correctly predict the effect of varying model and data size during pre-training on transfer, they should also take into account how distinct the source and target data distributions are. (A repository for reproducing the experiments will be made available.)