Transfer learning is a standard technique to improve performance on tasks with limited data. However, for medical imaging, the value of transfer learning is less clear. This is likely due to the large domain mismatch between the usual natural-image pre-training (e.g. ImageNet) and medical images. Yet recent advances in transfer learning have shown substantial improvements from scale. We investigate whether modern methods can change the fortunes of transfer learning for medical imaging. To this end, we study the class of large-scale pre-trained networks presented by Kolesnikov et al. on three diverse imaging tasks: chest radiography, mammography, and dermatology. We study both transfer performance and critical properties for deployment in the medical domain, including out-of-distribution generalization, data efficiency, sub-group fairness, and uncertainty estimation. Interestingly, we find that for some of these properties transfer from natural to medical images is indeed extremely effective, but only when performed at sufficient scale.