Self-supervised learning has enabled significant improvements on natural image benchmarks. However, there has been comparatively little work in this area in the medical imaging domain: the optimal models among the various options have not yet been determined, and few studies have evaluated the current applicability limits of novel self-supervised methods. In this paper, we evaluate a range of current contrastive self-supervised methods on out-of-distribution generalization to assess their applicability to medical imaging. We show that self-supervised models are not as robust as their results on natural imaging benchmarks would suggest, and can be outperformed by supervised learning with dropout. We also show that this behavior can be countered with extensive augmentation. Our results highlight the need for out-of-distribution generalization standards and benchmarks before self-supervised methods can be adopted by the medical imaging community.