Recent advancements in self-supervised learning have demonstrated that effective visual representations can be learned from unlabeled images. This has led to increased interest in applying self-supervised learning to the medical domain, where unlabeled images are abundant and labeled images are difficult to obtain. However, most self-supervised learning approaches are formulated as image-level discriminative or generative proxy tasks, which may not capture the finer-level representations necessary for dense prediction tasks such as multi-organ segmentation. In this paper, we propose a novel contrastive learning framework that integrates Localized Region Contrast (LRC) to enhance existing self-supervised pre-training methods for medical image segmentation. Our approach identifies super-pixels using Felzenszwalb's algorithm and performs local contrastive learning with a novel contrastive sampling loss. Through extensive experiments on three multi-organ segmentation datasets, we demonstrate that integrating LRC into an existing self-supervised method in a limited-annotation setting significantly improves segmentation performance. Moreover, we show that LRC can also be applied to fully-supervised pre-training methods to further boost performance.
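To make the described pipeline concrete, the sketch below illustrates the two ingredients named above: super-pixel extraction with Felzenszwalb's algorithm (via scikit-image) and a region-level contrastive objective over the resulting local regions. The function names, pooling scheme, hyperparameters, and InfoNCE-style loss are illustrative assumptions under the stated setup (a dense feature map aligned with the super-pixel label map), not the paper's exact formulation.

```python
# Minimal sketch: super-pixel extraction + region-level contrastive loss.
# Assumes the feature map and label map share the same spatial resolution.
import torch
import torch.nn.functional as F
from skimage.segmentation import felzenszwalb


def superpixel_regions(image_np, scale=100, sigma=0.5, min_size=50):
    """Partition an image (H, W) or (H, W, C) into super-pixels with
    Felzenszwalb's graph-based algorithm; returns an (H, W) label map."""
    return felzenszwalb(image_np, scale=scale, sigma=sigma, min_size=min_size)


def region_features(feat_map, labels):
    """Average-pool a dense feature map (C, H, W) over each super-pixel,
    giving one L2-normalized embedding per local region (R, C)."""
    labels = torch.as_tensor(labels, device=feat_map.device)
    regions = []
    for r in labels.unique():
        mask = labels == r
        regions.append(feat_map[:, mask].mean(dim=1))
    return F.normalize(torch.stack(regions), dim=1)


def region_contrastive_loss(regions_a, regions_b, temperature=0.1):
    """InfoNCE-style objective: the i-th region embedding from view A is
    pulled toward the i-th region from view B and pushed away from the rest."""
    logits = regions_a @ regions_b.t() / temperature
    targets = torch.arange(regions_a.size(0), device=regions_a.device)
    return F.cross_entropy(logits, targets)
```

In use, two augmented views of the same slice would be encoded into dense feature maps, pooled over the shared super-pixel map with `region_features`, and contrasted with `region_contrastive_loss`; the hypothetical `temperature` and Felzenszwalb parameters would be tuned per dataset.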