A key requirement for the success of supervised deep learning is a large labeled dataset, a condition that is difficult to meet in medical image analysis. Self-supervised learning (SSL) can help in this regard by providing a strategy to pre-train a neural network with unlabeled data, followed by fine-tuning for a downstream task with limited annotations. Contrastive learning, a particular variant of SSL, is a powerful technique for learning image-level representations. In this work, we propose strategies for extending the contrastive learning framework for segmentation of volumetric medical images in the semi-supervised setting with limited annotations, by leveraging domain-specific and problem-specific cues. Specifically, we propose (1) novel contrasting strategies that leverage structural similarity across volumetric medical images (domain-specific cue) and (2) a local version of the contrastive loss to learn distinctive representations of local regions that are useful for per-pixel segmentation (problem-specific cue). We carry out an extensive evaluation on three Magnetic Resonance Imaging (MRI) datasets. In the limited annotation setting, the proposed method yields substantial improvements over other self-supervised and semi-supervised learning techniques. When combined with a simple data augmentation technique, the proposed method reaches within 8% of benchmark performance using only two labeled MRI volumes for training, corresponding to only 4% (for ACDC) of the training data used to train the benchmark.
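As a concrete illustration of cue (1), the sketch below shows how a standard NT-Xent-style contrastive loss can be extended so that slices drawn from the corresponding anatomical partition of different volumes are treated as additional positives. This is a minimal PyTorch sketch under our own assumptions, not the authors' implementation; the function name `global_contrastive_loss`, the `partition_ids` encoding, the embedding dimension, and the temperature value are all illustrative.

```python
# Minimal sketch (assumptions, not the authors' code): an NT-Xent-style
# contrastive loss in which slices sharing the same volume partition
# (same anatomical region, possibly from different subjects) are positives,
# reflecting the structural similarity across volumetric medical images.
import torch
import torch.nn.functional as F

def global_contrastive_loss(z: torch.Tensor,
                            partition_ids: torch.Tensor,
                            temperature: float = 0.1) -> torch.Tensor:
    """z: (N, D) slice embeddings; partition_ids: (N,) index of the volume
    partition each slice comes from. Slices with equal partition ids are
    treated as positive pairs; all other slices act as negatives."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature  # (N, N) scaled cosine similarities
    # Exclude self-similarity so a slice is never its own positive/negative.
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim.masked_fill_(self_mask, float('-inf'))
    pos_mask = (partition_ids[:, None] == partition_ids[None, :]) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Average log-probability over all positives of each anchor; anchors
    # without any positive (unique partition id) contribute zero loss.
    loss = -(log_prob.masked_fill(~pos_mask, 0.0)).sum(1) / pos_mask.sum(1).clamp(min=1)
    return loss.mean()

# Example: embeddings of 8 slices grouped into 4 anatomical partitions.
z = torch.randn(8, 128)
pids = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
print(global_contrastive_loss(z, pids))
```

The local loss of cue (2) would follow the same pattern, but applied to individual regions of intermediate feature maps rather than to whole-image embeddings.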