The success of deep learning heavily depends on the availability of large labeled training sets. However, it is hard to obtain large labeled datasets in the medical imaging domain because of strict privacy concerns and costly labeling efforts. Contrastive learning, an unsupervised learning technique, has proved powerful in learning image-level representations from unlabeled data. The learned encoder can then be transferred or fine-tuned to improve the performance of downstream tasks with limited labels. A critical step in contrastive learning is the generation of contrastive data pairs, which is relatively simple for natural image classification but quite challenging for medical image segmentation because the same tissues and organs appear across the dataset. As a result, when applied to medical image segmentation, most state-of-the-art contrastive learning frameworks inevitably introduce many false-negative pairs, resulting in degraded segmentation quality. To address this issue, we propose a novel positional contrastive learning (PCL) framework that generates contrastive data pairs by leveraging the position information in volumetric medical images. Experimental results on CT and MRI datasets demonstrate that the proposed PCL method substantially improves segmentation performance compared to existing methods in both the semi-supervised and the transfer learning settings.
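The position-based pair generation described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the function name `positional_pairs` and the `threshold` parameter are hypothetical, and the core assumption is that each 2D slice gets a normalized position in [0, 1] within its volume, with slices at similar positions (even across different volumes) treated as positives.

```python
def positional_pairs(num_slices_per_volume, threshold=0.1):
    """Sketch of positional pair generation (illustrative, not the paper's code).

    Each slice is assigned a normalized position in [0, 1] within its volume.
    Two slices form a positive pair when their positions differ by less than
    `threshold`, even if they come from different volumes; all other pairs
    are treated as negatives. This avoids labeling anatomically similar
    slices from different patients as false negatives.
    """
    positions = []  # list of (volume_id, normalized_position), one per slice
    for vol_id, n in enumerate(num_slices_per_volume):
        for s in range(n):
            positions.append((vol_id, s / max(n - 1, 1)))

    positives = []
    for i, (_, pi) in enumerate(positions):
        for j in range(i + 1, len(positions)):
            pj = positions[j][1]
            if abs(pi - pj) < threshold:
                positives.append((i, j))
    return positions, positives
```

For example, two volumes of three slices each yield positive pairs only between slices at matching relative depths; a standard contrastive loss (e.g. InfoNCE) can then be applied over these pairs.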