The success of deep learning methods in medical image segmentation tasks heavily depends on a large amount of labeled data to supervise the training. On the other hand, the annotation of biomedical images requires domain knowledge and can be laborious. Recently, contrastive learning has demonstrated great potential in learning the latent representation of images even without any labels. Existing works have explored its application to biomedical image segmentation where only a small portion of the data is labeled, through a pre-training phase based on self-supervised contrastive learning without using any labels, followed by a supervised fine-tuning phase on the labeled portion of the data only. In this paper, we establish that by including the limited label information in the pre-training phase, it is possible to boost the performance of contrastive learning. We propose a supervised local contrastive loss that leverages limited pixel-wise annotation to force pixels with the same label to gather around in the embedding space. Such a loss requires pixel-wise computation, which can be expensive for large images, and we further propose two strategies, downsampling and block division, to address the issue. We evaluate our methods on two public biomedical image datasets of different modalities. With different amounts of labeled data, our methods consistently outperform state-of-the-art contrast-based methods and other semi-supervised learning techniques.
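To make the proposed objective concrete, below is a minimal PyTorch sketch of a supervised pixel-wise contrastive loss in the SupCon style, where pixels sharing a label act as mutual positives and the feature map is strided down before the pairwise computation. The function name, the `stride` parameter, and the temperature value are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def supervised_local_contrastive_loss(embeddings, labels, temperature=0.07, stride=4):
    """Illustrative SupCon-style pixel-wise contrastive loss (not the paper's exact code).

    embeddings: (B, C, H, W) feature map from the segmentation network
    labels:     (B, H, W) integer masks for the labeled images
    stride:     downsampling factor that tames the O((HW)^2) pairwise cost
    """
    # Downsampling strategy from the abstract: subsample before forming pixel pairs.
    emb = embeddings[:, :, ::stride, ::stride]
    lab = labels[:, ::stride, ::stride].reshape(-1)                     # (N,)

    B, C, H, W = emb.shape
    feats = F.normalize(emb.permute(0, 2, 3, 1).reshape(-1, C), dim=1)  # (N, C)

    sim = feats @ feats.t() / temperature                               # pairwise similarities
    n = sim.size(0)
    off_diag = ~torch.eye(n, dtype=torch.bool, device=sim.device)       # exclude self-pairs
    pos_mask = (lab.unsqueeze(0) == lab.unsqueeze(1)) & off_diag        # same-label positives

    # Log-probability of each pair against all non-self pixels.
    log_prob = sim - torch.logsumexp(
        sim.masked_fill(~off_diag, float('-inf')), dim=1, keepdim=True)

    # Average over positives per anchor; keep anchors with at least one positive.
    pos_count = pos_mask.sum(1)
    valid = pos_count > 0
    loss_per_pixel = -(log_prob * pos_mask.float()).sum(1) / pos_count.clamp(min=1)
    return loss_per_pixel[valid].mean()
```

The block-division strategy mentioned in the abstract would instead restrict the pairwise computation to local windows of the feature map; the sketch above shows only the downsampling variant.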