Deep learning has achieved significant improvements in medical image segmentation when a sufficiently large amount of training data with manual labels is available. Acquiring representative labels, however, requires expert knowledge and exhaustive labor. In this paper, we aim to boost the performance of semi-supervised medical image segmentation with limited labels using a self-ensembling contrastive learning technique. To this end, we train an encoder-decoder network at the image level with small amounts of labeled images and, more importantly, learn latent representations directly at the feature level by imposing a contrastive loss on unlabeled images. This strengthens intra-class compactness and inter-class separability, yielding a better pixel classifier. Moreover, we devise a student encoder for online learning and an exponential moving average version of it, called the teacher encoder, to improve performance iteratively in a self-ensembling manner. To construct contrastive samples from unlabeled images, we investigate two sampling strategies, termed region-aware and anatomical-aware contrastive sampling, which exploit structural similarity across medical images and utilize pseudo-labels for construction, respectively. We conduct extensive experiments on an MRI and a CT segmentation dataset and demonstrate that, in a limited-label setting, the proposed method achieves state-of-the-art performance. Moreover, the anatomical-aware strategy, which prepares contrastive samples on the fly using pseudo-labels, realizes better contrastive regularization on feature representations.
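The abstract describes two mechanisms that can be sketched concretely: the exponential-moving-average update that produces the teacher encoder from the student, and a feature-level contrastive loss on sampled representations. The following is a minimal NumPy sketch of both, assuming an InfoNCE-style formulation with cosine similarity; the function names, the temperature value, and the loss form are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def ema_update(teacher_params, student_params, alpha=0.99):
    # Teacher parameters track the student as an exponential moving average,
    # realizing the self-ensembling teacher described in the abstract.
    return [alpha * t + (1.0 - alpha) * s
            for t, s in zip(teacher_params, student_params)]

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    # Contrastive loss on feature vectors: pull the anchor toward its
    # positive sample, push it away from the negatives (assumed InfoNCE form).
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / temperature
    logits -= logits.max()  # numerical stability before softmax
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])  # cross-entropy with positive at index 0
```

A well-chosen positive (e.g. a feature from the same anatomical region, as in anatomical-aware sampling) yields a lower loss than a mismatched one, which is the intra-class compactness / inter-class separability effect the abstract refers to.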