Contrastive Learning (CL) is a recent representation learning approach that encourages inter-class separability and intra-class compactness in learned image representations. Since medical images often contain multiple semantic classes, using CL to learn representations of local features (as opposed to global ones) is important. In this work, we present a novel semi-supervised 2D medical segmentation solution that applies CL to image patches rather than full images. These patches are constructed meaningfully using the semantic information of the different classes obtained via pseudo-labeling. We also propose a novel consistency regularization (CR) scheme that works in synergy with CL; it addresses the problem of confirmation bias and encourages better clustering in the feature space. We evaluate our method on four public medical segmentation datasets and a novel histopathology dataset that we introduce. Our method obtains consistent improvements over state-of-the-art semi-supervised segmentation approaches on all datasets.
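To make the patch-level idea concrete, the snippet below gives a minimal PyTorch sketch (not the authors' implementation) of a supervised contrastive loss over patch embeddings grouped by pseudo-labels: patches sharing a pseudo-label class are pulled together, all others are pushed apart. The patch size, temperature, pooling-based patch construction, and the function name `patch_contrastive_loss` are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's exact construction) of a patch-level
# supervised contrastive loss driven by pseudo-labels.
import torch
import torch.nn.functional as F

def patch_contrastive_loss(features, pseudo_labels, patch_size=8, temperature=0.1):
    """features: (B, C, H, W) feature map from the segmentation network.
    pseudo_labels: (B, H, W) hard pseudo-labels predicted for unlabeled images."""
    B, C, H, W = features.shape
    # Pool features into non-overlapping patches and assign each patch a class
    # via max-pooled pseudo-labels (a crude proxy for the dominant class).
    patch_feats = F.avg_pool2d(features, patch_size)                        # (B, C, H', W')
    patch_labels = F.max_pool2d(pseudo_labels.float().unsqueeze(1),
                                patch_size).squeeze(1).long()               # (B, H', W')
    z = F.normalize(patch_feats.permute(0, 2, 3, 1).reshape(-1, C), dim=1)  # (N, C)
    y = patch_labels.reshape(-1)                                            # (N,)

    sim = z @ z.t() / temperature                                           # (N, N) similarities
    self_mask = torch.eye(len(y), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf'))                         # drop self-pairs

    # Positives: patches with the same pseudo-label class (intra-class compactness);
    # all remaining patches act as negatives (inter-class separability).
    pos_mask = (y.unsqueeze(0) == y.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(1)
    valid = pos_counts > 0                                                  # anchors with >= 1 positive
    loss = -(log_prob.masked_fill(~pos_mask, 0.0).sum(1)[valid] / pos_counts[valid])
    return loss.mean()
```

In practice one would subsample patches per class to keep the N×N similarity matrix manageable and would typically restrict the loss to patches with confident pseudo-labels; those details are omitted in this sketch.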