Unsupervised pre-training has proven to be an effective way to boost various downstream tasks when labeled data are limited. Among existing methods, contrastive learning learns a discriminative representation by constructing positive and negative pairs. However, building reasonable pairs for a segmentation task in an unsupervised way is not trivial. In this work, we propose a novel unsupervised pre-training framework that avoids this drawback of contrastive learning. Our framework rests on two principles: unsupervised over-segmentation as a pre-training task driven by mutual information maximization, and boundary-aware preserving learning. Experimental results on two benchmark medical segmentation datasets demonstrate the effectiveness of our method in improving segmentation performance when only a few annotated images are available.
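As an illustrative sketch (not necessarily the exact objective adopted here), a mutual-information-maximization criterion for unsupervised over-segmentation can be written in the spirit of invariant information clustering, with all notation below introduced purely for illustration:

\[
\mathbf{P} \;=\; \frac{1}{|\Omega|}\sum_{u \in \Omega} \Phi_u(x)\,\Phi_u(x')^{\top} \in \mathbb{R}^{C \times C},
\qquad
\mathcal{L}_{\mathrm{MI}} \;=\; -\sum_{c=1}^{C}\sum_{c'=1}^{C} \mathbf{P}_{cc'} \log \frac{\mathbf{P}_{cc'}}{\mathbf{P}_{c}\,\mathbf{P}_{c'}},
\]

where \(x'\) is an augmented view of the image \(x\), \(\Phi_u(\cdot) \in [0,1]^{C}\) is the per-pixel soft assignment over \(C\) over-segmentation clusters at location \(u \in \Omega\), and \(\mathbf{P}_{c}\), \(\mathbf{P}_{c'}\) denote the marginals of the joint distribution \(\mathbf{P}\). Minimizing \(\mathcal{L}_{\mathrm{MI}}\) maximizes the mutual information between the cluster assignments of the two views, encouraging assignments that are consistent across augmentations yet non-degenerate.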