Medical image segmentation has been widely recognized as a pivotal procedure for clinical diagnosis, analysis, and treatment planning. However, the laborious and expensive annotation process slows the pace of further advances. Contrastive learning-based weight pre-training provides an alternative by leveraging unlabeled data to learn a good representation. In this paper, we investigate how contrastive learning benefits general supervised medical segmentation tasks. To this end, patch-dragsaw contrastive regularization (PDCR) is proposed to perform patch-level tugging and repulsing, with the extent controlled by a continuous affinity score. In addition, a new structure dubbed the uncertainty-aware feature selection block (UAFS) is designed to perform feature selection, which can handle the learning-target shift caused by minority features with high uncertainty. By plugging the two proposed modules into existing segmentation architectures, we achieve state-of-the-art results across 8 public datasets from 6 domains. The newly designed modules further reduce the amount of training data required to a quarter while achieving comparable, if not better, performance. From this perspective, we take the opposite direction of the original self-/un-supervised contrastive learning by further excavating the information contained within the labels.
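To make the patch-dragsaw idea concrete, the sketch below illustrates one plausible reading of PDCR: pairs of patch features are pulled together or pushed apart, with the pull/push strength interpolated by a continuous affinity score. Both the IoU-based affinity and the exact loss form here are illustrative assumptions, not the paper's actual definitions; names such as `affinity` and `pdcr_loss` are hypothetical.

```python
import numpy as np

def affinity(mask_a, mask_b):
    """Continuous affinity between two patches, sketched here as the IoU of
    their binary label masks (a hypothetical choice of label-derived score)."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union > 0 else 0.0

def pdcr_loss(feats, masks):
    """Toy patch-level dragsaw regularizer over all patch pairs."""
    # L2-normalize patch features so dot products are cosine similarities.
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    total, pairs = 0.0, 0
    for i in range(len(feats)):
        for j in range(i + 1, len(feats)):
            sim = float(feats[i] @ feats[j])  # cosine similarity in [-1, 1]
            a = affinity(masks[i], masks[j])
            # a -> 1: pull the pair together; a -> 0: push it apart.
            total += a * (1.0 - sim) + (1.0 - a) * max(0.0, sim)
            pairs += 1
    return total / pairs
```

In this toy formulation, two patches with identical features and identical label masks contribute zero loss, while similar features over disjoint masks are penalized; a real implementation would operate on feature-map patches inside the segmentation network.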