Medical image segmentation methods typically rely on large numbers of densely annotated images for model training, which are notoriously expensive and time-consuming to collect. To alleviate this burden, weakly supervised techniques have been explored to train segmentation models with less expensive annotations. In this paper, we propose a novel point-supervised contrastive variance method (PSCV) for medical image semantic segmentation, which requires only one annotated pixel-point for each organ category. The proposed method trains the base segmentation network using a novel contrastive variance (CV) loss to exploit the unlabeled pixels and a partial cross-entropy loss on the labeled pixels. The CV loss function is designed to exploit the statistical spatial distribution properties of organs in medical images and their variance distribution map representations to enforce discriminative predictions over the unlabeled pixels. Experimental results on two standard medical image datasets demonstrate that the proposed method outperforms state-of-the-art weakly supervised methods on point-supervised medical image semantic segmentation tasks.