Since radiologists differ in training and clinical experience, they may provide varying segmentation annotations for the same lung nodule. Conventional studies select a single annotation as the learning target by default, discarding the valuable information about consensus and disagreement embedded in the multiple annotations. This paper proposes an Uncertainty-Guided Segmentation Network (UGS-Net), which learns rich visual features from the regions that may cause segmentation uncertainty and thereby contributes to a better segmentation result. With an Uncertainty-Aware Module, the network can produce a Multi-Confidence Mask (MCM) that indicates regions at different levels of segmentation uncertainty. Moreover, this paper introduces a Feature-Aware Attention Module to enhance the learning of nodule boundaries and density differences. Experimental results show that our method can predict nodule regions at different uncertainty levels and achieves superior performance on the LIDC-IDRI dataset.