Recent work on medical image segmentation has actively explored deep learning architectures and objective functions that encode high-level features from volumetric data under limited image annotations. However, most existing approaches ignore cross-volume global context and define contextual relations only in the decision space. In this work, we propose a novel voxel-level Siamese representation learning method for abdominal multi-organ segmentation that improves the representation space. The proposed method enforces voxel-wise feature relations in the representation space, leveraging limited datasets more comprehensively to achieve better performance. Inspired by recent progress in contrastive learning, we encourage voxel-wise features from the same class to be projected to the same point without using negative samples. Moreover, we introduce a multi-resolution context aggregation method that combines features from multiple hidden layers, encoding both global and local context for segmentation. In experiments on a multi-organ dataset, our method outperformed existing approaches by 2% in Dice similarity coefficient. Qualitative visualizations of the representation space demonstrate that the improvement is gained primarily from a disentangled feature space.
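To make the negative-free Siamese objective concrete, the sketch below shows one plausible form of a voxel-wise loss without negative pairs: the negative cosine similarity between projected voxel features and (stop-gradient) target features, as popularized by non-contrastive Siamese methods. The function names, tensor shapes, and NumPy formulation are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def l2_normalize(x, eps=1e-8):
    # Normalize feature vectors along the last axis.
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def voxel_siamese_loss(pred, target):
    """Negative cosine similarity between predicted voxel features and
    target features of the same class -- no negative samples involved.

    pred, target: arrays of shape (num_voxels, feat_dim).
    In a real trainer, gradients through `target` would be stopped.
    """
    p = l2_normalize(pred)
    z = l2_normalize(target)
    return -np.mean(np.sum(p * z, axis=-1))

# Toy check: identical features are perfectly aligned, so the loss is -1.
feats = np.random.randn(16, 8)
loss = voxel_siamese_loss(feats, feats)
```

Minimizing this quantity pulls same-class voxel features toward the same point in the representation space; collapse is typically avoided architecturally (e.g., via a predictor head and stop-gradient) rather than by repelling negatives.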