Domain shift has been a long-standing issue for medical image segmentation. Recently, unsupervised domain adaptation (UDA) methods have achieved promising cross-modality segmentation performance by distilling knowledge from a label-rich source domain to an unlabeled target domain. In this work, we propose a multi-scale self-ensembling UDA framework for automatic segmentation of two key brain structures, the vestibular schwannoma (VS) and the cochlea, on high-resolution T2 images. First, a segmentation-enhanced contrastive unpaired image translation module is designed for image-level domain adaptation from the source T1 to the target T2 modality. Next, multi-scale deep supervision and consistency regularization are introduced into a mean teacher network for self-ensemble learning to further close the domain gap. Furthermore, self-training and intensity augmentation are employed to mitigate label scarcity and boost cross-modality segmentation performance. Our method achieves mean Dice scores of 83.8% and 81.4% and average symmetric surface distances (ASSD) of 0.55 mm and 0.26 mm for the VS and the cochlea, respectively, in the validation phase of the crossMoDA 2022 challenge.
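As a rough illustration of the self-ensembling step described above, the sketch below shows a mean teacher weight update and a multi-scale consistency term in PyTorch. The EMA decay value, the MSE-based consistency, and the commented `StudentNet` placeholder are assumptions made for illustration, not the authors' exact implementation.

```python
# Minimal sketch of mean teacher self-ensembling with a multi-scale
# consistency term (assumed PyTorch setup; hyperparameters are hypothetical).
import copy
import torch
import torch.nn.functional as F

def ema_update(teacher, student, decay=0.99):
    # Teacher weights track an exponential moving average of the student's.
    with torch.no_grad():
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.mul_(decay).add_(s_p, alpha=1.0 - decay)

def consistency_loss(student_outs, teacher_outs):
    # Mean squared error between student and teacher softmax predictions,
    # averaged over the multi-scale decoder outputs (deep supervision heads).
    losses = [F.mse_loss(F.softmax(s, dim=1), F.softmax(t, dim=1))
              for s, t in zip(student_outs, teacher_outs)]
    return sum(losses) / len(losses)

# Usage sketch: the teacher starts as a frozen copy of the student and is
# refreshed by ema_update() after each training step.
# student = StudentNet(); teacher = copy.deepcopy(student)  # StudentNet is hypothetical
# for p in teacher.parameters():
#     p.requires_grad_(False)
```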