Magnetic resonance images (MRIs) are widely used to quantify the vestibular schwannoma (VS) and the cochlea. Recently, deep learning methods have shown state-of-the-art performance for segmenting these structures. However, training segmentation models may require manual labels in the target domain, which are expensive and time-consuming to obtain. Domain adaptation is an effective way to overcome this problem: it leverages information from the source domain to obtain accurate segmentations without requiring manual labels in the target domain. In this paper, we propose an unsupervised learning framework to segment the VS and the cochlea. Our framework leverages contrast-enhanced T1-weighted (ceT1-w) MRIs and their labels, and produces segmentations for T2-weighted MRIs without any labels in the target domain. We first apply a generator to achieve image-to-image translation. Next, we combine the outputs of an ensemble of different segmentation models to obtain the final segmentations. To cope with MRIs from different sites and scanners, we apply various 'online' augmentations during training to better capture geometric variability as well as variability in image appearance and quality. Our method is easy to build and produces promising segmentations, with mean Dice scores of 0.7930 and 0.7432 for the VS and the cochlea, respectively, on the validation set.
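To make the two reusable pieces of this pipeline concrete, the sketch below illustrates 'online' augmentation during training and ensemble averaging at inference. It is a minimal sketch assuming PyTorch and the TorchIO library; the specific transforms, their parameters, and the `ensemble_segment` helper are illustrative assumptions, not the exact configuration used in the paper.

```python
# Minimal sketch: online augmentation + ensemble inference for 3D MRI segmentation.
# Assumes PyTorch and TorchIO; transform choices and parameters are illustrative.
import torch
import torchio as tio

# 'Online' augmentations applied on the fly during training, intended to cover
# geometric variability and site/scanner differences in appearance and quality.
train_transform = tio.Compose([
    tio.RandomFlip(axes=('LR',)),               # geometric: left-right flip
    tio.RandomAffine(scales=0.1, degrees=10),   # geometric: scaling and rotation
    tio.RandomBiasField(coefficients=0.3),      # appearance: MRI bias field
    tio.RandomGamma(log_gamma=(-0.3, 0.3)),     # appearance: contrast shift
    tio.RandomNoise(std=(0, 0.05)),             # quality: additive noise
])

def ensemble_segment(models, volume):
    """Average softmax probability maps from several trained models,
    then take the per-voxel argmax as the final segmentation."""
    probs = []
    with torch.no_grad():
        for model in models:
            model.eval()
            logits = model(volume)              # (1, C, D, H, W)
            probs.append(torch.softmax(logits, dim=1))
    mean_prob = torch.stack(probs).mean(dim=0)
    return mean_prob.argmax(dim=1)              # (1, D, H, W) label map

# Toy usage with a random volume and dummy 3-class "models" (hypothetical).
if __name__ == '__main__':
    subject = tio.Subject(t2=tio.ScalarImage(tensor=torch.rand(1, 32, 32, 32)))
    augmented = train_transform(subject)        # would feed the training loop
    models = [torch.nn.Conv3d(1, 3, kernel_size=3, padding=1) for _ in range(3)]
    seg = ensemble_segment(models, augmented.t2.data.unsqueeze(0))
    print(seg.shape)  # torch.Size([1, 32, 32, 32])
```

Averaging probability maps before the argmax, rather than voting on hard labels, is one common way to realize the ensembling step; it keeps the final decision sensitive to each model's confidence.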