Deep learning models have achieved state-of-the-art results in medical image analysis. However, when these models are tested on an unseen domain, their performance degrades significantly. In this work, we present an unsupervised Cross-Modality Adversarial Domain Adaptation (C-MADA) framework for medical image segmentation. C-MADA applies image- and feature-level adaptation sequentially. First, images from the source domain are translated to the target domain through unpaired image-to-image adversarial translation with a cycle-consistency loss. Then, a U-Net is trained adversarially on the mapped source-domain images and the target-domain images to learn domain-invariant feature representations. Furthermore, to improve the network's segmentation performance, information about the shape, texture, and contour of the predicted segmentation is included during adversarial training. C-MADA is tested on the task of brain MRI segmentation, obtaining competitive results.
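The cycle-consistency term used in the unpaired translation step can be sketched as follows. This is a minimal illustration only: the two "generators" are hypothetical linear stand-ins (the paper's actual translators are adversarially trained networks), and the L1 reconstruction penalty shown is one common choice for cycle consistency.

```python
import numpy as np

# Hypothetical stand-ins for the two generators of an unpaired
# image-to-image translation setup. In the real framework these are
# trained networks; here they are exact-inverse linear maps so the
# cycle loss is (near) zero by construction.
def g_source_to_target(x):
    return 2.0 * x + 1.0

def g_target_to_source(x):
    return (x - 1.0) / 2.0

def cycle_consistency_loss(x_source, x_target):
    """L1 cycle-consistency: translating to the other domain and back
    should reconstruct the original input in both directions."""
    forward_cycle = np.abs(g_target_to_source(g_source_to_target(x_source)) - x_source).mean()
    backward_cycle = np.abs(g_source_to_target(g_target_to_source(x_target)) - x_target).mean()
    return forward_cycle + backward_cycle

# Toy "images": batches of 4 single-channel 8x8 arrays.
rng = np.random.default_rng(0)
x_s = rng.random((4, 8, 8))
x_t = rng.random((4, 8, 8))
print(cycle_consistency_loss(x_s, x_t))  # near zero, since the toy maps invert each other
```

During actual training this term would be added to the adversarial losses of both generators, encouraging translations that preserve the anatomical content needed for segmentation.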