Various deep learning models have been developed to segment anatomical structures from medical images, but they typically perform poorly when tested on a target domain with a different data distribution. Recently, unsupervised domain adaptation methods have been proposed to alleviate this so-called domain shift issue, but most of them are designed for scenarios with relatively small domain shifts and are likely to fail when encountering a large domain gap. In this paper, we propose DCDA, a novel cross-modality unsupervised domain adaptation framework for tasks with large domain shifts, e.g., segmenting retinal vessels from OCTA and OCT images. DCDA mainly consists of a disentangling representation style transfer (DRST) module and a collaborative consistency learning (CCL) module. DRST decomposes images into content components and style codes and performs style transfer and image reconstruction. CCL contains two segmentation models, one for the source domain and the other for the target domain. The two models use labeled data (together with the corresponding transferred images) for supervised learning and perform collaborative consistency learning on unlabeled data. Each model focuses on its corresponding single domain, aiming to yield an expert domain-specific segmentation model. Through extensive experiments on retinal vessel segmentation, our framework achieves Dice scores close to those of target-trained oracles, both from OCTA to OCT and from OCT to OCTA, significantly outperforming other state-of-the-art methods.
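The CCL objective described above, supervised learning on labeled (and style-transferred) images combined with collaborative consistency learning between the two segmentation models on unlabeled data, can be sketched as follows. This is a minimal NumPy illustration only: the soft-Dice supervised loss, the symmetric consistency term, and the weighting factor `lam` are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss between a predicted probability map and a target map."""
    inter = 2.0 * np.sum(pred * target)
    union = np.sum(pred) + np.sum(target)
    return 1.0 - (inter + eps) / (union + eps)

def consistency_loss(pred_src_model, pred_tgt_model):
    """Collaborative consistency on unlabeled images: each model's prediction
    is pulled toward the other's via a symmetric soft-Dice term (an assumed
    choice; the paper may use a different consistency measure)."""
    return 0.5 * (dice_loss(pred_src_model, pred_tgt_model)
                  + dice_loss(pred_tgt_model, pred_src_model))

def total_loss(pred_labeled, label, pred_a, pred_b, lam=0.1):
    """Supervised loss on labeled (and style-transferred) images plus a
    weighted consistency term on unlabeled images, in the spirit of CCL."""
    return dice_loss(pred_labeled, label) + lam * consistency_loss(pred_a, pred_b)

# Usage: identical binary predictions incur zero consistency penalty,
# so perfectly agreeing models are not pushed apart.
mask = np.array([1.0, 0.0, 1.0, 0.0])
print(total_loss(mask, mask, mask, mask))
```

In this sketch the consistency term vanishes when the two domain-specific models agree, so each model can specialize on its own domain while still being regularized toward the other's predictions on unlabeled data.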