Domain shift commonly arises in cross-domain scenarios because of the wide gaps between domains: a deep learning model well trained in one domain usually performs poorly when applied to another, target domain. To tackle this problem, unsupervised domain adaptation (UDA) techniques have been proposed to bridge the gap between domains and improve model performance without annotations in the target domain. UDA is particularly valuable for multimodal medical image analysis, where annotation difficulty is a practical concern. However, most existing UDA methods achieve satisfactory improvements in only one adaptation direction (e.g., MRI to CT) and often perform poorly in the other (CT to MRI), limiting their practical use. In this paper, we propose a bidirectional UDA (BiUDA) framework based on disentangled representation learning that delivers equally competent performance in both adaptation directions. The framework employs a unified domain-aware pattern encoder which not only adaptively encodes images from different domains through a domain controller but also improves model efficiency by eliminating redundant parameters. Furthermore, to avoid distorting the contents and patterns of input images during adaptation, a content-pattern consistency loss is introduced. Additionally, for better UDA segmentation performance, a label consistency strategy is proposed to provide extra supervision by recomposing target-domain-styled images with their corresponding source-domain annotations. Comparison experiments and ablation studies conducted on two public datasets demonstrate the superiority of our BiUDA framework over current state-of-the-art UDA methods and the effectiveness of its novel designs. By successfully addressing two-way adaptation, the BiUDA framework offers a flexible UDA solution for real-world scenarios.
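The content-pattern consistency idea mentioned above can be illustrated with a minimal sketch: codes re-extracted from a translated image should match the codes of the original input. The function name, the L1 form of the penalty, and the toy code vectors below are all illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def content_pattern_consistency_loss(content_src, pattern_src,
                                     content_rec, pattern_rec):
    """Hypothetical consistency penalty: compare the content and pattern
    codes of the source image with those re-encoded from the translated
    image, using an assumed L1 (mean absolute) distance on each."""
    content_term = np.mean(np.abs(content_src - content_rec))
    pattern_term = np.mean(np.abs(pattern_src - pattern_rec))
    return content_term + pattern_term

# Toy example with random stand-in codes (shapes are arbitrary here).
rng = np.random.default_rng(0)
c = rng.normal(size=(8,))   # content code of the input image
p = rng.normal(size=(4,))   # pattern (style) code of the input image

# If translation preserves both codes exactly, the loss vanishes;
# any distortion of content or pattern makes it positive.
loss_preserved = content_pattern_consistency_loss(c, p, c, p)
loss_distorted = content_pattern_consistency_loss(c, p, c + 0.1, p)
print(loss_preserved, loss_distorted)
```

In a full adaptation pipeline this term would be added to the usual adversarial and segmentation objectives, discouraging the translator from altering anatomy (content) while restyling the image (pattern).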