An organ segmentation method that generalizes to unseen contrasts and scanner settings can significantly reduce the need to retrain deep learning models. Domain Generalization (DG) aims to achieve this goal. However, most DG methods for segmentation require training data from multiple domains. We propose a novel adversarial domain generalization method for organ segmentation trained on data from a \emph{single} domain. We synthesize new domains by learning an adversarial domain synthesizer (ADS), under the assumption that the synthetic domains cover a large enough area of plausible distributions that unseen domains can be interpolated from them. We propose a mutual information regularizer to enforce semantic consistency between images from the synthetic domains, which can be estimated by patch-level contrastive learning. We evaluate our method on various organ segmentation tasks involving unseen modalities, scanning protocols, and scanner sites.
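To make the two mechanisms in the abstract concrete, below is a minimal PyTorch sketch of (1) an adversarial domain synthesizer that perturbs a source image to simulate a hard unseen domain, and (2) a patch-level contrastive (InfoNCE) loss standing in for the mutual information regularizer between original and synthesized images. All names (`ADS`, `patch_nce_loss`, `train_step`) and the assumption that the segmentation network returns `(logits, features)` are illustrative, not the paper's actual implementation.

```python
# Sketch: adversarial domain synthesis with a patch-level MI regularizer.
# Assumes seg_net(x) returns (logits, intermediate feature map).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ADS(nn.Module):
    """Shallow conv net mapping a source image to a synthetic-domain image."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, channels, 3, padding=1),
        )

    def forward(self, x):
        # Residual perturbation keeps the synthetic image anatomically close.
        return torch.clamp(x + self.net(x), 0.0, 1.0)

def patch_nce_loss(feat_a, feat_b, tau=0.07):
    """InfoNCE over spatial locations: corresponding patches of the original
    and synthesized feature maps are positives, all other patches negatives."""
    b, c, h, w = feat_a.shape
    a = F.normalize(feat_a.flatten(2).transpose(1, 2), dim=-1)  # (B, HW, C)
    p = F.normalize(feat_b.flatten(2).transpose(1, 2), dim=-1)  # (B, HW, C)
    logits = torch.bmm(a, p.transpose(1, 2)) / tau              # (B, HW, HW)
    target = torch.arange(h * w, device=feat_a.device).expand(b, -1)
    return F.cross_entropy(logits.reshape(b * h * w, h * w), target.reshape(-1))

def train_step(seg_net, ads, x, y, opt_seg, opt_ads, lam=0.1):
    # 1) Adversarial step: update the synthesizer to *maximize* the
    #    segmentation loss (hard synthetic domain) while the contrastive
    #    term keeps the synthesized image semantically consistent.
    x_syn = ads(x)
    logits_syn, feat_syn = seg_net(x_syn)
    with torch.no_grad():
        _, feat_src = seg_net(x)
    adv_loss = -F.cross_entropy(logits_syn, y) + lam * patch_nce_loss(feat_src, feat_syn)
    opt_ads.zero_grad(); adv_loss.backward(); opt_ads.step()

    # 2) Segmentation step: train on both source and synthesized images.
    x_syn = ads(x).detach()
    seg_loss = F.cross_entropy(seg_net(x)[0], y) + F.cross_entropy(seg_net(x_syn)[0], y)
    opt_seg.zero_grad(); seg_loss.backward(); opt_seg.step()
    return seg_loss.item()
```

The alternation mirrors standard adversarial training: the synthesizer seeks distribution shifts that hurt the segmenter, the MI regularizer constrains those shifts to be label-preserving, and the segmenter then learns to be invariant to them.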