The success of deep learning has set new benchmarks for many medical image analysis tasks. However, deep models often fail to generalize in the presence of distribution shifts between training (source) data and test (target) data. One method commonly employed to counter distribution shifts is domain adaptation: using samples from the target domain to learn to account for shifted distributions. In this work we propose an unsupervised domain adaptation approach that uses graph neural networks together with disentangled semantic and domain-invariant structural features, allowing for better performance across distribution shifts. We also propose an extension to swapped autoencoders to obtain more discriminative features. We evaluate the proposed method on classification for two challenging medical image datasets with distribution shifts: multi-center chest X-ray images and histopathology images. Experiments show that our method achieves state-of-the-art results compared to other domain adaptation methods.