Unsupervised domain adaptation seeks to mitigate the distribution discrepancy between a source and a target domain, given labeled samples from the source domain and unlabeled samples from the target domain. Generative adversarial networks (GANs) have yielded significant improvements in domain adaptation by producing domain-specific images for training. However, most existing GAN-based techniques for unsupervised domain adaptation do not consider semantic information during domain matching, and their performance therefore degrades when the source and target domain data are semantically different. In this paper, we propose a novel end-to-end semantically consistent generative adversarial network (SCGAN). This network achieves source-to-target domain matching by capturing semantic information at the feature level and producing images for unsupervised domain adaptation from both the source and the target domains. We demonstrate the robustness of the proposed method, which exceeds state-of-the-art performance in unsupervised domain adaptation settings, through experiments on digit and object classification tasks.