Domain Adaptation is an actively researched problem in Computer Vision. In this work, we propose an approach that leverages unsupervised data to bring the source and target distributions closer in a learned joint feature space. We accomplish this by inducing a symbiotic relationship between the learned embedding and a generative adversarial network. This is in contrast to methods that use the adversarial framework for realistic data generation and for retraining deep models with such data. We demonstrate the strength and generality of our approach by performing experiments on three tasks of varying difficulty: (1) digit classification (MNIST, SVHN, and USPS datasets), (2) object recognition using the OFFICE dataset, and (3) domain adaptation from synthetic to real data. Our method achieves state-of-the-art performance in most experimental settings and, to the best of our knowledge, is the only GAN-based method that has been shown to work well across datasets as different as OFFICE and DIGITS.
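The abstract does not spell out the training procedure, but the core idea, training a classifier on labeled source data while adversarially pulling unlabeled target embeddings toward the source distribution in the shared feature space, can be illustrated with a short sketch. The PyTorch code below is a minimal, generic adversarial feature-alignment loop, not the paper's actual architecture: the networks F, C, and D, their sizes, the learning rates, and the 0.1 loss weight are all illustrative assumptions.

```python
# Hypothetical sketch of adversarial feature alignment for unsupervised
# domain adaptation. All names, architectures, and hyperparameters are
# illustrative assumptions, not the paper's implementation.
import torch
import torch.nn as nn

emb_dim = 128

# F: shared feature embedding; C: label classifier; D: domain discriminator.
F = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU(),
                  nn.Linear(256, emb_dim))
C = nn.Sequential(nn.ReLU(), nn.Linear(emb_dim, 10))
D = nn.Sequential(nn.ReLU(), nn.Linear(emb_dim, 1))

opt_FC = torch.optim.Adam(list(F.parameters()) + list(C.parameters()), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()
ce = nn.CrossEntropyLoss()

def train_step(x_src, y_src, x_tgt):
    # 1) Discriminator: learn to tell source embeddings from target ones.
    with torch.no_grad():
        f_src, f_tgt = F(x_src), F(x_tgt)
    loss_D = (bce(D(f_src), torch.ones(len(x_src), 1)) +
              bce(D(f_tgt), torch.zeros(len(x_tgt), 1)))
    opt_D.zero_grad()
    loss_D.backward()
    opt_D.step()

    # 2) Embedding + classifier: classify source data correctly while
    #    making target embeddings indistinguishable from source (fool D).
    f_src, f_tgt = F(x_src), F(x_tgt)
    loss_cls = ce(C(f_src), y_src)
    loss_adv = bce(D(f_tgt), torch.ones(len(x_tgt), 1))
    loss = loss_cls + 0.1 * loss_adv  # 0.1: assumed trade-off weight
    opt_FC.zero_grad()
    loss.backward()
    opt_FC.step()
    return loss_cls.item(), loss_adv.item()

# Usage with dummy batches standing in for a labeled source domain
# (e.g. MNIST) and an unlabeled grayscale target domain:
x_s, y_s = torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,))
x_t = torch.randn(32, 1, 28, 28)
print(train_step(x_s, y_s, x_t))
```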