We propose a new technique called CHATTY: Coupled Holistic Adversarial Transport Terms with Yield for Unsupervised Domain Adaptation. Adversarial training is commonly used to learn domain-invariant representations by reversing the gradients from a domain-discriminator head before they reach the feature-extractor layers of a neural network. We propose significant modifications to the adversarial head, its training objective, and the classifier head. To reduce class confusion, we introduce a sub-network that displaces the classifier outputs of the source- and target-domain samples in a learnable manner. We control this displacement with a novel transport loss that spreads class clusters away from each other, making it easier for the classifier to find decision boundaries for both the source and target domains. Adding this new loss to a careful selection of previously proposed losses improves unsupervised domain adaptation (UDA) results over previous state-of-the-art methods on benchmark datasets. We demonstrate the importance of the proposed loss term through ablation studies and by visualizing the movement of target-domain samples in representation space.
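The abstract describes adversarial training via gradient reversal from a domain-discriminator head into the feature extractor. Below is a minimal sketch of the standard gradient-reversal layer (as in DANN-style adversarial adaptation) that this setup builds on; it is an illustrative baseline, not the paper's modified adversarial head, and the names `GradReverse` and `lambd` are our own.

```python
import torch
from torch.autograd import Function

class GradReverse(Function):
    """Identity in the forward pass; multiplies the gradient by -lambd
    in the backward pass, so the feature extractor is trained to fool
    the domain discriminator while the discriminator is trained normally."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back into the features;
        # the second return value (None) corresponds to the lambd argument.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    """Insert between the feature extractor and the discriminator head."""
    return GradReverse.apply(x, lambd)
```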
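To make the transport-loss idea concrete, here is a hypothetical sketch of a cluster-spreading penalty in classifier-output space: per-class centroids are pushed apart up to a margin. This is only one plausible instantiation under our own assumptions (the function name `cluster_spread_loss`, the use of pseudo-labels for target samples, and the hinge-on-centroid-distance form are all ours); the paper's actual transport loss may differ.

```python
import torch
import torch.nn.functional as F

def cluster_spread_loss(logits, labels, num_classes, margin=1.0):
    """Hypothetical cluster-spreading penalty (illustrative only).

    logits: (N, num_classes) classifier outputs for a batch; for target-
            domain samples, `labels` would be pseudo-labels.
    Pushes per-class centroids of the softmaxed outputs apart, penalizing
    any centroid pair closer than `margin`.
    """
    probs = F.softmax(logits, dim=1)
    centroids = []
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            centroids.append(probs[mask].mean(dim=0))
    if len(centroids) < 2:
        # Fewer than two classes present: nothing to spread apart.
        return logits.new_zeros(())
    centroids = torch.stack(centroids)         # (K, num_classes)
    dists = torch.cdist(centroids, centroids)  # pairwise centroid distances
    k = centroids.size(0)
    off_diag = dists[~torch.eye(k, dtype=torch.bool, device=dists.device)]
    # Hinge loss: only centroid pairs closer than the margin contribute.
    return F.relu(margin - off_diag).mean()
```

In a training loop, such a term would be added (with a weighting coefficient) to the classification and adversarial losses, encouraging well-separated class clusters so that decision boundaries pass through low-density regions.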