We propose a new technique called CHATTY: Coupled Holistic Adversarial Transport Terms with Yield for Unsupervised Domain Adaptation. Adversarial training is commonly used to learn domain-invariant representations by reversing the gradients from a domain discriminator head to train the feature extractor layers of a neural network. We make significant modifications to the adversarial head, its training objective, and the classifier head. With the aim of reducing class confusion, we introduce a sub-network that displaces the classifier outputs of the source and target domain samples in a learnable manner. We control this movement using a novel transport loss that spreads class clusters away from each other, making it easier for the classifier to find the decision boundaries for both the source and target domains. Adding this new loss to a careful selection of previously proposed losses improves UDA performance over previous state-of-the-art methods on benchmark datasets. We show the importance of the proposed loss term using ablation studies and visualizations of the movement of target domain samples in representation space.
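The abstract names two mechanisms: gradient reversal from the discriminator head and a transport loss that spreads class clusters apart. As a hedged illustration only, the PyTorch sketch below shows (i) a standard gradient reversal layer, the well-known construction from adversarial domain adaptation referenced in the abstract, and (ii) a hypothetical margin-based cluster-spreading term. The function `cluster_spread_loss`, its `margin` parameter, and the use of pseudo-labels are assumptions for illustration; the abstract does not give the exact form of the paper's transport loss.

```python
import torch
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Gradient reversal layer: identity in the forward pass,
    gradient scaled by -lambda in the backward pass. This is the
    standard mechanism for adversarial domain-invariant training."""

    @staticmethod
    def forward(ctx, x, lambd: float):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse and scale the gradient flowing into the feature extractor;
        # the float argument lambd receives no gradient (None).
        return -ctx.lambd * grad_output, None


def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)


def cluster_spread_loss(outputs, labels, num_classes, margin=1.0):
    """Hypothetical sketch (NOT the paper's exact transport loss):
    penalize class centroids in classifier-output space that lie
    closer together than `margin`, pushing class clusters apart."""
    centroids = []
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            centroids.append(outputs[mask].mean(dim=0))
    if len(centroids) < 2:
        return outputs.new_zeros(())
    centroids = torch.stack(centroids)         # (K, D)
    dists = torch.cdist(centroids, centroids)  # pairwise centroid distances
    k = centroids.size(0)
    off_diag = dists[~torch.eye(k, dtype=torch.bool, device=dists.device)]
    # Hinge penalty: nonzero only for centroid pairs closer than the margin.
    return F.relu(margin - off_diag).mean()
```

In a full training loop, features would pass through `grad_reverse` before the domain discriminator, while a term like `cluster_spread_loss` (on source labels, and on target pseudo-labels) would be added to the classification objective; the actual CHATTY objective and sub-network are described in the paper body, not here.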