In this work, we study Unsupervised Domain Adaptation (UDA) in a challenging self-supervised setting. One of the difficulties is how to learn task discrimination in the absence of target labels. Unlike previous literature that directly aligns cross-domain distributions or leverages reverse gradients, we propose Domain Confused Contrastive Learning (DCCL) to bridge the source and target domains via domain puzzles while retaining discriminative representations after adaptation. Technically, DCCL searches for the most domain-challenging direction and carefully crafts domain-confused augmentations as positive pairs; it then contrastively encourages the model to pull representations toward the other domain, thereby learning more stable and effective domain-invariant representations. We also investigate whether contrastive learning necessarily helps UDA when other data augmentations are applied. Extensive experiments demonstrate that DCCL significantly outperforms baselines.
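The abstract describes the core recipe at a high level: find a domain-confusing view of each example along the most domain-challenging direction, then treat it as the positive pair in a contrastive objective. The following is a minimal, hypothetical PyTorch sketch of that idea, not the paper's actual implementation; the names `encoder`, `domain_disc`, `eps`, and `temperature` are assumptions introduced only for illustration.

import torch
import torch.nn.functional as F

def domain_confused_view(x, encoder, domain_disc, eps=1e-2):
    """Perturb x along the direction that most confuses the domain
    discriminator, producing a 'domain puzzle' view (assumed form)."""
    x = x.clone().detach().requires_grad_(True)
    z = encoder(x)
    # Push the domain prediction toward maximum uncertainty (p = 0.5).
    p = torch.sigmoid(domain_disc(z))
    confusion_loss = F.binary_cross_entropy(p, torch.full_like(p, 0.5))
    grad = torch.autograd.grad(confusion_loss, x)[0]
    # Step against the gradient, i.e. along the most domain-challenging direction.
    return (x - eps * grad.sign()).detach()

def dccl_loss(x, encoder, domain_disc, temperature=0.1):
    """InfoNCE-style loss treating each sample and its domain-confused view
    as a positive pair; other samples in the batch serve as negatives."""
    x_conf = domain_confused_view(x, encoder, domain_disc)
    z1 = F.normalize(encoder(x), dim=-1)
    z2 = F.normalize(encoder(x_conf), dim=-1)
    logits = z1 @ z2.t() / temperature          # (B, B) similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

In this sketch, minimizing the loss pulls each representation toward its domain-confused counterpart, which is one plausible way to realize the "pull representations toward the other domain" behavior described above.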