Extensive studies on Unsupervised Domain Adaptation (UDA) have propelled the deployment of deep learning from limited experimental datasets to real-world unconstrained domains. Most UDA approaches align features within a common embedding space and apply a shared classifier for target prediction. However, since a perfectly aligned feature space may not exist when the domain discrepancy is large, these methods suffer from two limitations. First, coercive domain alignment deteriorates the discriminability of target-domain features due to the lack of target label supervision. Second, the source-supervised classifier is inevitably biased toward source data and may therefore underperform on the target domain. To alleviate these issues, we propose to simultaneously conduct feature alignment in two individual spaces, each focusing on a different domain, and to create for each space a domain-oriented classifier tailored specifically to that domain. Specifically, we design a Domain-Oriented Transformer (DOT) that has two individual classification tokens to learn different domain-oriented representations, and two classifiers to preserve domain-wise discriminability. A theoretically guaranteed contrastive-based alignment and a source-guided pseudo-label refinement strategy are utilized to explore both domain-invariant and domain-specific information. Comprehensive experiments validate that our method achieves state-of-the-art performance on several benchmarks.
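The core architectural idea can be illustrated with a minimal, dependency-free sketch: two domain-oriented classification tokens attend to the same patch features but pool them with different queries, and each resulting representation feeds its own classifier. All names, dimensions, and the single-head attention simplification below are illustrative assumptions, not the paper's actual implementation.

```python
import math
import random


def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [x / s for x in e]


def attend(query, keys, values):
    """Single-head dot-product attention: the query token pools the values."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)
    return [sum(w * v[j] for w, v in zip(weights, values))
            for j in range(len(values[0]))]


def linear_head(x, W, b):
    """A linear classifier producing one logit per class."""
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi for row, bi in zip(W, b)]


# Hypothetical tiny setup: 4 patch tokens of dimension 8, 3 classes.
random.seed(0)
dim, n_cls = 8, 3
patches = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(4)]

# Two separate classification tokens, one oriented to each domain.
cls_src = [random.gauss(0, 1) for _ in range(dim)]
cls_tgt = [random.gauss(0, 1) for _ in range(dim)]

# Both tokens pool the SAME patch features, but with their own queries,
# yielding two different domain-oriented representations.
feat_src = attend(cls_src, patches, patches)
feat_tgt = attend(cls_tgt, patches, patches)

# A separate classifier per branch preserves domain-wise discriminability.
W_s = [[random.gauss(0, 0.1) for _ in range(dim)] for _ in range(n_cls)]
W_t = [[random.gauss(0, 0.1) for _ in range(dim)] for _ in range(n_cls)]
b = [0.0] * n_cls
logits_src = linear_head(feat_src, W_s, b)
logits_tgt = linear_head(feat_tgt, W_t, b)
```

In a full transformer both tokens would be prepended to the patch sequence and refined over many layers; this sketch only shows why distinct tokens and heads produce distinct domain-oriented predictions from shared features.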