Domain adaptation (DA) addresses the label-annotation and dataset-bias issues by transferring knowledge from a label-rich source domain to a related but unlabeled target domain. A mainstream line of DA methods aligns the feature distributions of the two domains. However, most of them operate on whole-image features, in which irrelevant semantic information, e.g., a cluttered background, is inevitably embedded. Enforcing feature alignment in such cases hinders the correct matching of objects and consequently leads to semantically negative transfer caused by the confusion of irrelevant semantics. To tackle this issue, we propose Semantic Concentration for Domain Adaptation (SCDA), which encourages the model to concentrate on the most principal features via pair-wise adversarial alignment of prediction distributions. Specifically, we train the classifier to maximize, within each class, the divergence between the prediction distributions of every sample pair, which enables the model to locate the regions that differ most among samples of the same class. Meanwhile, the feature extractor attempts to minimize that discrepancy, which suppresses the features of the dissimilar regions among samples of the same class and accentuates the features of the principal parts. As a general method, SCDA can be easily integrated into various DA methods as a regularizer to further boost their performance. Extensive experiments on cross-domain benchmarks demonstrate the efficacy of SCDA.
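To make the min-max game described above concrete, below is a minimal PyTorch sketch of the pair-wise adversarial alignment of prediction distributions. It is our own illustration rather than the authors' released code: the gradient-reversal trick, the symmetric-KL choice of divergence, the toy network sizes, and the lambda_adv weight are all assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    # Identity on the forward pass; negates the gradient on the backward pass.
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

def same_class_divergence(probs, labels):
    # Mean symmetric KL divergence over all same-class sample pairs.
    log_p = probs.clamp_min(1e-8).log()
    # kl[i, j] = KL(p_i || p_j), computed for all pairs by broadcasting.
    kl = (probs.unsqueeze(1) * (log_p.unsqueeze(1) - log_p.unsqueeze(0))).sum(-1)
    sym_kl = kl + kl.t()
    mask = labels.unsqueeze(0).eq(labels.unsqueeze(1))
    mask &= ~torch.eye(len(labels), dtype=torch.bool)  # drop self-pairs
    return sym_kl[mask].mean() if mask.any() else sym_kl.new_zeros(())

# Toy feature extractor and classifier (sizes are placeholders).
feature_extractor = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())
classifier = nn.Linear(256, 10)
optimizer = torch.optim.SGD(
    list(feature_extractor.parameters()) + list(classifier.parameters()), lr=1e-2)
lambda_adv = 0.1  # assumed trade-off weight for the regularizer

x = torch.randn(16, 3, 32, 32)   # a labeled (source) mini-batch
y = torch.randint(0, 10, (16,))

optimizer.zero_grad()
features = feature_extractor(x)
ce_loss = F.cross_entropy(classifier(features), y)

# Adversarial branch: the minus sign makes the classifier *maximize* the
# same-class prediction divergence, while the gradient-reversal layer makes
# the feature extractor *minimize* it.
probs = F.softmax(classifier(GradReverse.apply(features)), dim=1)
adv_loss = -same_class_divergence(probs, y)

(ce_loss + lambda_adv * adv_loss).backward()
optimizer.step()

Under this construction, backpropagating -lambda_adv * divergence through the reversal layer lets the classifier ascend on the same-class divergence while the feature extractor descends on it, matching the adversarial roles described above.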