Unsupervised Domain Adaptation (UDA), which aims to explore transferable features from a well-labeled source domain to a related unlabeled target domain, has made substantial progress. Nevertheless, existing adversarial-based methods, one of the mainstream approaches, neglect to filter out irrelevant semantic knowledge, which hinders further improvement of adaptation performance. Besides, they require an additional domain discriminator that forces the feature extractor to generate domain-confused representations, and this separately designed component may cause model collapse. To tackle these issues, we propose Crucial Semantic Classifier-based Adversarial Learning (CSCAL), which pays more attention to transferring crucial semantic knowledge and lets the classifier implicitly play the role of the domain discriminator without any extra network design. Specifically, for intra-class-wise alignment, a Paired-Level Discrepancy (PLD) is designed to transfer crucial semantic knowledge. Additionally, a Nuclear Norm-based Discrepancy (NND) is formed from the classifier's predictions; it takes inter-class-wise information into account and further improves adaptation performance. Moreover, CSCAL can be effortlessly merged into different UDA methods as a regularizer and markedly boosts their performance.
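To make the nuclear-norm idea concrete, the sketch below illustrates one plausible form of a nuclear norm-based discrepancy over batch prediction matrices. The abstract does not give the exact formula, so the function names (`nuclear_norm`, `nnd`) and the choice of comparing source and target batch nuclear norms are illustrative assumptions, not the paper's definition; intuitively, a larger nuclear norm of a softmax prediction matrix reflects predictions that are both confident and spread across classes.

```python
import numpy as np

def softmax(logits):
    # Row-wise softmax over class logits (numerically stabilized).
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def nuclear_norm(p):
    # Nuclear norm = sum of singular values of the batch prediction matrix.
    return np.linalg.svd(p, compute_uv=False).sum()

def nnd(logits_src, logits_tgt):
    # Hypothetical discrepancy: gap between source- and target-batch
    # prediction nuclear norms (a sketch, not the paper's exact NND).
    return abs(nuclear_norm(softmax(logits_src)) - nuclear_norm(softmax(logits_tgt)))

# Confident, class-diverse predictions yield a larger nuclear norm than
# uniform (uncertain) predictions on the same batch shape.
confident = nuclear_norm(softmax(100.0 * np.eye(3)))   # near-one-hot rows
uniform = nuclear_norm(softmax(np.zeros((3, 3))))      # all rows = 1/3
```

Minimizing such a discrepancy (or maximizing the target batch's nuclear norm) encourages target predictions that match the source in both confidence and class diversity, which is one common way nuclear-norm criteria capture inter-class information.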