Unsupervised domain adaptation (UDA) aims to adapt models learned from a well-annotated source domain to a target domain, where only unlabeled samples are given. Current UDA approaches learn domain-invariant features by aligning the source and target feature spaces. Such alignments are imposed by constraints such as statistical discrepancy minimization or adversarial training. However, these constraints can distort semantic feature structures and lose class discriminability. In this paper, we introduce a novel prompt learning paradigm for UDA, named Domain Adaptation via Prompt Learning (DAPL). In contrast to prior works, our approach makes use of pre-trained vision-language models and optimizes only a small number of parameters. The main idea is to embed domain information into prompts, a form of representation generated from natural language, which are then used to perform classification. This domain information is shared only by images from the same domain, thereby dynamically adapting the classifier according to each domain. By adopting this paradigm, we show that our model not only outperforms previous methods on several cross-domain benchmarks but is also very efficient to train and easy to implement.
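To make the core idea concrete, below is a minimal sketch of domain-conditioned prompt classification with a frozen CLIP backbone. It is not the authors' implementation: in DAPL the domain-specific context is made of learnable token embeddings trained on unlabeled target data, whereas here fixed text phrases stand in for them, and the class names and domain descriptions are purely illustrative.

```python
# Minimal sketch: prompts that combine a domain-specific context with a class
# name are scored against image features from a frozen CLIP model.
# NOTE: domain contexts and class names below are illustrative placeholders;
# DAPL learns the domain context tokens instead of writing them by hand.
import torch
import clip  # https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

classes = ["backpack", "bike", "calculator"]                   # hypothetical class names
domains = {"source": "a sketch of", "target": "a photo of"}    # hypothetical domain contexts

def classify(image, domain):
    """Classify a PIL image using prompts conditioned on its domain."""
    prompts = [f"{domains[domain]} a {c}" for c in classes]
    tokens = clip.tokenize(prompts).to(device)
    with torch.no_grad():
        image_feat = model.encode_image(preprocess(image).unsqueeze(0).to(device))
        text_feat = model.encode_text(tokens)
    # Cosine similarity between the image and each domain-aware prompt
    # acts as the classification logits.
    image_feat = image_feat / image_feat.norm(dim=-1, keepdim=True)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
    logits = (image_feat @ text_feat.t()).squeeze(0)
    return classes[logits.argmax().item()]
```

Because only the prompt context would need to be learned in such a setup while the vision-language backbone stays frozen, the number of trainable parameters remains small, which is consistent with the efficiency claim above.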