We extend semi-supervised learning to the problem of domain adaptation to learn significantly higher-accuracy models that train on one data distribution and test on a different one. With the goal of generality, we introduce AdaMatch, a method that unifies the tasks of unsupervised domain adaptation (UDA), semi-supervised learning (SSL), and semi-supervised domain adaptation (SSDA). In an extensive experimental study, we compare its behavior with respective state-of-the-art techniques from SSL, SSDA, and UDA on vision classification tasks. We find AdaMatch either matches or significantly exceeds the state-of-the-art in each case using the same hyper-parameters regardless of the dataset or task. For example, AdaMatch nearly doubles the accuracy compared to that of the prior state-of-the-art on the UDA task for DomainNet and even exceeds the accuracy of the prior state-of-the-art obtained with pre-training by 6.4% when AdaMatch is trained completely from scratch. Furthermore, by providing AdaMatch with just one labeled example per class from the target domain (i.e., the SSDA setting), we increase the target accuracy by an additional 6.1%, and with 5 labeled examples, by 13.6%.
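To make the three settings named in the abstract concrete, here is a minimal sketch (not code from the paper) of how they differ in the data handed to the learner: UDA sees all source labels and an unlabeled target set, SSL sees a single distribution with only a few labels, and SSDA adds a handful of labeled target examples per class (e.g., the 1-shot and 5-shot configurations behind the +6.1% and +13.6% gains). All function and variable names below are illustrative assumptions.

```python
# Minimal sketch of the UDA / SSL / SSDA data setups (illustrative only,
# not the AdaMatch implementation).
import random
from collections import defaultdict

def sample_shots(pairs, shots_per_class, rng):
    """Pick `shots_per_class` labeled (example, label) pairs from each class."""
    by_class = defaultdict(list)
    for x, y in pairs:
        by_class[y].append((x, y))
    return [p for ps in by_class.values()
            for p in rng.sample(ps, k=min(shots_per_class, len(ps)))]

def make_task(source, target, setting, shots_per_class=1, seed=0):
    """source/target: lists of (example, label) pairs.
    Returns (labeled, unlabeled) pools for the requested setting."""
    rng = random.Random(seed)
    if setting == "UDA":   # all source labels, target entirely unlabeled
        return list(source), [x for x, _ in target]
    if setting == "SSL":   # one distribution, only a few labels per class
        return sample_shots(source, shots_per_class, rng), [x for x, _ in source]
    if setting == "SSDA":  # UDA plus a few labeled target shots per class
        labeled = list(source) + sample_shots(target, shots_per_class, rng)
        return labeled, [x for x, _ in target]
    raise ValueError(f"unknown setting: {setting}")
```

Usage would be, for example, `make_task(src, tgt, "SSDA", shots_per_class=5)` to reproduce the 5-shot SSDA configuration described above; the single method is then trained on whichever labeled/unlabeled pools the setting provides.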