Unsupervised Domain Adaptation (UDA) aims to learn a predictor for an unlabeled target domain by transferring knowledge from a separate labeled source domain. However, most conventional UDA approaches make the strong assumption that the source data are accessible during adaptation, which is often impractical due to privacy, security, and storage concerns. A recent line of work addresses this problem with an algorithm that transfers knowledge to the unlabeled target domain from a single source model, without requiring access to the source data. However, when multiple trained source models are available, this method must adapt each model individually to identify the best source. We therefore ask: can we find an optimal combination of source models, with no source data and no target labels, whose performance is no worse than that of the single best source? To answer this, we propose a novel and efficient algorithm that automatically combines the source models with suitable weights so that the combination performs at least as well as the best source model. We provide intuitive theoretical insights to justify this claim. Furthermore, extensive experiments on several benchmark datasets demonstrate the effectiveness of our algorithm: in most cases, our method not only matches the best source accuracy but surpasses it.
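To make the combination step concrete, below is a minimal sketch of the general idea of ensembling frozen source models with learnable simplex weights fit on unlabeled target data. It assumes PyTorch; the entropy objective and all names here (combine_sources, entropy) are illustrative stand-ins under stated assumptions, not the paper's actual weighting criterion.

```python
import torch
import torch.nn.functional as F

def combine_sources(models, x, logits_w):
    """Weighted ensemble of frozen source-model predictions on target input x.

    models:   list of K trained source classifiers (source data never touched)
    logits_w: unnormalized weight scores, shape (K,), learned on target data
    """
    probs = torch.stack([F.softmax(m(x), dim=-1) for m in models])  # (K, B, C)
    alpha = F.softmax(logits_w, dim=0)                              # weights on the simplex
    return torch.einsum('k,kbc->bc', alpha, probs)                  # combined (B, C)

def entropy(p, eps=1e-8):
    # Hypothetical unsupervised objective: prefer confident combined
    # predictions; a stand-in for the paper's target-side criterion.
    return -(p * (p + eps).log()).sum(dim=-1).mean()

# Toy usage: two random "source models" and unlabeled target batches.
torch.manual_seed(0)
models = [torch.nn.Linear(16, 4) for _ in range(2)]
for m in models:
    m.requires_grad_(False)          # source models stay frozen
logits_w = torch.zeros(2, requires_grad=True)
opt = torch.optim.Adam([logits_w], lr=0.1)

for _ in range(50):                  # fit the weights without target labels
    x = torch.randn(32, 16)
    loss = entropy(combine_sources(models, x, logits_w))
    opt.zero_grad()
    loss.backward()
    opt.step()
print(F.softmax(logits_w, dim=0))    # learned combination weights
```

Constraining the weights to the probability simplex means the combination can always place all mass on a single model, which is consistent with the abstract's claim that the weighted combination performs at least as well as the best individual source.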