Source-free domain adaptation (SFDA) aims to adapt a classifier to an unlabelled target data set using only a pre-trained source model. However, the absence of the source data and the domain shift make the predictions on the target data unreliable. We propose quantifying the uncertainty in the source model predictions and utilizing it to guide the target adaptation. For this, we construct a probabilistic source model by incorporating priors on the network parameters, inducing a distribution over the model predictions. Uncertainties are estimated by employing a Laplace approximation and used to identify target data points that lie outside the source manifold and to down-weight them when maximizing the mutual information on the target data. Unlike recent works, our probabilistic treatment is computationally lightweight, decouples source training from target adaptation, and requires no specialized source training or changes to the model architecture. We show the advantages of uncertainty-guided SFDA over traditional SFDA in the closed-set and open-set settings and provide empirical evidence that our approach is more robust to strong domain shifts even without tuning.
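The uncertainty-weighted objective described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's exact formulation: it assumes Monte Carlo softmax samples drawn from a Laplace posterior over the network weights, uses predictive entropy as the uncertainty score, and adopts a hypothetical `exp(-entropy)` weighting to down-weight out-of-manifold points in the mutual-information term.

```python
import numpy as np

def predictive_stats(mc_probs):
    """mc_probs: (S, N, C) softmax outputs from S weight samples drawn
    from a (hypothetical) Laplace posterior over the network parameters."""
    mean_p = mc_probs.mean(axis=0)                         # (N, C) predictive mean
    # per-sample predictive entropy as the uncertainty score
    unc = -(mean_p * np.log(mean_p + 1e-12)).sum(axis=1)   # (N,)
    return mean_p, unc

def weighted_info_max(mean_p, unc):
    """Uncertainty-weighted mutual information on the target data
    (to be maximized; an optimizer would minimize its negative)."""
    # down-weight uncertain points (illustrative choice of weighting)
    w = np.exp(-unc)
    w = w / w.sum()
    # weighted conditional entropy: uncertain points contribute less
    cond_ent = (w * (-(mean_p * np.log(mean_p + 1e-12)).sum(axis=1))).sum()
    # marginal entropy of the weighted average prediction (diversity term)
    marg = (w[:, None] * mean_p).sum(axis=0)
    marg_ent = -(marg * np.log(marg + 1e-12)).sum()
    return marg_ent - cond_ent
```

In practice the same computation would run on the network's logits inside the adaptation loop; the key point is that high-entropy (likely out-of-source-manifold) points receive small weights, so they barely influence the mutual-information maximization.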