We aim for source-free domain adaptation, where the task is to deploy a model pre-trained on source domains to target domains. The challenges stem from the distribution shift from the source to the target domain, coupled with the unavailability of any source data and of labeled target data for optimization. Rather than fine-tuning the model by updating its parameters, we propose to perturb the source model to achieve adaptation to target domains. We introduce perturbations into the model parameters by variational Bayesian inference in a probabilistic framework. By doing so, we can effectively adapt the model to the target domain while largely preserving its discriminative ability. Importantly, we demonstrate the theoretical connection to learning Bayesian neural networks, which establishes the generalizability of the perturbed model to target domains. To enable more efficient optimization, we further employ a parameter-sharing strategy, which substantially reduces the number of learnable parameters compared to a fully Bayesian neural network. Our model perturbation provides a new probabilistic way of performing domain adaptation, enabling efficient adaptation to target domains while maximally preserving the knowledge in the source model. Experiments on several source-free benchmarks under three different evaluation settings verify the effectiveness of the proposed variational model perturbation for source-free domain adaptation.
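To make the mechanism concrete, below is a minimal sketch of variational model perturbation on a single layer, assuming a PyTorch-style setup. The class name PerturbedLinear, the choice of one shared scale per layer, the prior width, and the entropy-based adaptation objective in the usage snippet are illustrative assumptions, not the paper's exact implementation: the frozen source weights act as the mean of a Gaussian posterior, and only a shared perturbation scale is learned via the reparameterization trick.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerturbedLinear(nn.Module):
    """Illustrative sketch: perturb a frozen linear layer variationally.

    The source weights serve as the (fixed) posterior mean; a single
    per-layer log-scale is learned, so the number of learnable parameters
    is far smaller than in a fully Bayesian layer.
    """

    def __init__(self, source_linear: nn.Linear):
        super().__init__()
        # Freeze the source weights: they define the posterior mean.
        self.weight_mu = nn.Parameter(source_linear.weight.detach().clone(),
                                      requires_grad=False)
        self.bias = (nn.Parameter(source_linear.bias.detach().clone(),
                                  requires_grad=False)
                     if source_linear.bias is not None else None)
        # Parameter sharing: one log-scale per layer, not one per weight.
        self.log_sigma = nn.Parameter(torch.full((1,), -4.0))

    def forward(self, x):
        # Reparameterization trick: sample a perturbation around the
        # source weights so gradients can flow into log_sigma.
        eps = torch.randn_like(self.weight_mu)
        weight = self.weight_mu + torch.exp(self.log_sigma) * eps
        return F.linear(x, weight, self.bias)

    def kl_to_prior(self, prior_sigma: float = 0.1):
        # KL( N(mu, sigma^2) || N(mu, prior_sigma^2) ) per weight; the mean
        # terms cancel because the prior is also centered at the source
        # weights, which is what preserves source knowledge.
        kl = (math.log(prior_sigma) - self.log_sigma
              + torch.exp(2 * self.log_sigma) / (2 * prior_sigma ** 2) - 0.5)
        return kl.sum() * self.weight_mu.numel()


# Example adaptation step (illustrative): minimize prediction entropy on
# unlabeled target data plus the KL term anchoring the perturbation.
layer = PerturbedLinear(nn.Linear(256, 10))
opt = torch.optim.Adam([layer.log_sigma], lr=1e-3)
x = torch.randn(32, 256)                      # stand-in for target features
probs = layer(x).softmax(dim=-1)
entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1).mean()
loss = entropy + 1e-5 * layer.kl_to_prior()
loss.backward()
opt.step()
```

The shared log-scale is the design choice the abstract highlights: since only one scale is optimized per layer while the source means stay frozen, adaptation is efficient and the perturbed model cannot drift arbitrarily far from the source solution.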