Unsupervised domain adaptation (UDA) aims to improve prediction performance in a target domain under a distribution shift from the source domain. The key principle of UDA is to minimize the divergence between the source and target domains. Following this principle, many methods employ a domain discriminator to match feature distributions. Some recent methods evaluate the discrepancy between two predictions on target samples to detect those that deviate from the source distribution. However, their performance is limited because they either match only the marginal distributions or measure the divergence conservatively. In this paper, we present a novel UDA method that learns domain-invariant features that minimize the domain divergence. We propose model uncertainty as a measure of the domain divergence. Our UDA method based on model uncertainty (MUDA) adopts a Bayesian framework and provides an efficient way to evaluate model uncertainty via Monte Carlo dropout sampling. Empirical results on image recognition tasks show that our method outperforms existing state-of-the-art methods. We also extend MUDA to multi-source domain adaptation problems.
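To make the Monte Carlo dropout idea concrete, the following is a minimal sketch of how model uncertainty can be estimated by keeping dropout active at inference time and aggregating several stochastic forward passes. The toy network, layer sizes, dropout rate, and number of samples here are illustrative assumptions, not the paper's actual architecture or training setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer classifier; weights are random for illustration only.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 3))

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def stochastic_forward(x, p_drop=0.5):
    """One forward pass with dropout left ON (the core of MC dropout)."""
    h = np.maximum(x @ W1, 0.0)              # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop      # random dropout mask at test time
    h = h * mask / (1.0 - p_drop)            # inverted-dropout scaling
    return softmax(h @ W2)

def mc_dropout_uncertainty(x, T=200):
    """Run T stochastic passes; return the predictive mean and the
    per-class predictive variance, used as an uncertainty estimate."""
    preds = np.stack([stochastic_forward(x) for _ in range(T)])
    return preds.mean(axis=0), preds.var(axis=0)

x = rng.normal(size=(1, 4))                  # one toy input sample
mean, var = mc_dropout_uncertainty(x)
```

In a UDA setting along the lines described above, such a variance estimate computed on target samples could serve as a proxy for how far they lie from the source distribution; high-uncertainty samples indicate a larger domain divergence.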