Domain generalization is challenging due to domain shift and the uncertainty caused by the inaccessibility of target domain data. In this paper, we address both challenges with a probabilistic framework based on variational Bayesian inference, which incorporates uncertainty into the neural network weights. We couple domain invariance with variational Bayesian inference in a probabilistic formulation, which enables us to explore domain-invariant learning in a principled way. Specifically, we derive domain-invariant representations and classifiers, which are jointly established in a two-layer Bayesian neural network. We empirically demonstrate the effectiveness of our proposal on four widely used cross-domain visual recognition benchmarks. Ablation studies validate the synergistic benefits of our Bayesian treatment when jointly learning domain-invariant representations and classifiers for domain generalization. Further, our method consistently delivers state-of-the-art mean accuracy on all benchmarks.
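To make the weight-uncertainty idea concrete, the following is a minimal PyTorch sketch of a two-layer network whose weights carry a factorized Gaussian variational posterior, trained by sampling weights with the reparameterization trick and penalizing the KL divergence to a standard normal prior. All names, dimensions, and the KL weight here are illustrative assumptions, not the paper's implementation, and the domain-invariance terms of the actual method are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesianLinear(nn.Module):
    """Linear layer with a factorized Gaussian posterior q(w) = N(mu, softplus(rho)^2).

    Illustrative sketch only; names and hyperparameters are assumptions.
    """
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight_mu = nn.Parameter(torch.zeros(out_features, in_features))
        self.weight_rho = nn.Parameter(torch.full((out_features, in_features), -5.0))
        self.bias_mu = nn.Parameter(torch.zeros(out_features))
        self.bias_rho = nn.Parameter(torch.full((out_features,), -5.0))

    def forward(self, x):
        # Reparameterization trick: draw a fresh weight sample from q(w) per forward pass.
        w_sigma = F.softplus(self.weight_rho)
        b_sigma = F.softplus(self.bias_rho)
        w = self.weight_mu + w_sigma * torch.randn_like(w_sigma)
        b = self.bias_mu + b_sigma * torch.randn_like(b_sigma)
        return F.linear(x, w, b)

    def kl_to_standard_normal(self):
        # KL( q(w) || N(0, I) ): the complexity term of the variational objective.
        def kl(mu, sigma):
            return (sigma.pow(2) + mu.pow(2) - 1).mul(0.5).sum() - sigma.log().sum()
        return kl(self.weight_mu, F.softplus(self.weight_rho)) + \
               kl(self.bias_mu, F.softplus(self.bias_rho))


# Two stochastic layers: a Bayesian representation layer followed by a Bayesian classifier.
feature_dim, hidden_dim, num_classes = 512, 256, 7
representation = BayesianLinear(feature_dim, hidden_dim)
classifier = BayesianLinear(hidden_dim, num_classes)

features = torch.randn(8, feature_dim)  # e.g., backbone features of a batch of 8 images
logits = classifier(torch.relu(representation(features)))
kl_term = representation.kl_to_standard_normal() + classifier.kl_to_standard_normal()
loss = F.cross_entropy(logits, torch.randint(0, num_classes, (8,))) + 1e-3 * kl_term
```

At test time one would average predictions over several weight samples, so the predictive distribution reflects the weight uncertainty rather than a single point estimate.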