The problem of generalizing deep neural networks from multiple source domains to a target one is studied under two settings: when unlabeled target data is available, it is a multi-source unsupervised domain adaptation (UDA) problem; otherwise, it is a domain generalization (DG) problem. We propose a unified framework termed domain adaptive ensemble learning (DAEL) to address both problems. A DAEL model is composed of a CNN feature extractor shared across domains and multiple classifier heads, each trained to specialize in a particular source domain. Each such classifier is an expert for its own domain and a non-expert for the others. DAEL aims to learn these experts collaboratively so that, when forming an ensemble, they can leverage complementary information from each other to be more effective on an unseen target domain. To this end, each source domain is used in turn as a pseudo-target domain, with its own expert providing a supervisory signal to the ensemble of non-experts learned from the other sources. For unlabeled target data under the UDA setting, where no real expert exists, DAEL uses pseudo-labels to supervise the ensemble learning. Extensive experiments on three multi-source UDA datasets and two DG datasets show that DAEL improves the state of the art on both problems, often by significant margins. The code is released at \url{https://github.com/KaiyangZhou/Dassl.pytorch}.
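The sketch below illustrates the training idea described above in PyTorch. It is not the authors' reference implementation (see the released code for that); it assumes a shared backbone, one classifier head per source domain, a cross-entropy loss for each expert on its own labeled domain, and a simple consistency loss through which the expert of the current pseudo-target domain supervises the averaged prediction of the non-expert heads. All class and function names here (`DAELSketch`, `dael_loss`) are hypothetical.

```python
# Minimal sketch of the DAEL training objective, under the assumptions stated above.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DAELSketch(nn.Module):
    def __init__(self, feature_dim: int, num_classes: int, num_sources: int):
        super().__init__()
        # A shared CNN backbone would go here; a lazy linear layer stands in for it.
        self.backbone = nn.Sequential(nn.Flatten(), nn.LazyLinear(feature_dim), nn.ReLU())
        # One classifier head (domain expert) per source domain.
        self.heads = nn.ModuleList(
            [nn.Linear(feature_dim, num_classes) for _ in range(num_sources)]
        )

    def forward(self, x: torch.Tensor, head_idx: int) -> torch.Tensor:
        return self.heads[head_idx](self.backbone(x))


def dael_loss(model: DAELSketch, batches, num_sources: int) -> torch.Tensor:
    """batches[i] = (images, labels) drawn from source domain i."""
    loss = 0.0
    for i in range(num_sources):
        x, y = batches[i]
        # 1) Expert loss: head i is trained on its own labeled domain.
        expert_logits = model(x, i)
        loss = loss + F.cross_entropy(expert_logits, y)

        # 2) Collaborative loss: domain i acts as a pseudo-target domain; the
        #    averaged prediction of the non-expert heads is pushed towards the
        #    expert's (detached) probability estimate.
        with torch.no_grad():
            expert_prob = F.softmax(expert_logits, dim=1)
        non_expert_probs = torch.stack(
            [F.softmax(model(x, j), dim=1) for j in range(num_sources) if j != i]
        ).mean(dim=0)
        loss = loss + F.mse_loss(non_expert_probs, expert_prob)
    return loss / num_sources
```

Under the UDA setting, an unlabeled target batch would be handled analogously, with a pseudo-label derived from the ensemble's prediction standing in for the missing real expert.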