While Unsupervised Domain Adaptation (UDA), i.e., the setting where labeled data are available only from source domains, has been actively studied in recent years, most algorithms and theoretical results focus on Single-source Unsupervised Domain Adaptation (SUDA). However, in practical scenarios, labeled data can typically be collected from multiple diverse sources, which may differ not only from the target domain but also from each other. Thus, domain adapters from multiple sources should not be modeled in the same way. Recent deep-learning-based Multi-source Unsupervised Domain Adaptation (MUDA) algorithms focus on extracting a common domain-invariant representation for all domains by aligning the distributions of all pairs of source and target domains in a common feature space. However, it is often very hard to extract the same domain-invariant representation for all domains in MUDA. In addition, these methods match distributions without considering the domain-specific decision boundaries between classes. To solve these problems, we propose a new two-stage alignment framework for MUDA that not only aligns the distribution of each pair of source and target domains in its own domain-specific feature space, but also aligns the outputs of the classifiers by utilizing the domain-specific decision boundaries. Extensive experiments demonstrate that our method achieves remarkable results on popular benchmark datasets for image classification.
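To make the two alignment stages concrete, the following is a minimal PyTorch-style sketch, not the authors' implementation: it assumes a shared backbone, one domain-specific feature extractor and classifier per source domain, a simple linear-kernel MMD as a stand-in discrepancy measure, and toy input/feature dimensions. All names (MultiSourceAdapter, mmd_loss, feat_dim, etc.) are illustrative placeholders rather than identifiers from the paper.

```python
# Hedged sketch of the two-stage alignment idea described above (not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


def mmd_loss(x, y):
    """Linear-kernel MMD between two feature batches; a stand-in for whatever
    distribution-discrepancy measure the full method actually uses."""
    return (x.mean(dim=0) - y.mean(dim=0)).pow(2).sum()


class MultiSourceAdapter(nn.Module):
    def __init__(self, n_sources, in_dim=784, feat_dim=256, n_classes=10):
        super().__init__()
        # Common feature extractor shared by all domains (toy MLP for illustration).
        self.backbone = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        # One domain-specific feature extractor and classifier per source domain,
        # so each source-target pair is aligned in its own feature space.
        self.specific = nn.ModuleList(
            [nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU()) for _ in range(n_sources)]
        )
        self.classifiers = nn.ModuleList(
            [nn.Linear(feat_dim, n_classes) for _ in range(n_sources)]
        )

    def forward(self, source_batches, source_labels, target_batch):
        cls_loss, align_loss = 0.0, 0.0
        target_preds = []
        for j, (xs, ys) in enumerate(zip(source_batches, source_labels)):
            fs = self.specific[j](self.backbone(xs))
            ft = self.specific[j](self.backbone(target_batch))
            # Stage 1: align source j and the target in the j-th specific feature space.
            align_loss = align_loss + mmd_loss(fs, ft)
            # Supervised classification loss on labeled source data.
            cls_loss = cls_loss + F.cross_entropy(self.classifiers[j](fs), ys)
            target_preds.append(F.softmax(self.classifiers[j](ft), dim=1))
        # Stage 2: align the outputs of the domain-specific classifiers on target data,
        # penalizing disagreement between every pair of classifiers.
        disc_loss = 0.0
        for j in range(len(target_preds)):
            for k in range(j + 1, len(target_preds)):
                disc_loss = disc_loss + (target_preds[j] - target_preds[k]).abs().mean()
        return cls_loss, align_loss, disc_loss
```

In practice the three returned terms would be combined into a single training objective with trade-off weights, and the toy MLP and linear-kernel MMD would be replaced by the backbone and discrepancy measure specified in the full paper.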