Partial domain adaptation, which assumes that the unknown target label space is a subset of the source label space, has attracted much attention in computer vision. Despite recent progress, existing methods often suffer from three key problems: negative transfer, lack of discriminability, and lack of domain invariance in the latent space. To alleviate these issues, we develop a novel "Select, Label, and Mix" (SLM) framework that aims to learn discriminative, invariant feature representations for partial domain adaptation. First, we present an efficient "select" module that automatically filters out outlier source samples to avoid negative transfer while aligning distributions across both domains. Second, the "label" module iteratively trains the classifier using both the labeled source domain data and the generated pseudo-labels for the target domain to enhance the discriminability of the latent space. Finally, the "mix" module applies domain mixup regularization jointly with the other two modules to explore more intrinsic structures across domains, leading to a domain-invariant latent space for partial domain adaptation. Extensive experiments on several benchmark datasets for partial domain adaptation demonstrate the superiority of our proposed framework over state-of-the-art methods.
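To make the roles of the three modules concrete, here is a minimal NumPy sketch of the underlying ideas. It is not the paper's implementation: the helper names (`select_source_samples`, `pseudo_label`, `domain_mixup`), the confidence threshold, and the use of class-probability mass to detect outlier source samples are illustrative assumptions, not details taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def select_source_samples(src_probs, shared_class_ids, threshold=0.5):
    """Toy stand-in for the "select" step: keep source samples whose
    predicted probability mass on the (estimated) shared classes is high,
    filtering out likely outlier-class samples to reduce negative transfer."""
    shared_mass = src_probs[:, shared_class_ids].sum(axis=1)
    return shared_mass >= threshold

def pseudo_label(tgt_probs, tau=0.9):
    """Toy stand-in for the "label" step: assign pseudo-labels only to
    target samples the classifier is confident about (max prob >= tau)."""
    conf = tgt_probs.max(axis=1)
    labels = tgt_probs.argmax(axis=1)
    mask = conf >= tau
    return labels[mask], mask

def domain_mixup(x_src, x_tgt, alpha=0.2):
    """Toy stand-in for the "mix" step: a mixup-style convex combination
    of source and target features, with lam ~ Beta(alpha, alpha) shared
    across the batch for brevity."""
    lam = rng.beta(alpha, alpha)
    n = min(len(x_src), len(x_tgt))
    return lam * x_src[:n] + (1.0 - lam) * x_tgt[:n], lam

# Tiny synthetic example: 8 source / 8 target samples, 16-dim features,
# 10 source classes of which the first 5 are assumed shared with the target.
x_src = rng.normal(size=(8, 16))
x_tgt = rng.normal(size=(8, 16))
src_probs = rng.dirichlet(np.ones(10), size=8)   # fake classifier outputs
tgt_probs = rng.dirichlet(np.ones(5), size=8)    # fake classifier outputs
shared_class_ids = [0, 1, 2, 3, 4]

keep = select_source_samples(src_probs, shared_class_ids)
labels, mask = pseudo_label(tgt_probs)
x_mix, lam = domain_mixup(x_src[keep], x_tgt)
print(f"kept {keep.sum()} source samples, "
      f"pseudo-labeled {mask.sum()} target samples, lam={lam:.2f}")
```

In the actual framework these steps would operate on learned feature representations and be trained jointly; the sketch only illustrates how selection suppresses outlier classes, pseudo-labeling adds target supervision, and mixup interpolates across domains.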