Domain shift is a fundamental problem in visual recognition that typically arises when the source and target data follow different distributions. Existing domain adaptation approaches that tackle this problem work in the closed-set setting, under the assumption that the source and target data share exactly the same classes of objects. In this paper, we tackle the more realistic problem of open-set domain shift, where the target data contains additional classes that are not present in the source data. More specifically, we introduce an end-to-end Progressive Graph Learning (PGL) framework in which a graph neural network with episodic training is integrated to suppress the underlying conditional shift, and adversarial learning is adopted to close the gap between the source and target distributions. Compared to existing open-set adaptation approaches, our approach is guaranteed to achieve a tighter upper bound on the target error. Extensive experiments on three standard open-set benchmarks show that our approach significantly outperforms the state of the art in open-set domain adaptation.
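The abstract names two mechanisms: a graph neural network trained episodically to refine features, and adversarial learning to align the source and target distributions. As a purely illustrative aid, the following minimal PyTorch sketch shows one way such pieces can be wired together; it is not the authors' PGL implementation. Every name (`PGLSketch`, `EpisodicGraphLayer`, `GradReverse`), the layer sizes, the single graph layer, and the gradient-reversal weight `lam` are assumptions made for exposition.

```python
# Illustrative sketch only -- not the released PGL code. Under assumed shapes
# and hyper-parameters, it combines (1) a graph layer that refines the features
# of one episode (mini-batch) via a similarity graph, and (2) adversarial
# source/target alignment through a gradient-reversal layer.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses and scales gradients on backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None


class EpisodicGraphLayer(nn.Module):
    """One message-passing step over a dense similarity graph built from
    the features of a single episode."""
    def __init__(self, dim):
        super().__init__()
        self.update = nn.Linear(2 * dim, dim)

    def forward(self, feats):                      # feats: (N, dim)
        sim = feats @ feats.t()                    # pairwise affinities
        adj = F.softmax(sim, dim=-1)               # row-normalized graph
        agg = adj @ feats                          # neighbor aggregation
        return F.relu(self.update(torch.cat([feats, agg], dim=-1)))


class PGLSketch(nn.Module):
    def __init__(self, in_dim=512, dim=256, num_known=10, lam=0.1):
        super().__init__()
        self.encoder = nn.Linear(in_dim, dim)      # stand-in for a backbone
        self.gnn = EpisodicGraphLayer(dim)
        # Assumed design: one extra logit for the "unknown" target-only classes.
        self.classifier = nn.Linear(dim, num_known + 1)
        self.discriminator = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))
        self.lam = lam

    def forward(self, x_src, x_tgt):
        # Build the episode graph over source and target samples jointly.
        f = self.gnn(self.encoder(torch.cat([x_src, x_tgt], dim=0)))
        f_src = f[: x_src.size(0)]
        logits_src = self.classifier(f_src)        # supervised on source only
        dom_logits = self.discriminator(GradReverse.apply(f, self.lam))
        return logits_src, dom_logits


# Minimal usage with random tensors standing in for episode features.
model = PGLSketch()
x_src, x_tgt = torch.randn(8, 512), torch.randn(8, 512)
logits_src, dom_logits = model(x_src, x_tgt)
dom_labels = torch.cat([torch.zeros(8, 1), torch.ones(8, 1)])
loss = (F.cross_entropy(logits_src, torch.randint(0, 10, (8,)))
        + F.binary_cross_entropy_with_logits(dom_logits, dom_labels))
loss.backward()
```

In this sketch the classification loss is computed only on labeled source features, while the gradient-reversal layer pushes the shared encoder toward domain-invariant features for both domains, mirroring the division of labor the abstract describes between episodic graph learning and adversarial distribution alignment.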