Since human-labeled samples are unavailable for the target set, unsupervised person re-identification (Re-ID) has attracted much attention in recent years by additionally exploiting a labeled source set. However, due to differences in camera styles, illumination and backgrounds, there exists a large gap between the source domain and the target domain, which poses a great challenge for cross-domain matching. To tackle this problem, in this paper we propose a novel method named Dual-stream Reciprocal Disentanglement Learning (DRDL), which is quite efficient in learning domain-invariant features. In DRDL, two encoders are first constructed for id-related and id-unrelated feature extraction, which are respectively measured by their associated classifiers. Furthermore, through an adversarial learning strategy, the two streams reciprocally and positively affect each other, so that the id-related features and id-unrelated features are completely disentangled from a given image, allowing the encoder to be powerful enough to obtain discriminative but domain-invariant features. In contrast to existing approaches, our proposed method is free from image generation, which not only reduces the computational complexity remarkably, but also removes redundant information from the id-related features. Extensive experiments substantiate the superiority of our proposed method compared with state-of-the-art approaches. The source code has been released at https://github.com/lhf12278/DRDL.
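The dual-stream idea described above can be illustrated with a minimal numpy sketch. This is only a toy stand-in, not the paper's implementation: the linear maps `W_rel`/`W_unrel` replace the CNN encoders, `C_rel`/`C_unrel` are the per-stream identity classifiers, and the adversarial objective is approximated by a confusion loss that pushes the id-unrelated stream's classifier output toward the uniform distribution. All variable names and sizes here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, h, num_ids = 8, 32, 16, 4  # batch size, input dim, feature dim, #identities

# Toy batch of image features with identity labels (stand-in for real images).
x = rng.normal(size=(n, d))
y = rng.integers(0, num_ids, size=n)

# Two streams: one extracts id-related features, the other id-unrelated ones.
W_rel = rng.normal(scale=0.1, size=(d, h))
W_unrel = rng.normal(scale=0.1, size=(d, h))
# Each stream is "measured" by its own identity classifier.
C_rel = rng.normal(scale=0.1, size=(h, num_ids))
C_unrel = rng.normal(scale=0.1, size=(h, num_ids))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

f_rel = np.tanh(x @ W_rel)      # id-related stream
f_unrel = np.tanh(x @ W_unrel)  # id-unrelated stream

p_rel = softmax(f_rel @ C_rel)
p_unrel = softmax(f_unrel @ C_unrel)

# Supervised identity loss: the id-related stream must predict identities.
loss_id = -np.log(p_rel[np.arange(n), y] + 1e-12).mean()

# Adversarial confusion loss: the id-unrelated stream should carry no
# identity information, so its classifier output is driven toward the
# uniform distribution (cross-entropy against a uniform target).
uniform = np.full_like(p_unrel, 1.0 / num_ids)
loss_conf = -(uniform * np.log(p_unrel + 1e-12)).sum(axis=1).mean()

total_loss = loss_id + loss_conf
```

In the real method the two losses would be minimized jointly by gradient descent, so that the id-related encoder becomes discriminative while the id-unrelated encoder absorbs the domain-specific factors (camera style, illumination, background); no image generation is involved.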