Person re-identification (Re-ID) models usually show limited performance when trained on one dataset and tested on another, due to inter-dataset bias (e.g., completely different identities and backgrounds) and intra-dataset difference (e.g., camera variance). To address this issue, given a labelled source training set and an unlabelled target training set, we propose an unsupervised transfer learning method characterized by 1) simultaneously bridging the inter-dataset bias and the intra-dataset difference via a proposed ImitateModel; 2) regarding the unsupervised person Re-ID problem as a semi-supervised learning problem, formulated with a dual classification loss to learn a discriminative representation across domains; 3) exploiting the underlying commonality across different domains in the class-style space to improve the generalization ability of Re-ID models. Extensive experiments are conducted on two widely employed benchmarks, Market-1501 and DukeMTMC-reID, and the results demonstrate that the proposed method achieves competitive performance against other state-of-the-art unsupervised Re-ID approaches.