Most existing person re-identification (re-id) methods require supervised model learning from a separate large set of pairwise labelled training data for every single camera pair. This significantly limits their scalability and usability in real-world large-scale deployments, which require performing re-id across many camera views. To address this scalability problem, we develop a novel deep learning method for transferring the labelled information of an existing dataset to a new unseen (unlabelled) target domain for person re-id without any supervised learning in the target domain. Specifically, we introduce a Transferable Joint Attribute-Identity Deep Learning (TJ-AIDL) model for simultaneously learning an attribute-semantic and identity-discriminative feature representation space transferable to any new (unseen) target domain for re-id tasks without the need for collecting new labelled training data from the target domain (i.e. unsupervised learning in the target domain). Extensive comparative evaluations validate the superiority of this new TJ-AIDL model for unsupervised person re-id over a wide range of state-of-the-art methods on four challenging benchmarks including VIPeR, PRID, Market-1501, and DukeMTMC-ReID.
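To illustrate the joint attribute-identity idea described above, the following is a minimal sketch (not the authors' released code) of a two-branch network supervised only on the labelled source domain: one branch learns identity-discriminative features via identity classification, the other learns attribute-semantic features via multi-label attribute prediction. The backbone choice (ResNet-50), shared encoder, loss weighting, and the example class counts are assumptions for illustration; the full TJ-AIDL model additionally transfers information between the two branches and adapts to the unlabelled target domain, which this sketch omits.

```python
# Hypothetical sketch of joint attribute-identity learning on the source domain.
import torch
import torch.nn as nn
import torchvision.models as models


class AttributeIdentityNet(nn.Module):
    """Two supervised branches over a shared image encoder (assumed design)."""

    def __init__(self, num_ids: int, num_attrs: int, feat_dim: int = 2048):
        super().__init__()
        backbone = models.resnet50(weights=None)            # assumed backbone choice
        self.encoder = nn.Sequential(*list(backbone.children())[:-1])
        self.id_head = nn.Linear(feat_dim, num_ids)         # identity-discriminative branch
        self.attr_head = nn.Linear(feat_dim, num_attrs)     # attribute-semantic branch

    def forward(self, x):
        f = self.encoder(x).flatten(1)                      # shared image feature
        return self.id_head(f), self.attr_head(f), f


def source_domain_loss(id_logits, attr_logits, id_labels, attr_labels, lam=1.0):
    # Identity: multi-class cross-entropy; attributes: multi-label BCE (assumed weighting lam).
    id_loss = nn.functional.cross_entropy(id_logits, id_labels)
    attr_loss = nn.functional.binary_cross_entropy_with_logits(attr_logits, attr_labels)
    return id_loss + lam * attr_loss


if __name__ == "__main__":
    # Example sizes only (e.g. Market-1501 has 751 training identities, 27 annotated attributes).
    model = AttributeIdentityNet(num_ids=751, num_attrs=27)
    images = torch.randn(4, 3, 256, 128)                    # dummy person crops
    id_labels = torch.randint(0, 751, (4,))
    attr_labels = torch.randint(0, 2, (4, 27)).float()
    id_logits, attr_logits, feats = model(images)
    print(source_domain_loss(id_logits, attr_logits, id_labels, attr_labels).item())
```

At deployment time, the learned feature `f` would be used directly for re-id matching in the unseen target domain, since no target labels are assumed to be available.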