In this paper, we propose a graph correspondence transfer (GCT) approach for person re-identification. Unlike existing methods, the GCT model formulates person re-identification as an off-line graph matching and on-line correspondence transfer problem. Specifically, during training, the GCT model learns off-line a set of correspondence templates from positive training pairs with various pose-pair configurations via patch-wise graph matching. During testing, for each pair of test samples, we select a few training pairs with the most similar pose-pair configurations as references, and transfer the correspondences of these references to the test pair for feature distance computation. The matching score is obtained by aggregating the distances from different references. For each probe image, the gallery image with the highest matching score is returned as the re-identification result. Compared to existing algorithms, GCT can handle the spatial misalignment caused by large variations in view angle and human pose, owing to the benefits of patch-wise graph matching. Extensive experiments on five benchmarks, including VIPeR, Road, PRID450S, 3DPES, and CUHK01, demonstrate the superior performance of the GCT model over other state-of-the-art methods.
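To make the on-line stage concrete, the following is a minimal sketch of how correspondence transfer and score aggregation could be implemented. All names here (gct_matching_score, the pose-descriptor vectors, the Euclidean pose-pair similarity, and mean aggregation) are illustrative assumptions for exposition, not the paper's exact formulation; the correspondences themselves are assumed to have been learned off-line via patch-wise graph matching as described above.

```python
import numpy as np

def gct_matching_score(probe_patches, gallery_patches,
                       probe_pose, gallery_pose,
                       references, k=10):
    """Score a probe/gallery pair by transferring patch correspondences
    from the k training pairs with the most similar pose-pair configurations.

    probe_patches / gallery_patches: (P, D) arrays of patch features.
    probe_pose / gallery_pose: pose-descriptor vectors for each image.
    references: list of (ref_pose_pair, correspondence) tuples, where
        ref_pose_pair is the concatenated pose descriptor of a positive
        training pair and correspondence is a dict mapping each probe-patch
        index to its matched gallery-patch index (learned off-line).
    """
    query_pose_pair = np.concatenate([probe_pose, gallery_pose])

    # Rank training pairs by pose-pair similarity
    # (Euclidean distance here, purely for simplicity).
    dists = [np.linalg.norm(query_pose_pair - rp) for rp, _ in references]
    nearest = np.argsort(dists)[:k]

    # Transfer each reference correspondence to the test pair and
    # accumulate patch-wise feature distances under that correspondence.
    scores = []
    for idx in nearest:
        _, corr = references[idx]
        d = sum(np.linalg.norm(probe_patches[i] - gallery_patches[j])
                for i, j in corr.items())
        scores.append(-d / len(corr))  # higher score = smaller distance

    # Aggregate the scores from different references (mean as one choice).
    return float(np.mean(scores))
```

Under this sketch, ranking a probe against a gallery reduces to calling gct_matching_score once per gallery image and returning the image with the highest score.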