Video-based person re-identification (Re-ID) aims to match person images across video sequences captured by disjoint surveillance cameras. Traditional video-based person Re-ID methods focus on exploiting appearance information and are therefore vulnerable to illumination changes, scene noise, camera parameters, and especially clothing/carrying variations. Gait recognition provides an implicit biometric solution that alleviates these issues; nonetheless, it suffers severe performance degradation as the camera view varies. To address these problems, in this paper we propose a framework that utilizes the sequence masks (SeqMasks) in a video to integrate appearance information and gait modeling in a close fashion. Specifically, to sufficiently validate the effectiveness of our method, we build a novel dataset named MaskMARS based on MARS. Comprehensive experiments on our proposed large in-the-wild video Re-ID dataset MaskMARS demonstrate the strong performance and generalization capability of our approach. Validation on the gait recognition benchmark dataset CASIA-B further demonstrates the capability of our hybrid model.