Person re-identification (re-ID) concerns the matching of subject images across different camera views in a multi-camera surveillance system. One of the major challenges in person re-ID is pose variation across the camera network, which significantly affects the appearance of a person. Existing development data lack adequate pose variations to carry out effective training of person re-ID systems. To address this issue, in this paper we propose an end-to-end pose-driven attention-guided generative adversarial network to generate multiple poses of a person. We propose to attentively learn and transfer the subject pose through an attention mechanism. A semantic-consistency loss is proposed to preserve the semantic information of the person during pose transfer. To ensure that fine image details remain realistic after pose translation, an appearance discriminator is used, while a pose discriminator ensures that the pose of the transferred images matches the target pose exactly. We show that by incorporating the proposed approach into a person re-identification framework, realistic pose-transferred images and state-of-the-art re-identification results can be achieved.
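To make the composition of the training objective concrete, the sketch below shows how the two adversarial terms (from the appearance and pose discriminators) and the semantic-consistency term could be combined into a single generator loss. This is a minimal PyTorch-style illustration under assumed interfaces, not the paper's released implementation; the module names (G, D_app, D_pose, semantic_encoder) and the weight lambda_sem are illustrative assumptions.

```python
# Minimal sketch of the generator objective described in the abstract.
# Assumptions (not from the paper's code): G(src_img, target_pose) returns a
# pose-transferred image; D_app scores realism of fine image details;
# D_pose scores agreement between an image and a target pose; semantic_encoder
# extracts person-semantic features used for the consistency term.
import torch
import torch.nn.functional as F

def generator_loss(G, D_app, D_pose, semantic_encoder,
                   src_img, target_pose, lambda_sem=1.0):
    fake = G(src_img, target_pose)  # pose-transferred image

    # Adversarial term from the appearance discriminator (realistic details).
    logits_app = D_app(fake)
    adv_app = F.binary_cross_entropy_with_logits(
        logits_app, torch.ones_like(logits_app))

    # Adversarial term from the pose discriminator (image must match target pose).
    logits_pose = D_pose(fake, target_pose)
    adv_pose = F.binary_cross_entropy_with_logits(
        logits_pose, torch.ones_like(logits_pose))

    # Semantic-consistency loss: semantics of the source person are preserved
    # in the transferred image.
    sem = F.l1_loss(semantic_encoder(fake), semantic_encoder(src_img))

    return adv_app + adv_pose + lambda_sem * sem
```

In this sketch the two discriminators are trained with the usual real/fake objectives in alternation with the generator; only the generator-side combination is shown, since that is where the semantic-consistency term enters.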