Occluded person re-identification (ReID) is a person retrieval task that aims to match occluded person images with holistic ones. For addressing occluded ReID, part-based methods have been shown to be beneficial, as they offer fine-grained information and are well suited to representing partially visible human bodies. However, training a part-based model is challenging for two reasons. First, the appearance of an individual body part is not as discriminative as global appearance (two distinct IDs might share the same local appearance), which means standard ReID training objectives based on identity labels are not suited to local feature learning. Second, ReID datasets do not provide human topographical annotations. In this work, we propose BPBreID, a body part-based ReID model that addresses the above issues. We first design two modules for predicting body part attention maps and producing body part-based features of the ReID target. We then propose GiLt, a novel training scheme for learning part-based representations that is robust to occlusions and non-discriminative local appearance. Extensive experiments on popular holistic and occluded datasets show the effectiveness of our proposed method, which outperforms state-of-the-art methods by 0.7% mAP and 5.6% rank-1 accuracy on the challenging Occluded-Duke dataset. Our code is available at https://github.com/VlSomers/bpbreid.
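To illustrate the core idea of attention-weighted, part-based feature pooling mentioned above, the following is a minimal numpy sketch. It assumes a spatial feature map from a backbone and soft per-part attention maps; the attention logits here are random stand-ins, whereas BPBreID predicts them with a learned module, so names and shapes are illustrative only.

```python
import numpy as np

def part_based_features(feature_map: np.ndarray, num_parts: int) -> np.ndarray:
    """Hypothetical sketch: pool one feature vector per body part.

    feature_map: (H, W, C) spatial features from a backbone.
    Returns an array of shape (num_parts, C), where each row is an
    attention-weighted average of the feature map for one part.
    """
    H, W, C = feature_map.shape
    # Stand-in attention logits; the actual model would predict these.
    rng = np.random.default_rng(0)
    logits = rng.normal(size=(num_parts, H, W))
    # Softmax over parts: each pixel's attention sums to 1 across parts,
    # softly assigning every spatial location to a body part.
    e = np.exp(logits - logits.max(axis=0, keepdims=True))
    attn = e / e.sum(axis=0, keepdims=True)               # (P, H, W)
    # Attention-weighted average pooling per part.
    pooled = np.einsum('phw,hwc->pc', attn, feature_map)  # (P, C)
    pooled /= attn.sum(axis=(1, 2))[:, None]              # normalize weights
    return pooled

# Usage: a dummy 8x4 feature map with 16 channels, pooled into 5 parts.
fm = np.ones((8, 4, 16))
parts = part_based_features(fm, num_parts=5)
```

Because the weighted average of a constant map is the constant itself, pooling an all-ones feature map yields all-ones part vectors, which is a quick sanity check on the normalization.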