Person re-identification (Person ReID) is a challenging task due to large variations in camera viewpoint, lighting, resolution, and human pose. Recently, with the advancement of deep learning, the performance of Person ReID has improved rapidly. Feature extraction and feature matching are two crucial components in the training and deployment stages of Person ReID, respectively. However, many existing Person ReID methods suffer from measure inconsistency between the training stage and the deployment stage, and they couple the magnitude and orientation information of feature vectors in the feature representation. Meanwhile, traditional triplet loss methods focus only on samples within a mini-batch and lack knowledge of the global feature distribution. To address these issues, we propose a novel homocentric hypersphere embedding scheme that decouples magnitude and orientation information for both feature and weight vectors, and we reformulate the classification loss and triplet loss into their angular versions, combining them into an angular discriminative loss. We evaluate the proposed method extensively on widely used Person ReID benchmarks, including Market1501, CUHK03, and DukeMTMC-ReID. Our method achieves leading performance on all three datasets.
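To make the core idea concrete, the following is a minimal PyTorch sketch of such an angular discriminative loss. It is an illustration under stated assumptions, not the paper's exact formulation: the module name `AngularDiscriminativeLoss` and the hyperparameters `s` (logit scale), `margin`, and `lam` (mixing weight) are hypothetical. Both features and classifier weights are L2-normalized onto the same hypersphere, the classification loss is computed from cosine logits, and the triplet loss operates on pairwise angles within the mini-batch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AngularDiscriminativeLoss(nn.Module):
    """Sketch of a combined angular classification + angular triplet loss.

    Features and classifier weights are L2-normalized so that both lie on
    the same (homocentric) hypersphere; all distances are then angular.
    The scale `s`, margin `margin`, and mixing weight `lam` are
    illustrative hyperparameters, not values from the paper.
    """

    def __init__(self, feat_dim, num_classes, s=14.0, margin=0.1, lam=1.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.s = s
        self.margin = margin
        self.lam = lam

    def forward(self, feats, labels):
        # Decouple magnitude from orientation: keep only directions.
        f = F.normalize(feats, dim=1)        # (B, D) unit feature vectors
        w = F.normalize(self.weight, dim=1)  # (C, D) unit class weights

        # Angular classification loss: scaled cosine logits + cross-entropy.
        logits = self.s * f @ w.t()          # (B, C)
        cls_loss = F.cross_entropy(logits, labels)

        # Angular triplet loss on pairwise angles within the mini-batch.
        cos = (f @ f.t()).clamp(-1 + 1e-7, 1 - 1e-7)
        theta = torch.acos(cos)              # pairwise angular distances
        same = labels.unsqueeze(0) == labels.unsqueeze(1)
        eye = torch.eye(len(labels), dtype=torch.bool, device=feats.device)
        # Hardest positive: largest angle among same-identity pairs.
        pos = theta.masked_fill(~same | eye, float('-inf')).max(dim=1).values
        # Hardest negative: smallest angle among different-identity pairs.
        neg = theta.masked_fill(same, float('inf')).min(dim=1).values
        tri_loss = F.relu(pos - neg + self.margin).mean()

        return cls_loss + self.lam * tri_loss
```

Because both loss terms operate purely on orientations, feature matching at deployment can use the same angular (cosine) distance on the hypersphere, which is how this kind of scheme removes the measure inconsistency between training and deployment described above.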