Mining the shared features of the same identity across different scenes, and the distinctive features of different identities within the same scene, are among the most significant challenges in person re-identification (ReID). The Online Instance Matching (OIM) loss and the Triplet loss are the main loss functions used for person ReID. Unfortunately, both have drawbacks: OIM loss treats all samples equally and places no emphasis on hard samples, while Triplet loss requires a complicated and fussy batch-construction process and converges slowly. To address these problems, we propose the Triplet Online Instance Matching (TOIM) loss function, which emphasizes hard samples and effectively improves the accuracy of person ReID. It combines the advantages of OIM loss and Triplet loss and simplifies batch construction, leading to faster convergence. It can also be trained online when handling the joint detection and identification task. To validate our loss function, we collected and annotated a large-scale benchmark dataset (UESTC-PR) based on images taken from surveillance cameras, which contains 499 identities and 60,437 images. We evaluated the proposed loss function on Duke, Market-1501, and UESTC-PR using ResNet-50, and the results show that it outperforms the baseline methods, including Softmax loss, OIM loss, and Triplet loss, by up to 21.7%.
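The abstract does not give the TOIM formula itself, but the hard-sample emphasis it borrows from Triplet loss is commonly realized as batch-hard mining: for each anchor, only the farthest same-identity sample and the closest different-identity sample contribute to the loss. The sketch below illustrates that mining step with NumPy on precomputed embeddings; the margin value and function name are assumptions for illustration, and the actual TOIM loss additionally matches samples against an OIM-style lookup table of identity features, which is omitted here.

```python
import numpy as np

def batch_hard_triplet_loss(embeddings, labels, margin=0.3):
    """Illustrative batch-hard triplet loss (not the paper's exact TOIM
    formulation): each anchor is paired with its hardest positive
    (farthest same-ID sample) and hardest negative (closest other-ID
    sample) within the batch."""
    # Pairwise Euclidean distances between all embeddings in the batch.
    diff = embeddings[:, None, :] - embeddings[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1) + 1e-12)
    same = labels[:, None] == labels[None, :]
    n = len(labels)
    losses = []
    for i in range(n):
        pos = dist[i][same[i] & (np.arange(n) != i)]  # same ID, not self
        neg = dist[i][~same[i]]                       # different ID
        if pos.size == 0 or neg.size == 0:
            continue  # anchor has no valid positive or negative in batch
        # Hinge on the hardest positive/negative pair for this anchor.
        losses.append(max(0.0, pos.max() - neg.min() + margin))
    return float(np.mean(losses))
```

With well-separated identity clusters the hinge is inactive and the loss is zero; when embeddings of different identities collapse together, the loss approaches the margin, which is what drives the hard samples apart during training.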