This paper addresses the cross-modality visible-infrared person re-identification (VI Re-ID) task, which aims to match person images between the visible and infrared modalities. To reduce the discrepancy between features of different modalities, most existing works rely on constraints based on the Euclidean metric. Since a Euclidean distance metric cannot effectively measure the angles between embedded vectors, these methods fail to learn an angularly discriminative feature embedding. Because the most important factor in embedding-based classification is whether the feature space is angularly discriminative, we propose a new loss function called the Enumerate Angular Triplet (EAT) loss. In addition, motivated by knowledge distillation, we present a new Cross-Modality Knowledge Distillation (CMKD) loss to narrow the gap between the features of different modalities before embedding. Experimental results on the RegDB and SYSU-MM01 datasets show that the proposed method outperforms state-of-the-art methods.
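The abstract does not give the exact formulations of the two losses. As a rough, hypothetical sketch of the two ideas only, the PyTorch snippet below implements (i) a triplet loss with an angular rather than Euclidean margin and (ii) a distillation-style alignment term between visible and infrared features. The function names, margin, and temperature are illustrative assumptions, not the paper's definitions.

```python
import torch
import torch.nn.functional as F

def angular_triplet_loss(anchor, positive, negative, margin=0.3):
    """Sketch of an angular triplet loss: penalizes the case where the
    anchor-positive angle is not smaller than the anchor-negative angle
    by at least `margin` (radians). Not the paper's exact EAT loss."""
    # L2-normalize so dot products equal the cosines of inter-vector angles
    a = F.normalize(anchor, dim=1)
    p = F.normalize(positive, dim=1)
    n = F.normalize(negative, dim=1)
    # clamp cosines away from +/-1 for numerical stability of acos
    ang_ap = torch.acos((a * p).sum(dim=1).clamp(-1 + 1e-7, 1 - 1e-7))
    ang_an = torch.acos((a * n).sum(dim=1).clamp(-1 + 1e-7, 1 - 1e-7))
    return F.relu(ang_ap - ang_an + margin).mean()

def cmkd_loss(feat_visible, feat_infrared, temperature=4.0):
    """Sketch of a cross-modality distillation term in the spirit of
    knowledge distillation: KL divergence between temperature-softened
    distributions of the two modalities' features. Not the paper's
    exact CMKD loss."""
    p_v = F.log_softmax(feat_visible / temperature, dim=1)
    p_i = F.softmax(feat_infrared / temperature, dim=1)
    return F.kl_div(p_v, p_i, reduction="batchmean") * temperature ** 2
```

In this reading, the angular term shapes the embedding space that the Euclidean triplet loss cannot, while the distillation term pulls the two modality-specific feature distributions together before the shared embedding; both choices are inferred from the abstract's description rather than stated there.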