Vehicle re-identification (re-ID) focuses on matching images of the same vehicle across different cameras. It is fundamentally challenging because the differences between vehicles are often subtle. While several studies incorporate spatial attention mechanisms into vehicle re-ID, they either require expensive keypoint labels or suffer from noisy attention masks when trained without such labels. In this work, we propose a dedicated Semantics-guided Part Attention Network (SPAN) that robustly predicts part attention masks for different views of a vehicle, given only image-level semantic labels during training. With the help of these part attention masks, we can extract discriminative features from each part separately. We then introduce a Co-occurrence Part-attentive Distance Metric (CPDM), which places greater emphasis on co-occurring vehicle parts when evaluating the feature distance between two images. Extensive experiments validate the effectiveness of the proposed method and show that our framework outperforms state-of-the-art approaches.
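The co-occurrence weighting behind CPDM can be illustrated with a minimal sketch. Here we assume (as one plausible instantiation, not the paper's exact formulation) that each image yields one feature vector per vehicle part along with a visibility score for that part (e.g., the attention-mask area), and that the final distance is a sum of per-part Euclidean distances weighted by how visible each part is in both images:

```python
import numpy as np

def cpdm_distance(feats_x, feats_y, vis_x, vis_y):
    """Co-occurrence part-attentive distance (illustrative sketch).

    feats_x, feats_y: (P, D) arrays of per-part features for two images.
    vis_x, vis_y:     (P,) visibility scores per part (e.g. mask areas).
    Parts visible in both images get larger weight; a part invisible
    in both images contributes nothing to the distance.
    """
    # Per-part Euclidean distance between the two images.
    d = np.linalg.norm(feats_x - feats_y, axis=1)
    # Emphasize parts that co-occur: weight by combined visibility.
    w = vis_x + vis_y
    w = w / w.sum()
    return float(np.dot(w, d))

# Toy usage: three parts (e.g. front, rear, side), 2-D features.
fx = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
fy = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(cpdm_distance(fx, fy, np.ones(3), np.ones(3)))
```

With uniform visibility, only the first part differs (distance 1), so the weighted distance is 1/3; lowering that part's visibility in either image would shrink its contribution accordingly.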