Learning cross-view consistent feature representations is key to accurate vehicle re-identification (ReID), since the visual appearance of a vehicle changes significantly across viewpoints. To this end, most existing approaches resort to supervised cross-view learning using extensive extra viewpoint annotations, which, however, is difficult to deploy in real applications due to the expensive labelling cost and the continuous viewpoint variation that makes it hard to define discrete viewpoint labels. In this study, we present a pluggable Weakly-supervised Cross-View Learning (WCVL) module for vehicle ReID. By hallucinating cross-view samples as the hardest positive counterparts in the feature domain, we can learn a consistent feature representation by minimizing the cross-view feature distance based on vehicle IDs only, without using any viewpoint annotation. More importantly, the proposed method can be seamlessly plugged into most existing vehicle ReID baselines for cross-view learning without re-training those baselines. To demonstrate its efficacy, we plug the proposed method into a range of off-the-shelf baselines and obtain significant performance improvements on four public benchmark datasets, i.e., VeRi-776, VehicleID, VRIC and VRAI.
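The core objective above — treating a same-ID sample that is farthest in feature space as the hallucinated cross-view counterpart and pulling it toward the anchor, with no viewpoint labels — can be sketched as a batch-hard positive loss. This is a minimal NumPy illustration of that idea, not the paper's implementation; the function name, shapes, and the use of plain Euclidean distance are assumptions for the sketch.

```python
import numpy as np

def wcvl_hardest_positive_loss(features, ids):
    """Illustrative weakly-supervised cross-view objective (a sketch).

    For each anchor, the farthest same-ID sample in the batch is taken
    as its hallucinated cross-view counterpart (hardest positive), and
    the mean of those distances is the loss -- only vehicle IDs are
    needed, no viewpoint annotations.
    """
    # pairwise Euclidean distances, shape (N, N)
    diff = features[:, None, :] - features[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)

    # positive mask: same vehicle ID, excluding the anchor itself
    same_id = ids[:, None] == ids[None, :]
    np.fill_diagonal(same_id, False)

    # hardest (farthest) positive distance per anchor; anchors with
    # no positive in the batch yield -inf and are filtered out
    hardest = np.where(same_id, dist, -np.inf).max(axis=1)
    valid = np.isfinite(hardest)
    return hardest[valid].mean()
```

Minimizing this quantity during training drags the hardest same-ID (presumably cross-view) pairs together in feature space, which is how viewpoint-consistent features can emerge from ID supervision alone.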