Unsupervised person re-identification (ReID) is a challenging task in which no data annotation is available to guide discriminative learning. Existing methods attempt to solve this problem by clustering extracted embeddings to generate pseudo labels. However, most methods ignore the intra-class gap caused by camera style variance, and the methods that do address the negative impact of camera style on the feature distribution tend to be complex and indirect. To solve this problem, we propose a camera-aware style separation and contrastive learning method (CA-UReID), which directly separates camera styles in the feature space with a designed camera-aware attention module. It explicitly divides the learnable feature into camera-specific and camera-agnostic parts, reducing the influence of different cameras. Moreover, to further narrow the gap across cameras, we design a camera-aware contrastive center loss that learns a more discriminative embedding for each identity. Extensive experiments demonstrate the superiority of our method over state-of-the-art methods on the unsupervised person ReID task.
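To make the camera-aware contrastive center idea concrete, the following is a minimal NumPy sketch, not the paper's exact formulation: embeddings are averaged into one center per (identity, camera) pair, and an InfoNCE-style loss then treats same-identity centers from other cameras as positives and all other centers as negatives, pulling an identity's cross-camera centers together. The function names, the temperature `tau`, and the cosine-similarity choice are illustrative assumptions.

```python
import numpy as np

def camera_aware_centers(feats, pids, camids):
    """Average embeddings into one center per (identity, camera) pair.

    feats: (N, D) array of embeddings; pids/camids: per-sample labels.
    Returns a dict mapping (pid, camid) -> (D,) center vector.
    """
    centers = {}
    for key in set(zip(pids, camids)):
        mask = np.array([(p, c) == key for p, c in zip(pids, camids)])
        centers[key] = feats[mask].mean(axis=0)
    return centers

def contrastive_center_loss(centers, tau=0.1):
    """InfoNCE-style loss over camera-aware centers (illustrative sketch).

    For each center, every same-identity center from a different camera
    is a positive; all remaining centers act as negatives. Since keys are
    unique (pid, camid) pairs, any other key with the same pid is
    necessarily from a different camera.
    """
    keys = list(centers.keys())
    mat = np.stack([centers[k] for k in keys])
    mat = mat / np.linalg.norm(mat, axis=1, keepdims=True)  # cosine sim
    sim = mat @ mat.T / tau
    loss, n_pairs = 0.0, 0
    for i, (pid_i, _) in enumerate(keys):
        for j, (pid_j, _) in enumerate(keys):
            if i == j or pid_i != pid_j:
                continue
            # -log softmax over all centers except the anchor itself
            denom = np.exp(np.delete(sim[i], i)).sum()
            loss += -np.log(np.exp(sim[i, j]) / denom)
            n_pairs += 1
    return loss / max(n_pairs, 1)
```

In a full training loop this term would be added to the clustering-based pseudo-label objective; here each sample forms its own center purely to keep the example small.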