Multi-grained features extracted from convolutional neural networks (CNNs) have demonstrated strong discrimination ability in supervised person re-identification (Re-ID) tasks. Inspired by this, our work investigates how to extract multi-grained features from a pure transformer network to address the unsupervised Re-ID problem, which is label-free but much more challenging. To this end, we build a dual-branch network architecture based upon a modified Vision Transformer (ViT). The local tokens output by each branch are reshaped and then uniformly partitioned into multiple stripes to generate part-level features, while the global tokens of the two branches are averaged to produce a global feature. Further, based upon offline-online associated camera-aware proxies (O2CAP), a top-performing unsupervised Re-ID method, we define offline and online contrastive learning losses with respect to both global and part-level features to conduct unsupervised learning. Extensive experiments on three person Re-ID datasets show that the proposed method outperforms state-of-the-art unsupervised methods by a considerable margin, greatly mitigating the gap to supervised counterparts. Code will be available soon at https://github.com/RikoLi/WACV23-workshop-TMGF.
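The multi-grained feature extraction described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the use of NumPy in place of a deep learning framework, and the assumption that patch tokens come from a regular `grid_h × grid_w` layout are all ours; horizontal striping follows the common part-based Re-ID convention.

```python
import numpy as np

def multi_grained_features(cls_a, cls_b, patch_tokens, grid_h, grid_w, num_stripes):
    """Hypothetical sketch of the dual-branch feature scheme (names assumed).

    cls_a, cls_b : (B, D) global (class) tokens from the two branches.
    patch_tokens : (B, grid_h * grid_w, D) local tokens from one branch.
    Returns one averaged global feature and num_stripes part-level features.
    """
    B, N, D = patch_tokens.shape
    assert N == grid_h * grid_w, "token count must match the patch grid"
    # Global feature: average the two branches' global tokens.
    global_feat = (cls_a + cls_b) / 2.0
    # Reshape the local token sequence back onto the 2-D patch grid.
    grid = patch_tokens.reshape(B, grid_h, grid_w, D)
    # Uniformly partition the grid into horizontal stripes, mean-pool each
    # stripe into one part-level feature of shape (B, D).
    stripes = np.array_split(grid, num_stripes, axis=1)
    part_feats = [s.mean(axis=(1, 2)) for s in stripes]
    return global_feat, part_feats
```

In practice each part-level feature would feed its own contrastive loss head alongside the global feature.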
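The offline and online contrastive losses are applied per feature (global and each part-level stripe) against learned proxies. As a rough illustration only, the following is a generic proxy-based InfoNCE loss in the spirit of the camera-aware proxy objectives used by O2CAP; the function signature, the temperature value, and the use of NumPy are our assumptions, not the paper's exact formulation.

```python
import numpy as np

def proxy_contrastive_loss(features, proxies, positive_idx, temperature=0.07):
    """Hypothetical proxy-based InfoNCE sketch (names and default assumed).

    features     : (B, D) L2-normalized query features (global or part-level).
    proxies      : (P, D) L2-normalized proxy vectors (e.g. camera-aware proxies).
    positive_idx : (B,) index of each query's positive proxy.
    """
    # Scaled cosine similarities between queries and all proxies.
    logits = features @ proxies.T / temperature            # (B, P)
    logits -= logits.max(axis=1, keepdims=True)            # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Negative log-likelihood of each query's positive proxy.
    return -log_prob[np.arange(len(features)), positive_idx].mean()
```

The same loss form can be instantiated once for the global feature and once per stripe, with offline and online variants differing in how the positive proxies are associated.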