Pedestrian attribute recognition (PAR) has received increasing attention because of its wide applications in video surveillance and pedestrian analysis. Extracting robust feature representations is one of the key challenges in this task. Existing methods mainly use convolutional neural networks (CNNs) as the backbone to extract features. However, these methods tend to focus on small discriminative regions while ignoring the global perspective. To overcome these limitations, we propose a pure transformer-based multi-task PAR network named PARFormer, which comprises four modules. In the feature extraction module, we build a transformer-based strong baseline for feature extraction, which achieves competitive results on several PAR benchmarks compared with existing CNN-based baseline methods. In the feature processing module, we propose an effective data augmentation strategy named the batch random mask (BRM) block to reinforce attentive feature learning over random patches. Furthermore, we propose a multi-attribute center loss (MACL) to enhance inter-attribute discriminability in the feature representations. In the viewpoint perception module, we explore the impact of viewpoints on pedestrian attributes and propose a multi-view contrastive loss (MVCL) that enables the network to exploit viewpoint information. In the attribute recognition module, we alleviate the negative-positive imbalance problem when generating the attribute predictions. These modules interact to jointly learn a highly discriminative feature space and supervise the generation of the final features. Extensive experimental results show that the proposed PARFormer network performs favorably against state-of-the-art methods on several public datasets, including PETA, RAP, and PA100K. Code will be released at https://github.com/xwf199/PARFormer.
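The abstract only names the BRM block; a minimal sketch of one plausible reading (zeroing a shared random subset of patch tokens across the batch) could look like the following. All names, the masking granularity, and the mask ratio are assumptions, not the paper's specification.

```python
import numpy as np

def batch_random_mask(patches, mask_ratio=0.3, rng=None):
    """Zero out a random subset of patch tokens, shared across the batch.

    patches: (B, N, D) array of transformer patch embeddings.
    mask_ratio: fraction of the N patches to mask (assumed hyperparameter).
    """
    rng = np.random.default_rng(rng)
    B, N, D = patches.shape
    n_mask = int(N * mask_ratio)
    # Same patch indices are masked for every sample in the batch ("batch" random mask).
    idx = rng.choice(N, size=n_mask, replace=False)
    out = patches.copy()
    out[:, idx, :] = 0.0
    return out
```

Masking forces the network to recover attribute evidence from the remaining patches, which is one way random-patch augmentation can encourage more global feature learning.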
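The MACL described above is a center-loss-style objective applied per attribute. A minimal sketch under the assumption that each attribute maintains a learnable center and positive samples are pulled toward it (the exact formulation in the paper may differ):

```python
import numpy as np

def multi_attribute_center_loss(features, labels, centers):
    """Center-loss variant for multi-label attributes (illustrative sketch).

    features: (B, D) pooled pedestrian features.
    labels:   (B, K) binary attribute annotations.
    centers:  (K, D) one learnable center per attribute (assumed design).
    Returns the mean squared distance between each feature and the centers
    of the attributes it is labeled positive for.
    """
    diffs = features[:, None, :] - centers[None, :, :]  # (B, K, D)
    sq_dist = (diffs ** 2).sum(axis=-1)                 # (B, K)
    mask = labels.astype(float)
    # Average only over (sample, attribute) pairs with a positive label.
    return (sq_dist * mask).sum() / max(mask.sum(), 1.0)
```

In training this term would be added to the classification loss, so features sharing an attribute cluster around that attribute's center, tightening intra-attribute compactness and thereby inter-attribute separation.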