The task of 2D human pose estimation is challenging because the number of keypoints is typically large (~17), which necessitates robust neural network architectures and training pipelines that can capture the relevant features from the input image. These features are then aggregated to make accurate heatmap predictions, from which the final keypoints of human body parts can be inferred. Many approaches in the literature use a CNN-based backbone, sometimes combined with a transformer, after which the features are aggregated to make the final keypoint predictions [1]. In this paper, we consider the recently proposed Bottleneck Transformers [2], which effectively combine CNN and multi-head self-attention (MHSA) layers; we integrate this backbone with a Transformer encoder and apply the resulting model to 2D human pose estimation. We consider different backbone architectures and pre-train them using the DINO self-supervised learning method [3]; this pre-training is found to improve the overall prediction accuracy. We call our model BTranspose, and experiments show that it achieves an AP of 76.4 on the COCO validation set, which is competitive with other methods such as [1] while using fewer network parameters. Furthermore, we present the dependencies of the final predicted keypoints on both the MHSA block and the Transformer encoder layers, providing clues about the image sub-regions the network attends to at the mid and high levels.
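To make the MHSA layer referred to above concrete, the following is a minimal PyTorch sketch of global multi-head self-attention applied to a CNN feature map, in the spirit of a Bottleneck Transformer block. It is not the paper's implementation: the class name `MHSA2d` and the use of `torch.nn.MultiheadAttention` are illustrative assumptions, and the relative position encodings used in [2] are omitted.

```python
import torch
import torch.nn as nn

class MHSA2d(nn.Module):
    """Global multi-head self-attention over a (B, C, H, W) feature map.

    Illustrative sketch only: the relative position encodings of the
    Bottleneck Transformer [2] are omitted for brevity.
    """

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        # embed_dim must be divisible by num_heads (e.g. 512 / 4 = 128).
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        seq = x.flatten(2).transpose(1, 2)   # (B, H*W, C): one token per pixel
        out, _ = self.attn(seq, seq, seq)    # every pixel attends to every other
        return out.transpose(1, 2).reshape(b, c, h, w)


# Example: a 512-channel 16x12 feature map, as a mid-level CNN stage might emit.
feats = torch.randn(2, 512, 16, 12)
print(MHSA2d(512)(feats).shape)  # torch.Size([2, 512, 16, 12])
```

Because the attention is global over all spatial positions, such a layer is typically placed only in the low-resolution, late stages of the backbone, where the quadratic cost in H*W remains affordable.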