This paper presents a significant contribution to the field of repetitive action counting through the introduction of a new approach called Pose Saliency Representation. The proposed method efficiently represents each action using only two salient poses instead of redundant frames, which significantly reduces computational cost while improving performance. Building on this representation, we introduce a pose-level method, PoseRAC, which achieves state-of-the-art performance on new versions of two datasets by using Pose Saliency Annotation to annotate salient poses for training. Our lightweight model is highly efficient, requiring only 20 minutes of training on a GPU, and runs inference nearly 10x faster than previous methods. In addition, our approach substantially improves over the previous state-of-the-art, TransRAC, achieving an OBO metric of 0.56 compared to TransRAC's 0.29. The code and new dataset are available at https://github.com/MiracleDance/PoseRAC for further research and experimentation, making our proposed approach highly accessible to the research community.
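To make the two-salient-pose idea concrete, the following is a minimal illustrative sketch (not the authors' PoseRAC implementation): assuming each frame has already been classified into one of two hypothetical salient poses, "A" (start pose) and "B" (mid pose), a repetition can be counted for each completed A -> B -> A cycle.

```python
def count_repetitions(pose_sequence):
    """Count repetitions from a sequence of salient-pose labels.

    pose_sequence: iterable of labels "A" (start pose) or "B" (mid pose).
    A repetition is counted each time the sequence returns to "A" after
    having visited "B". Labels here are hypothetical placeholders for
    whatever salient poses a pose classifier would emit per frame.
    """
    count = 0
    seen_b = False
    prev = None
    for label in pose_sequence:
        if label == prev:
            continue  # ignore consecutive duplicates (a held pose)
        if label == "B":
            seen_b = True
        elif label == "A" and seen_b:
            count += 1       # completed one A -> B -> A cycle
            seen_b = False
        prev = label
    return count

# Example: two full exercise cycles encoded as per-frame salient-pose labels.
frames = ["A", "A", "B", "B", "A", "B", "A"]
print(count_repetitions(frames))  # -> 2
```

This illustrates why the representation is cheap: counting reduces to tracking transitions between two labels, rather than processing every redundant frame.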