Human action recognition, as an important application of computer vision, has been studied for decades. Among various approaches, skeleton-based methods have recently attracted increasing attention due to their robustness and superior performance. However, existing skeleton-based methods ignore the potential action relationships between different persons, even though a person's action is highly likely to be influenced by other persons, especially in complex events. In this paper, we propose a novel group-skeleton-based human action recognition method for complex events. The method first utilizes multi-scale spatial-temporal graph convolutional networks (MS-G3Ds) to extract skeleton features from multiple persons. In addition to the traditional key point coordinates, we also feed the key point speed values into the networks for better performance. We then use multilayer perceptrons (MLPs) to embed the distance values between the reference person and the other persons into the extracted features. Finally, all the features are fed into another MS-G3D for feature fusion and classification. To avoid class imbalance problems, the networks are trained with a focal loss. The proposed algorithm also serves as our solution to the Large-scale Human-centric Video Analysis in Complex Events Challenge. Results on the HiEve dataset show that our method achieves superior performance compared with other state-of-the-art methods.
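The abstract names three concrete computations: key point speeds derived from coordinates, an MLP embedding of reference-to-other distances, and focal-loss training. Below is a minimal PyTorch-style sketch of these ingredients under stated assumptions; the names (`compute_speeds`, `DistanceEmbed`, `focal_loss`), tensor layout (persons, channels, frames, key points), and the choice to average distances over time are illustrative assumptions, not the authors' released implementation.

```python
# Sketch of three ingredients from the abstract (assumed names and shapes,
# not the authors' code): key point speeds, distance embedding, focal loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

def compute_speeds(joints: torch.Tensor) -> torch.Tensor:
    """joints: (N, C, T, V) = persons, coordinates, frames, key points.
    Speed values as temporal finite differences of the coordinates."""
    speeds = joints[:, :, 1:] - joints[:, :, :-1]   # (N, C, T-1, V)
    # Zero-pad the first frame so speeds align with the coordinate sequence.
    return F.pad(speeds, (0, 0, 1, 0))              # (N, C, T, V)

class DistanceEmbed(nn.Module):
    """MLP that embeds distances between the reference person and another
    person, to be fused into the extracted skeleton features."""
    def __init__(self, num_joints: int, dim: int):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(num_joints, dim),
                                 nn.ReLU(),
                                 nn.Linear(dim, dim))

    def forward(self, ref: torch.Tensor, other: torch.Tensor) -> torch.Tensor:
        # ref, other: (C, T, V). Per-joint Euclidean distance, averaged over
        # time here for simplicity (an assumption, not the paper's design).
        dist = (ref - other).norm(dim=0).mean(dim=0)  # (V,)
        return self.mlp(dist)                         # (dim,)

def focal_loss(logits, target, gamma: float = 2.0, alpha: float = 0.25):
    """Multi-class focal loss (Lin et al.): down-weights easy examples so
    training is not dominated by frequent classes."""
    ce = F.cross_entropy(logits, target, reduction="none")
    pt = torch.exp(-ce)                               # prob. of the true class
    return (alpha * (1.0 - pt) ** gamma * ce).mean()
```

Concatenating `compute_speeds(joints)` with `joints` along the channel axis would give the coordinate-plus-speed input the abstract describes; how the distance embeddings are fused before the final MS-G3D is not specified at this level of detail.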