Advancements in deep neural networks have contributed to near-perfect results on many computer vision problems, such as object recognition, face recognition, and pose estimation. However, human action recognition is still far from human-level performance. Owing to the articulated nature of the human body, it is challenging to detect an action from multiple viewpoints, particularly from an aerial viewpoint. This is further compounded by a scarcity of datasets that cover multiple viewpoints of actions. To fill this gap and enable research in wider application areas, we present a multi-viewpoint outdoor action recognition dataset collected from YouTube and our own drone. The dataset consists of 20 dynamic human action classes, 2,324 video clips, and 503,086 frames. All videos are cropped and resized to 720x720 without distorting the original aspect ratios of the human subjects. The dataset should be useful to many research areas, including action recognition, surveillance, and situational awareness. We evaluate the dataset with a two-stream CNN architecture coupled with a recently proposed temporal pooling scheme, kernelized rank pooling, which produces nonlinear feature subspace representations. The overall baseline action recognition accuracy is 74.0%.