Most state-of-the-art methods for action recognition rely solely on 2D spatial features encoding appearance, motion, or pose. However, 2D data lack depth information, which is crucial for recognizing fine-grained actions. In this paper, we propose a depth-aware volumetric descriptor that encodes pose and motion information in a unified representation for action classification in the wild. Our framework is robust to many of the challenges inherent to action recognition, e.g., variations in viewpoint, scene, clothing, and body shape. The key component of our method is the Depth-Aware Pose Motion representation (DA-PoTion), a new video descriptor that encodes the 3D movement of semantic keypoints of the human body. Given a video, we produce human-joint heatmaps for each frame using a state-of-the-art 3D human pose regressor and assign each of them a unique color code according to its relative time in the clip. We then aggregate these 3D time-encoded heatmaps over all human joints to obtain a fixed-size descriptor (DA-PoTion), which is suitable for classifying actions using a shallow 3D convolutional neural network (CNN). The DA-PoTion alone sets a new state of the art on the Penn Action Dataset. Moreover, we exploit the intrinsic complementarity of our pose-motion descriptor with appearance-based approaches by combining it with the Inflated 3D ConvNet (I3D), setting a new state of the art on the JHMDB Dataset.
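To make the aggregation step concrete, here is a minimal NumPy sketch of the time-colorized heatmap aggregation described above. The function name, the piecewise-linear colorization over `num_channels` channels, and the per-joint max normalization are illustrative assumptions in the spirit of PoTion-style encodings, not the authors' exact implementation.

```python
import numpy as np

def da_potion(heatmaps, num_channels=2):
    """Sketch of the DA-PoTion-style aggregation (hypothetical helper).

    heatmaps: array of shape (T, J, D, H, W) -- T frames, J joints,
              volumetric (D x H x W) heatmaps from a 3D pose regressor.
    Returns a fixed-size descriptor of shape (J, num_channels, D, H, W),
    independent of the clip length T.
    """
    assert num_channels >= 2
    T, J, D, H, W = heatmaps.shape
    out = np.zeros((J, num_channels, D, H, W), dtype=np.float32)
    for t in range(T):
        s = t / max(T - 1, 1)  # relative time in [0, 1]
        for c in range(num_channels):
            # piecewise-linear "color" weight peaking at this channel's
            # time center; for 2 channels this reduces to (1 - s, s)
            center = c / (num_channels - 1)
            w = max(0.0, 1.0 - abs(s - center) * (num_channels - 1))
            out[:, c] += w * heatmaps[t]
    # normalize each joint/channel volume to [0, 1] (one plausible choice)
    peak = out.max(axis=(2, 3, 4), keepdims=True)
    return out / np.maximum(peak, 1e-8)
```

The resulting (J x num_channels)-channel volume has a fixed size regardless of video length, which is what allows a shallow 3D CNN to consume it directly for classification.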