There has been significant progress in recognizing traditional human activities from videos, focusing on highly distinctive actions that involve discriminative body movements, body-object and/or human-human interactions. A driver's activities are different, since they are executed by the same subject with similar body-part movements, resulting in only subtle changes. To address this, we propose a novel framework that exploits spatiotemporal attention to model these subtle changes. Our model, named Coarse Temporal Attention Network (CTA-Net), introduces coarse temporal branches into a trainable glimpse network. The goal is to allow the glimpse to capture high-level temporal relationships, such as 'during', 'before' and 'after', by focusing on a specific part of a video. These branches also respect the topology of the temporal dynamics in the video, ensuring that different branches learn meaningful spatial and temporal changes. The model then uses an innovative attention mechanism to generate high-level, action-specific contextual information for activity recognition by exploring the hidden states of an LSTM. The attention mechanism learns to decide the importance of each hidden state for the recognition task by weighing the hidden states when constructing the representation of the video. Our approach is evaluated on four publicly accessible datasets and outperforms the state-of-the-art by a considerable margin with only RGB video as input.
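The attention mechanism described above, which weighs LSTM hidden states to build a video-level representation, can be illustrated with a minimal sketch. This is not the authors' implementation; the scoring vector `w` and the single-vector dot-product scoring are hypothetical simplifications of a learnable attention module.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(hidden_states, w):
    """Sketch of attention over per-frame LSTM hidden states.

    hidden_states: (T, d) array, one hidden state per time step.
    w: (d,) hypothetical learnable scoring vector.
    Returns the attention-weighted video representation and the weights.
    """
    scores = hidden_states @ w           # (T,) importance score per time step
    alpha = softmax(scores)              # attention weights, sum to 1
    video_repr = alpha @ hidden_states   # (d,) weighted sum of hidden states
    return video_repr, alpha

rng = np.random.default_rng(0)
H = rng.normal(size=(8, 16))   # e.g. 8 time steps, 16-dim hidden states
w = rng.normal(size=16)
video_repr, alpha = attention_pool(H, w)
print(video_repr.shape, alpha.sum())
```

In a trained model, `alpha` would concentrate on the time steps most informative for the activity class, rather than averaging all frames uniformly.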