Most existing work on human activity analysis focuses on recognition or early recognition of activity labels from complete or partial observations. Similarly, almost all existing video captioning approaches focus on events already observed in the video. Predicting the labels and captions of future activities, where no frames of the predicted activities have been observed, is a challenging problem with important applications that require anticipatory response. In this work, we propose a system that can infer both the labels and the captions of a sequence of future activities. Our proposed network for label prediction of a future activity sequence has three branches: the first branch takes visual features from the objects present in the scene, the second branch takes observed sequential activity features, and the third branch captures the features of the last observed activity. The predicted labels and the observed scene context are then mapped to meaningful captions using a sequence-to-sequence learning-based method. Experiments on four challenging activity analysis datasets and a video description dataset demonstrate that our label prediction approach achieves performance comparable to the state of the art, while our captioning framework outperforms the state of the art.
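The three-branch fusion described above can be sketched minimally as follows. This is a hypothetical stand-in, not the authors' implementation: each branch is reduced to a single linear projection with ReLU, the branch outputs are fused by concatenation, and a linear-softmax head produces a distribution over future activity labels. All dimensions, weights, and inputs are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def branch(x, w):
    """One branch: a linear projection with ReLU (hypothetical stand-in
    for the paper's actual branch architecture)."""
    return np.maximum(w @ x, 0.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical feature dimensions and label count.
d_obj, d_seq, d_last, d_hid, n_labels = 16, 32, 32, 8, 5

# Random stand-in inputs: object visual features, observed sequential
# activity features (e.g. a recurrent summary), and features of the
# last observed activity -- the three inputs named in the abstract.
obj_feat  = rng.standard_normal(d_obj)
seq_feat  = rng.standard_normal(d_seq)
last_feat = rng.standard_normal(d_last)

# One projection per branch, then late fusion by concatenation.
w_obj  = rng.standard_normal((d_hid, d_obj))
w_seq  = rng.standard_normal((d_hid, d_seq))
w_last = rng.standard_normal((d_hid, d_last))
w_out  = rng.standard_normal((n_labels, 3 * d_hid))

fused = np.concatenate([branch(obj_feat, w_obj),
                        branch(seq_feat, w_seq),
                        branch(last_feat, w_last)])
probs = softmax(w_out @ fused)  # distribution over future activity labels
```

In the full system, `probs` would be produced per future step and, together with the observed scene context, fed to the sequence-to-sequence captioning module.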