Recent works in video prediction have mainly focused on passive forecasting and low-level action-conditional prediction, both of which sidestep learning the interactions between agents and objects. We introduce the task of semantic action-conditional video prediction, which uses semantic action labels to describe these interactions and can be regarded as the inverse of action recognition. The key challenge of this new task lies in how to effectively inform the model of the semantic action. Inspired by the idea of Mixture of Experts, we embody each abstract action label as a structured combination of visual concept learners and propose a novel video prediction model, the Modular Action Concept Network (MAC). Our method is evaluated on two newly designed synthetic datasets, CLEVR-Building-Blocks and Sapien-Kitchen, and one real-world dataset, Tower-Creation. Extensive experiments demonstrate that MAC correctly conditions on the given instructions and generates the corresponding future frames without requiring bounding boxes. We further show that the trained model generalizes out of distribution, adapts quickly to new object categories, and exploits its learned features for object detection, demonstrating progress toward higher-level cognitive abilities. More visualizations can be found at http://www.pair.toronto.edu/mac/.
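To make the Mixture-of-Experts-inspired conditioning concrete, the following is a minimal PyTorch sketch of the general idea: a semantic action label selects a learned weighting over a shared pool of visual concept learners, and the weighted combination conditions the prediction of the next frame's features. All module names, dimensions, and the gating scheme here are illustrative assumptions for exposition, not the paper's actual MAC architecture.

```python
import torch
import torch.nn as nn


class ConceptLearner(nn.Module):
    """One visual concept 'expert': maps a frame feature to a concept embedding."""
    def __init__(self, feat_dim: int, concept_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, concept_dim), nn.ReLU())

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        return self.net(feat)


class ActionConditionedPredictor(nn.Module):
    """Hypothetical predictor: an action label gates a pool of concept learners."""
    def __init__(self, num_actions: int, num_concepts: int,
                 feat_dim: int = 256, concept_dim: int = 64):
        super().__init__()
        self.experts = nn.ModuleList(
            ConceptLearner(feat_dim, concept_dim) for _ in range(num_concepts)
        )
        # Each semantic action label owns a learned set of mixture weights over
        # the shared concept learners (a "structured combination" of experts).
        self.gate = nn.Embedding(num_actions, num_concepts)
        self.decoder = nn.Linear(feat_dim + concept_dim, feat_dim)

    def forward(self, frame_feat: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        # Run every concept learner: (batch, num_concepts, concept_dim)
        concepts = torch.stack([e(frame_feat) for e in self.experts], dim=1)
        weights = torch.softmax(self.gate(action), dim=-1)      # (batch, num_concepts)
        mixed = (weights.unsqueeze(-1) * concepts).sum(dim=1)   # (batch, concept_dim)
        # Predict the next frame's feature from the current feature + action concept.
        return self.decoder(torch.cat([frame_feat, mixed], dim=-1))


model = ActionConditionedPredictor(num_actions=10, num_concepts=6)
next_feat = model(torch.randn(4, 256), torch.tensor([0, 3, 3, 9]))
print(next_feat.shape)  # torch.Size([4, 256])
```

Because the concept learners are shared across actions and only the gating is action-specific, a design of this shape naturally supports the compositional generalization and quick adaptation to new categories that the abstract reports.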