We study object interaction anticipation in egocentric videos. This task requires an understanding of the spatiotemporal context formed by past actions on objects, which we coin the action context. We propose TransFusion, a multimodal transformer-based architecture that exploits the representational power of language by summarising the action context. TransFusion leverages pre-trained image captioning and vision-language models to extract the action context from past video frames. This action context, together with the next video frame, is processed by a multimodal fusion module to forecast the next object interaction. Our model enables more efficient end-to-end learning, while the large pre-trained language models contribute common-sense knowledge and generalisation capability. Experiments on Ego4D and EPIC-KITCHENS-100 show the effectiveness of our multimodal fusion model and highlight the benefits of language-based context summaries in a task where vision alone might seem sufficient. Our method outperforms state-of-the-art approaches by 40.4% in relative terms in overall mAP on the Ego4D test set, and we further validate TransFusion on EPIC-KITCHENS-100. Video and code are available at: https://eth-ait.github.io/transfusion-proj/.
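To make the described pipeline concrete, the following is a minimal, hypothetical sketch of a multimodal fusion step: embeddings of a language-based action-context summary and features of the next video frame are projected into a shared space and fused by a transformer encoder before predicting the anticipated interaction. All names, dimensions, and the verb/noun output heads are assumptions for illustration; the abstract does not specify the authors' implementation.

```python
# Hypothetical sketch (not the authors' code) of fusing a language-based
# action-context summary with next-frame features, as described in the abstract.
import torch
import torch.nn as nn


class ContextFusion(nn.Module):
    def __init__(self, text_dim=768, img_dim=1024, d_model=512,
                 num_layers=4, num_verbs=100, num_nouns=300):
        # num_verbs / num_nouns are placeholder label-space sizes.
        super().__init__()
        # Project action-context token embeddings and frame features
        # into a shared embedding space.
        self.text_proj = nn.Linear(text_dim, d_model)
        self.img_proj = nn.Linear(img_dim, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=8, batch_first=True)
        self.fusion = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        # Separate verb/noun heads are a common parameterisation for
        # interaction anticipation; the exact heads here are an assumption.
        self.verb_head = nn.Linear(d_model, num_verbs)
        self.noun_head = nn.Linear(d_model, num_nouns)

    def forward(self, context_tokens, frame_tokens):
        # context_tokens: (B, T_text, text_dim) embeddings of the caption-based
        #                 action-context summary extracted from past frames.
        # frame_tokens:   (B, T_img, img_dim) features of the next video frame.
        tokens = torch.cat(
            [self.text_proj(context_tokens), self.img_proj(frame_tokens)], dim=1)
        fused = self.fusion(tokens)   # joint attention over both modalities
        pooled = fused.mean(dim=1)    # simple pooling over fused tokens
        return self.verb_head(pooled), self.noun_head(pooled)


# Toy usage with random tensors standing in for captioner / backbone outputs.
model = ContextFusion()
verbs, nouns = model(torch.randn(2, 16, 768), torch.randn(2, 49, 1024))
print(verbs.shape, nouns.shape)  # torch.Size([2, 100]) torch.Size([2, 300])
```

In this sketch the text and image token streams are simply concatenated before a shared transformer encoder; other fusion designs (e.g. cross-attention between modalities) would fit the same interface.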