This paper presents a novel method, termed Bridge to Answer, that infers correct answers to questions about a given video by exploiting rich graph interactions among heterogeneous cross-modal graphs. To realize this, we learn question-conditioned visual graphs that leverage the relation between video and question, so that each visual node, through question-to-visual interactions, encompasses both visual and linguistic cues. In addition, we propose bridged visual-to-visual interactions that integrate two complementary sources of visual information, appearance and motion, by placing the question graph as an intermediate bridge. This bridged architecture enables reliable message passing through the compositional semantics of the question to generate an appropriate answer. As a result, our method learns question-conditioned visual representations of appearance and motion that exhibit strong capability for video question answering. Extensive experiments show that the proposed method achieves superior performance to state-of-the-art methods on several benchmarks.
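The two interaction types in the abstract (question-to-visual conditioning, then appearance-to-question-to-motion bridging) can be illustrated with a minimal NumPy sketch. This is only a schematic of attention-based cross-graph message passing under assumed dot-product affinities; the function and variable names are illustrative, not the paper's exact formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_graph_message_passing(src_nodes, dst_nodes):
    """One round of attention-style message passing between two graphs:
    each destination node aggregates source-node features weighted by
    dot-product affinity, with a residual connection (an assumption;
    the paper's aggregation may differ)."""
    affinity = dst_nodes @ src_nodes.T              # (n_dst, n_src)
    weights = softmax(affinity, axis=-1)            # rows sum to 1
    return dst_nodes + weights @ src_nodes          # residual update

rng = np.random.default_rng(0)
d = 16
question   = rng.standard_normal((5, d))  # word-level question graph nodes
appearance = rng.standard_normal((8, d))  # frame-level appearance nodes
motion     = rng.standard_normal((6, d))  # clip-level motion nodes

# Question-to-visual interaction: condition each visual graph on the question.
appearance_q = cross_graph_message_passing(question, appearance)
motion_q     = cross_graph_message_passing(question, motion)

# Bridged visual-to-visual interaction: appearance features reach the motion
# graph only via the question graph, which acts as the intermediate bridge.
bridge    = cross_graph_message_passing(appearance_q, question)
motion_out = cross_graph_message_passing(bridge, motion_q)
```

The key design point sketched here is that appearance and motion graphs never exchange messages directly: the question graph mediates, so only question-relevant visual evidence is propagated between the two modalities.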