Existing visual question answering methods tend to capture spurious correlations from the visual and linguistic modalities, and fail to discover the true causal mechanism that facilitates reasoning truthfully based on the dominant visual evidence and the correct question intention. Additionally, existing methods usually ignore the complex event-level understanding in multi-modal settings, which requires a strong cognitive capability of causal inference to jointly model cross-modal event temporality, causality, and dynamics. In this work, we focus on event-level visual question answering from a new perspective, i.e., cross-modal causal relational reasoning, by introducing causal intervention methods to mitigate spurious correlations and discover the true causal structures for the integration of the visual and linguistic modalities. Specifically, we propose a novel event-level visual question answering framework named Cross-Modal Causal RelatIonal Reasoning (CMCIR) to achieve robust causality-aware visual-linguistic question answering. To uncover the causal structures of the visual and linguistic modalities, we propose a novel Causality-aware Visual-Linguistic Reasoning (CVLR) module that collaboratively disentangles visual and linguistic spurious correlations via elaborately designed front-door and back-door causal intervention modules. To discover the fine-grained interactions between linguistic semantics and spatial-temporal representations, we build a novel Spatial-Temporal Transformer (STT) that models the multi-modal co-occurrence interactions between visual and linguistic content. Extensive experiments on the large-scale event-level urban dataset SUTD-TrafficQA and three benchmark real-world datasets (TGIF-QA, MSVD-QA, and MSRVTT-QA) demonstrate the effectiveness of our CMCIR in discovering visual-linguistic causal structures.