Visual attention in Visual Question Answering (VQA) aims to locate the image regions that are relevant to answer prediction. However, recent studies have pointed out that the image regions highlighted by visual attention are often irrelevant to the given question and answer, confusing the model and hindering correct visual reasoning. To tackle this problem, existing methods mostly resort to aligning the visual attention weights with human attention. Nevertheless, gathering such human data is laborious and expensive, making it burdensome to adapt well-developed models across datasets. To address this issue, in this paper, we devise a novel visual attention regularization approach, namely AttReg, for better visual grounding in VQA. Specifically, AttReg first identifies the image regions which are essential for question answering yet unexpectedly ignored (i.e., assigned low attention weights) by the backbone model. A mask-guided learning scheme is then leveraged to regularize the visual attention to focus more on these ignored key regions. The proposed method is flexible and model-agnostic: it can be integrated into most visual attention-based VQA models and requires no human attention supervision. Extensive experiments over three benchmark datasets, i.e., VQA-CP v2, VQA-CP v1, and VQA v2, have been conducted to evaluate the effectiveness of AttReg. As a by-product, when incorporating AttReg into the strong baseline LMH, our approach achieves a new state-of-the-art accuracy of 59.92% with an absolute performance gain of 6.93% on the VQA-CP v2 benchmark dataset. In addition to the effectiveness validation, we recognize that the faithfulness of visual attention in VQA has not been well explored in the literature. In light of this, we propose to empirically validate this property of visual attention and compare it with the prevalent gradient-based approaches.
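To make the two-step procedure above concrete, the following is a minimal PyTorch-style sketch of one plausible instantiation of an AttReg-like regularizer. The function name `attreg_loss`, the uniform-attention threshold for deciding which key regions count as "ignored", and the deficit penalty are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def attreg_loss(att_weights: torch.Tensor,
                key_region_mask: torch.Tensor,
                margin: float = 0.0) -> torch.Tensor:
    """Illustrative AttReg-style regularizer (assumed form, not the paper's exact loss).

    att_weights:      (B, R) visual attention over R region features, produced
                      by the backbone VQA model's attention module.
    key_region_mask:  (B, R) binary mask of regions deemed essential for
                      answering (e.g., regions tied to question/answer objects).
    """
    # Step 1: identify key regions the backbone "ignores", i.e. regions that
    # are essential yet receive attention below the uniform level 1/R.
    uniform = 1.0 / att_weights.size(1)
    ignored = key_region_mask * (att_weights < uniform).float()

    # Step 2: mask-guided regularization -- encourage attention mass to move
    # toward the ignored key regions by penalizing their attention deficit.
    deficit = F.relu(uniform + margin - att_weights)  # (B, R)
    return (ignored * deficit).sum(dim=1).mean()
```

In training, such a term would simply be added to the standard VQA answering loss with a balancing weight, leaving the backbone architecture untouched, which is consistent with the model-agnostic claim above.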