Visual attention in Visual Question Answering (VQA) aims to locate the image regions that are relevant to answer prediction. However, recent studies have shown that the image regions highlighted by visual attention are often irrelevant to the given question and answer, which hinders correct visual reasoning. To tackle this problem, existing methods mostly align the visual attention weights with human attention maps. Nevertheless, gathering such human data is laborious and expensive, making it burdensome to adapt well-developed models across datasets. To address this issue, in this paper we devise a novel visual attention regularization approach, named AttReg, for better visual grounding in VQA. Specifically, AttReg first identifies the image regions that are essential for question answering yet unexpectedly ignored (i.e., assigned low attention weights) by the backbone model. A mask-guided learning scheme is then applied to regularize the visual attention so that it focuses more on these ignored key regions. The proposed method is flexible and model-agnostic: it can be integrated into most attention-based VQA models and requires no human attention supervision. Extensive experiments on three benchmark datasets, i.e., VQA-CP v2, VQA-CP v1, and VQA v2, have been conducted to evaluate the effectiveness of AttReg. As a by-product, when incorporated into the strong baseline LMH, AttReg achieves a new state-of-the-art accuracy of 59.92% on the VQA-CP v2 benchmark, an absolute gain of 6.93%. Beyond this effectiveness validation, we note that the faithfulness of visual attention in VQA has not been well explored in the literature. In light of this, we empirically validate this property of visual attention and compare it with prevalent gradient-based approaches.
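To make the regularization idea concrete, below is a minimal sketch of an attention-regularization term in the spirit of AttReg. It assumes a PyTorch backbone that exposes per-region attention weights; the hinge-style penalty, the margin value, and the way the key-region mask is obtained are illustrative assumptions, not the authors' released implementation.

```python
# Sketch only: one plausible way to penalize key image regions that receive
# low attention, given a binary mask of "essential but ignored" regions.
import torch


def attention_regularization_loss(att_weights: torch.Tensor,
                                  key_region_mask: torch.Tensor,
                                  margin: float = 0.2) -> torch.Tensor:
    """Penalize key regions whose attention falls below a margin.

    att_weights:     (batch, num_regions) attention over image regions,
                     assumed to sum to 1 per example.
    key_region_mask: (batch, num_regions) binary mask, 1 for regions judged
                     essential for answering (e.g., matched to question/answer
                     nouns) yet assigned low attention by the backbone.
    margin:          minimum attention mass a key region should receive
                     (hypothetical value for illustration).
    """
    # Attention shortfall of each key region with respect to the margin.
    shortfall = torch.clamp(margin - att_weights, min=0.0) * key_region_mask
    # Average over key regions; clamp avoids division by zero when a sample
    # has no identified key regions.
    num_key = key_region_mask.sum(dim=1).clamp(min=1.0)
    return (shortfall.sum(dim=1) / num_key).mean()


# Usage (schematic): total_loss = vqa_loss + lambda_reg * attention_regularization_loss(att, mask)
```

In such a scheme, the regularizer is simply added to the backbone's original VQA loss, which is what makes the approach model-agnostic and free of human attention supervision.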



Visual Question Answering (VQA) is a learning task that involves both computer vision and natural language processing. The task is defined as follows: "A VQA system takes as input an image and a free-form, open-ended, natural-language question about the image and produces a natural-language answer as the output" [1]. In short, VQA is question answering grounded in a given image.
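As a rough illustration of this input/output contract, the sketch below uses a hypothetical VQAModel interface; the class name and its answer method are placeholders for illustration, not a real library API.

```python
# Minimal sketch of the VQA setting: an (image, question) pair maps to an answer.
from typing import Protocol


class VQAModel(Protocol):
    def answer(self, image_path: str, question: str) -> str:
        ...


def run_vqa(model: VQAModel, image_path: str, question: str) -> str:
    # e.g. run_vqa(model, "kitchen.jpg", "What color is the mug?") might return "red"
    return model.answer(image_path, question)
```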
