We propose the task of free-form and open-ended Visual Question Answering (VQA). Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and complex reasoning than a system producing generic image captions. Moreover, VQA is amenable to automatic evaluation, since many open-ended answers contain only a few words or a closed set of answers that can be provided in a multiple-choice format. We provide a dataset containing ~0.25M images, ~0.76M questions, and ~10M answers (www.visualqa.org), and discuss the information it provides. Numerous baselines and methods for VQA are provided and compared with human performance. Our VQA demo is available on CloudCV (http://cloudcv.org/vqa).