The study of algorithms to automatically answer visual questions is currently motivated by visual question answering (VQA) datasets constructed in artificial VQA settings. We propose VizWiz, the first goal-oriented VQA dataset arising from a natural VQA setting. VizWiz consists of over 31,000 visual questions originating from blind people who each took a picture using a mobile phone and recorded a spoken question about it, together with 10 crowdsourced answers per visual question. VizWiz differs from many existing VQA datasets in that (1) images are captured by blind photographers and so are often of poor quality, (2) questions are spoken and so are more conversational, and (3) the visual questions often cannot be answered. Evaluating modern algorithms for answering visual questions and for deciding whether a visual question is answerable reveals that VizWiz is a challenging dataset. We introduce this dataset to encourage a larger community to develop more generalized algorithms that can assist blind people.
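For concreteness, below is a minimal Python sketch of how a VizWiz-style record could be represented and scored. The field names (`image`, `question`, `answers`, `answerable`) are hypothetical illustrations, not the authors' release format, and the scoring function implements a simplified form of the standard VQA accuracy metric, min(#matching human answers / 3, 1), which is commonly applied to datasets with 10 crowdsourced answers per question.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class VisualQuestion:
    # Hypothetical record layout for one VizWiz-style visual question.
    image: str           # path to the photo taken by a blind photographer
    question: str        # transcription of the spoken question
    answers: List[str]   # 10 crowdsourced answers
    answerable: bool     # whether annotators judged the question answerable


def vqa_accuracy(predicted: str, answers: List[str]) -> float:
    """Simplified VQA accuracy: a prediction receives full credit when
    at least 3 of the 10 human answers agree with it."""
    matches = sum(a.strip().lower() == predicted.strip().lower() for a in answers)
    return min(matches / 3.0, 1.0)


# Usage example with made-up data:
vq = VisualQuestion(
    image="example.jpg",
    question="What color is this shirt?",
    answers=["blue"] * 6 + ["navy"] * 3 + ["unanswerable"],
    answerable=True,
)
print(vqa_accuracy("blue", vq.answers))  # 1.0 -- at least 3 annotators agree
```

The 3-annotator agreement threshold reflects the convention, inherited from the original VQA benchmark, that an answer given by at least three of ten independent annotators is treated as fully correct.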