Synthetic datasets have successfully been used to probe the reasoning abilities of visual question-answering models. CLEVR (Johnson et al., 2017), for example, tests a range of visual reasoning abilities. The questions in CLEVR focus on comparisons of shapes, colors, and sizes, numerical reasoning, and existence claims. This paper introduces QLEVR, a minimally biased, diagnostic visual question-answering dataset that goes beyond existential and numerical quantification and focuses on more complex quantifiers and their combinations, e.g., asking whether there are more than two red balls that are smaller than at least three blue balls in an image. We describe how the dataset was created and present a first evaluation of state-of-the-art visual question-answering models, showing that QLEVR presents a formidable challenge to current models. The code and dataset are available at https://github.com/zechenli03/QLEVR.
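To make the quantifier combinations concrete, the following is a minimal, hypothetical sketch (not the paper's generation or evaluation code) of how the example question, "are there more than two red balls that are smaller than at least three blue balls?", could be answered over a symbolic scene; the `Obj` attributes, the toy scene, and the numeric size comparison are illustrative assumptions.

```python
# Illustrative only: evaluating a nested quantifier question over an assumed
# symbolic scene representation (shape, color, numeric size per object).
from dataclasses import dataclass

@dataclass
class Obj:
    shape: str
    color: str
    size: float  # hypothetical numeric size attribute

scene = [
    Obj("ball", "red", 1.0), Obj("ball", "red", 1.5), Obj("ball", "red", 3.0),
    Obj("ball", "blue", 2.0), Obj("ball", "blue", 2.5), Obj("ball", "blue", 2.8),
]

red_balls = [o for o in scene if o.shape == "ball" and o.color == "red"]
blue_balls = [o for o in scene if o.shape == "ball" and o.color == "blue"]

# Inner quantifier: a red ball qualifies if it is smaller than at least three blue balls.
qualifying = [r for r in red_balls
              if sum(r.size < b.size for b in blue_balls) >= 3]

# Outer quantifier: "more than two" such red balls.
answer = len(qualifying) > 2
print(answer)  # False for this toy scene: only two red balls qualify
```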