We introduce GQA, a new dataset for real-world visual reasoning and compositional question answering, seeking to address key shortcomings of previous VQA datasets. We have developed a strong and robust question engine that leverages scene graph structures to create 22M diverse reasoning questions, all of which come with functional programs that represent their semantics. We use the programs to gain tight control over the answer distribution and present a new tunable smoothing technique to mitigate question biases. Accompanying the dataset is a suite of new metrics that evaluate essential qualities such as consistency, grounding and plausibility. We perform an extensive analysis of baselines as well as state-of-the-art models, providing fine-grained results for different question types and topologies. Whereas a blind LSTM obtains a mere 42.1% accuracy and strong VQA models achieve 54.1%, human performance tops out at 89.3%, offering ample opportunity for new research to explore. We strongly hope GQA will provide an enabling resource for the next generation of models with enhanced robustness, improved consistency, and deeper semantic understanding of both images and language.
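To make the pairing of questions with functional programs concrete, the following is a minimal sketch of how such a question/program pair might be represented. The operation names (select, relate, query) and the data layout are illustrative assumptions in the spirit of the description above, not the dataset's exact schema.

```python
from dataclasses import dataclass, field

@dataclass
class Operation:
    name: str          # e.g. "select", "relate", "query" (assumed op names)
    argument: str      # e.g. an object class, relation, or attribute
    dependencies: list = field(default_factory=list)  # indices of prior steps

@dataclass
class Question:
    text: str          # the natural-language question
    program: list      # ordered list of Operations encoding its semantics
    answer: str        # ground-truth answer derived from the scene graph

# Hypothetical example: a compositional question grounded in a scene graph.
q = Question(
    text="What color is the apple on the table?",
    program=[
        Operation("select", "table"),
        Operation("relate", "apple,on", dependencies=[0]),
        Operation("query", "color", dependencies=[1]),
    ],
    answer="red",
)
```

Because each question carries such a program, properties like its answer distribution, reasoning steps, and topology can be computed and controlled programmatically.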
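As a rough illustration of what "tunable smoothing" of an answer distribution can look like, the sketch below exponentiates empirical answer frequencies by a temperature-like parameter and renormalizes, interpolating between the observed distribution (alpha = 1) and uniform (alpha = 0). This is a standard smoothing device offered only as an assumption for intuition; the exact balancing procedure used for GQA may differ.

```python
def smooth_answer_distribution(counts: dict, alpha: float) -> dict:
    """Return smoothed probabilities for each answer.

    counts: answer -> raw frequency
    alpha:  1.0 keeps the empirical distribution; 0.0 yields uniform.
    """
    weights = {a: c ** alpha for a, c in counts.items()}
    total = sum(weights.values())
    return {a: w / total for a, w in weights.items()}

# A heavily skewed answer head gets flattened as alpha decreases.
counts = {"red": 800, "green": 150, "yellow": 50}
print(smooth_answer_distribution(counts, alpha=1.0))  # ~0.80 / 0.15 / 0.05
print(smooth_answer_distribution(counts, alpha=0.5))  # ~0.59 / 0.26 / 0.15
```

Lowering alpha reduces the payoff of always guessing the most frequent answer, which is the sense in which such smoothing mitigates question biases.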