Many vision and language tasks require commonsense reasoning beyond data-driven image and natural language processing. Here we adopt Visual Question Answering (VQA) as an example task, where a system is expected to answer a question in natural language about an image. Current state-of-the-art systems attempt to solve the task using deep neural architectures and achieve promising performance. However, the resulting systems are generally opaque, and they struggle to understand questions that require extra knowledge. In this paper, we present an explicit reasoning layer on top of a set of penultimate neural-network-based systems. The reasoning layer enables the system to reason about and answer questions where additional knowledge is required, and at the same time provides an interpretable interface to end users. Specifically, the reasoning layer adopts a Probabilistic Soft Logic (PSL) based engine to reason over a basket of inputs: visual relations, the semantic parse of the question, and background ontological knowledge from word2vec and ConceptNet. Experimental analysis of the answers and the key evidential predicates generated on the VQA dataset validates our approach.
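To make the role of the PSL-based reasoning layer concrete, the following is a minimal, hypothetical sketch of how weighted soft-logic rules could combine the three input sources named above: detected visual relations, the predicate the question's semantic parse asks about, and word2vec/ConceptNet similarity. All predicate names, rule weights, and inputs are illustrative assumptions; the actual engine in the paper performs PSL inference rather than the simple accumulation shown here.

```python
# A minimal, hypothetical sketch of PSL-style soft-logic support for VQA answers.
# Predicate names, rule weights, and inputs are illustrative assumptions, not the
# authors' actual engine.
from collections import defaultdict
from typing import Dict, Tuple


def soft_and(a: float, b: float) -> float:
    """Lukasiewicz t-norm: the soft conjunction used in Probabilistic Soft Logic."""
    return max(0.0, a + b - 1.0)


def support_for_answers(
    visual_relations: Dict[Tuple[str, str, str], float],  # (subj, pred, obj) -> detector confidence
    question_target: Dict[str, float],                     # predicate the question's semantic parse asks about
    ontology_sim: Dict[Tuple[str, str], float],            # word2vec / ConceptNet similarity between concepts
    w_vis: float = 2.0,                                    # weight of the purely visual rule
    w_ont: float = 1.0,                                    # weight of the knowledge-backed rule
) -> Dict[str, float]:
    """Accumulate soft support for each candidate answer from grounded weighted rules."""
    support: Dict[str, float] = defaultdict(float)
    for (subj, pred, obj), conf in visual_relations.items():
        # Rule 1: Relation(s, p, o) & AsksAbout(p) -> Answer(o)
        body = soft_and(conf, question_target.get(pred, 0.0))
        support[obj] += w_vis * body
        # Rule 2: Relation(s, p, o) & AsksAbout(p) & Similar(o, a) -> Answer(a)
        for (concept, candidate), sim in ontology_sim.items():
            if concept == obj:
                support[candidate] += w_ont * soft_and(body, sim)
    return dict(support)


if __name__ == "__main__":
    relations = {("man", "holding", "umbrella"): 0.9}   # from a visual relation detector
    target = {"holding": 1.0}                           # e.g. "What is the man holding?"
    similarity = {("umbrella", "parasol"): 0.8}         # background ontological knowledge
    print(support_for_answers(relations, target, similarity))
    # {'umbrella': 1.8, 'parasol': 0.7}
```

In the real PSL engine, answer atoms are inferred by minimizing weighted hinge-loss distances to rule satisfaction over all grounded rules; the sketch only conveys how the three input sources meet inside weighted rules, whose groundings also serve as the evidential predicates exposed to the user.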