Visual question answering (VQA) has gained significant traction in the machine learning community in recent years due to the challenges of understanding information from multiple modalities (i.e., images and language). In VQA, a series of questions is posed about a set of images, and the task is to arrive at the answers. To achieve this, we take a symbolic-reasoning approach grounded in the framework of formal logic. The image and the questions are converted into symbolic representations on which explicit reasoning is performed. We propose a formal logic framework in which (i) images are converted to logical background facts with the help of scene graphs, (ii) questions are translated to first-order predicate logic clauses using a transformer-based deep learning model, and (iii) satisfiability checks are performed, using the background knowledge and the grounding of the predicate clauses, to obtain the answer. Our proposed method is highly interpretable, and each step in the pipeline can be easily analyzed by a human. We validate our approach on the CLEVR and GQA datasets. We achieve a near-perfect accuracy of 99.6% on the CLEVR dataset, comparable to state-of-the-art models, showcasing that formal logic is a viable tool for tackling visual question answering. Our model is also data efficient, achieving 99.1% accuracy on the CLEVR dataset when trained on just 10% of the training data.
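The three-step pipeline above can be illustrated with a minimal sketch. This is not the paper's implementation: the facts, the hand-translated clause, and the exhaustive-grounding `answer` function are all simplified illustrations of how scene-graph facts, a first-order clause with variables, and a satisfiability check by grounding fit together.

```python
# Step (i): scene graph -> logical background facts, as ground atoms
# of the form (predicate, arg1, arg2). Contents are illustrative.
facts = {
    ("shape", "obj1", "cube"),
    ("color", "obj1", "red"),
    ("shape", "obj2", "sphere"),
    ("color", "obj2", "blue"),
    ("left_of", "obj1", "obj2"),
}

# Step (ii): the question "What color is the cube?" translated (here by
# hand, standing in for the transformer model) into a conjunctive clause
# with variables X (the object) and C (the answer color).
clause = [("shape", "X", "cube"), ("color", "X", "C")]

def answer(clause, facts, objects, values):
    """Step (iii): satisfiability check by exhaustive grounding.
    Try every substitution for X and C; return the first value of C
    for which all grounded atoms appear in the background facts."""
    for x in objects:
        for v in values:
            subst = {"X": x, "C": v}
            grounded = {tuple(subst.get(t, t) for t in atom) for atom in clause}
            if grounded <= facts:  # all grounded atoms hold
                return v
    return None

print(answer(clause, facts, ["obj1", "obj2"], ["red", "blue"]))  # -> red
```

A real system would use an order-of-magnitude larger vocabulary of predicates and a proper logic engine rather than brute-force enumeration, but the structure is the same: the clause is satisfiable against the background facts exactly when some grounding makes every atom true, and that grounding yields the answer.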