Bridging the semantic gap between image and question is an important step toward improving the accuracy of the Visual Question Answering (VQA) task. However, most existing VQA methods focus on attention mechanisms or visual relations for answer reasoning, while features at different semantic levels are not fully utilized. In this paper, we present a new reasoning framework to fill the gap between visual features and semantic clues in the VQA task. Our method first extracts features and predicates from the image and question. We then propose a new reasoning framework to effectively and jointly learn these features and predicates in a coarse-to-fine manner. Extensive experimental results on three large-scale VQA datasets show that our proposed approach achieves superior accuracy compared with other state-of-the-art methods. Furthermore, our reasoning framework also provides an explainable way to understand the decision of the deep neural network when predicting the answer.