We propose an architecture for VQA that uses recurrent layers to generate visual and textual attention. The memory characteristic of the proposed recurrent attention units offers a rich joint embedding of visual and textual features and enables the model to reason about relations between several parts of the image and the question. Our single model outperforms the first-place winner on the VQA 1.0 dataset and performs within a narrow margin of the current state-of-the-art ensemble model. We also experiment with replacing the attention mechanisms in other state-of-the-art models with our implementation and show increased accuracy. In both cases, our recurrent attention mechanism improves performance on tasks requiring sequential or relational reasoning on the VQA dataset.
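To make the idea of a recurrent attention unit concrete, the following is a minimal sketch, not the authors' exact architecture: a GRU cell whose hidden state (the "memory") is initialized from the question embedding and, at each step, drives soft attention over image region features; the attended glimpse then updates the memory. All module names, dimensions, and the number of steps are illustrative assumptions.

```python
# Minimal sketch of a recurrent attention unit (illustrative, not the
# paper's exact model): a GRU hidden state attends over image regions,
# conditioned on the question, and is updated with each attended glimpse.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RecurrentAttentionUnit(nn.Module):
    def __init__(self, region_dim=2048, question_dim=1024,
                 hidden_dim=512, steps=3):
        super().__init__()
        self.steps = steps
        self.proj_v = nn.Linear(region_dim, hidden_dim)    # project image regions
        self.proj_q = nn.Linear(question_dim, hidden_dim)  # project question
        self.score = nn.Linear(hidden_dim, 1)              # attention logits
        self.gru = nn.GRUCell(region_dim, hidden_dim)      # memory over glimpses

    def forward(self, regions, question):
        # regions: (batch, num_regions, region_dim); question: (batch, question_dim)
        h = self.proj_q(question)                 # init memory from the question
        v = self.proj_v(regions)                  # (batch, num_regions, hidden_dim)
        for _ in range(self.steps):
            # combine current memory with each region, score, and normalize
            logits = self.score(torch.tanh(v + h.unsqueeze(1))).squeeze(-1)
            alpha = F.softmax(logits, dim=1)      # (batch, num_regions)
            glimpse = (alpha.unsqueeze(-1) * regions).sum(1)  # attended feature
            h = self.gru(glimpse, h)              # update memory with the glimpse
        return h                                  # joint visual-textual embedding
```

Because the hidden state persists across attention steps, each glimpse can condition on what was attended to previously, which is what allows the unit to chain together multiple image-question relations rather than scoring each region independently.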