Visual Question Answering (VQA) models have so far struggled to count objects in natural images. We identify the soft attention used in these models as a fundamental cause of this problem. To circumvent it, we propose a neural network component that enables robust counting from object proposals. Experiments on a toy task show the effectiveness of this component, and we obtain state-of-the-art accuracy on the number category of the VQA v2 dataset without negatively affecting other categories, even outperforming ensemble models with our single model. On a difficult balanced pair metric, the component improves counting over a strong baseline by 6.6%.
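The core difficulty with soft attention can be illustrated with a toy sketch (assumed for illustration, not the paper's code): because attention weights are softmax-normalized to sum to one, the attention-weighted sum over two identical object proposals is indistinguishable from attending to a single such proposal, so the pooled feature carries no count information.

```python
import numpy as np

def attend(features, logits):
    """Weighted sum of proposal features with softmax-normalized weights."""
    w = np.exp(logits) / np.exp(logits).sum()
    return w @ features

f = np.array([1.0, 2.0])  # hypothetical feature vector of one "cat" proposal

# One cat vs. two identical cats, each attended with the same logit.
one_cat = attend(np.stack([f]), np.array([5.0]))
two_cats = attend(np.stack([f, f]), np.array([5.0, 5.0]))

print(np.allclose(one_cat, two_cats))  # True: the pooled features match
```

Since normalization averages the duplicated features away, any downstream classifier sees the same input for both scenes, which motivates counting directly from the object proposals instead.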