Spatial reasoning poses a particular challenge for intelligent agents and is at the same time a prerequisite for their successful interaction and communication in the physical world. One such reasoning task is to describe the position of a target object with respect to the intrinsic orientation of some reference object via relative directions. In this paper, we introduce GRiD-A-3D, a novel diagnostic visual question-answering (VQA) dataset based on abstract objects. Our dataset allows for a fine-grained analysis of end-to-end VQA models' capabilities to ground relative directions. At the same time, model training requires considerably fewer computational resources compared with existing datasets, yet yields a comparable or even higher performance. Along with the new dataset, we provide a thorough evaluation based on two widely known end-to-end VQA architectures trained on GRiD-A-3D. We demonstrate that within a few epochs, the subtasks required to reason over relative directions, such as recognizing and locating objects in a scene and estimating their intrinsic orientations, are learned in the order in which relative directions are intuitively processed.