Understanding spatial relations is essential for intelligent agents to act and communicate in the physical world. Relative directions are spatial relations that describe the position of target objects with regard to the intrinsic orientation of reference objects. Grounding relative directions is more difficult than grounding absolute directions: a model must not only detect objects in the image and identify spatial relations based on this information, but also recognize the orientation of objects and integrate this information into the reasoning process. We investigate the challenging problem of grounding relative directions with end-to-end neural networks. To this end, we provide GRiD-3D, a novel dataset that features relative directions and complements existing visual question answering (VQA) datasets, such as CLEVR, that involve only absolute directions. We also provide baselines for the dataset with two established end-to-end VQA models. Experimental evaluations show that answering questions on relative directions is feasible when questions in the dataset simulate the necessary subtasks for grounding relative directions. We discover that these subtasks are learned in an order that reflects the steps of an intuitive pipeline for processing relative directions.