Understanding spatial relations (e.g., "laptop on table") in visual input is important for both humans and robots. Existing datasets are insufficient as they lack large-scale, high-quality 3D ground truth information, which is critical for learning spatial relations. In this paper, we fill this gap by constructing Rel3D: the first large-scale, human-annotated dataset for grounding spatial relations in 3D. Rel3D enables quantifying the effectiveness of 3D information in predicting spatial relations on large-scale human data. Moreover, we propose minimally contrastive data collection -- a novel crowdsourcing method for reducing dataset bias. The 3D scenes in our dataset come in minimally contrastive pairs: two scenes in a pair are almost identical, but a spatial relation holds in one and fails in the other. We empirically validate that minimally contrastive examples can diagnose issues with current relation detection models as well as lead to sample-efficient training. Code and data are available at https://github.com/princeton-vl/Rel3D.
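To make the notion of a minimally contrastive pair concrete, the sketch below shows how such a pair could be represented as a data record: two near-identical scenes that differ only slightly in geometry, with the relation label flipping between them. The class and field names (Scene, ContrastivePair, subject_position, etc.) are illustrative assumptions, not the actual Rel3D data schema, which is documented in the repository linked above.

```python
from dataclasses import dataclass

# Hypothetical illustration of a minimally contrastive pair: two nearly
# identical 3D scenes where a spatial relation holds in exactly one of them.
# Field names are assumptions for exposition, not the Rel3D format.

@dataclass
class Scene:
    subject: str                        # e.g., "laptop"
    obj: str                            # e.g., "table"
    subject_position: tuple             # (x, y, z) of the subject in the scene

@dataclass
class ContrastivePair:
    relation: str                       # e.g., "on"
    positive: Scene                     # scene where the relation holds
    negative: Scene                     # near-identical scene where it does not

pair = ContrastivePair(
    relation="on",
    positive=Scene("laptop", "table", (0.0, 0.0, 0.75)),   # laptop resting on the tabletop
    negative=Scene("laptop", "table", (0.0, 0.0, 0.20)),   # laptop lowered below the tabletop
)

# The label flips although the geometry barely changes, which is what pushes
# a model to rely on 3D cues rather than dataset bias.
print(pair.relation, "holds in the positive scene and fails in the negative scene")
```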