Spatial commonsense, the knowledge about spatial positions and relationships between objects (such as the relative size of a lion and a girl, or the position of a boy relative to a bicycle when cycling), is an important part of commonsense knowledge. Although pretrained language models (PLMs) succeed in many NLP tasks, they have been shown to be ineffective at spatial commonsense reasoning. Starting from the observation that images are more likely than texts to exhibit spatial commonsense, we explore whether models with visual signals learn more spatial commonsense than text-based PLMs. We propose a spatial commonsense benchmark that focuses on the relative scales of objects and the positional relationships between people and objects under different actions. We probe PLMs and models with visual signals, including vision-language pretrained models and image synthesis models, on this benchmark, and find that image synthesis models are more capable of learning accurate and consistent spatial knowledge than the other models. The spatial knowledge from image synthesis models also helps in natural language understanding tasks that require spatial commonsense.