Current computational models that capture words' meanings mostly rely on textual corpora. While these approaches have been successful over the last decades, their lack of grounding in the real world remains an ongoing problem. In this paper, we focus on visual grounding of word embeddings and target two important questions. First, how can language benefit from vision in the process of visual grounding? And second, is there a link between visual grounding and abstract concepts? We investigate these questions by proposing a simple yet effective approach in which language benefits from vision with respect to the modeling of both concrete and abstract words. Our model aligns word embeddings with their corresponding visual representations without degrading the knowledge captured by textual distributional information. We apply our model to a behavioral experiment reported by G\"unther et al. (2020), which addresses the plausibility of having visual mental representations for abstract words. Our evaluation results show that: (1) It is possible to predict human behavior to a large degree using purely textual embeddings. (2) Our grounded embeddings model human behavior better than their textual counterparts. (3) Abstract concepts benefit from visual grounding implicitly through their connections to concrete concepts, rather than from having corresponding visual representations.
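To make the alignment idea concrete, the following is a minimal sketch, not the authors' implementation: a learned linear map takes frozen textual embeddings into a grounded space, an alignment loss pulls the mapped embeddings of concrete words toward their image features, and a reconstruction term keeps the original distributional information recoverable. The dimensions, loss choices, and all names (e.g., `GroundingModel`, `align_weight`) are illustrative assumptions.

```python
# Hypothetical sketch of visually grounding textual word embeddings.
# Assumptions (not from the paper): embedding sizes, a linear mapping, and a
# combined alignment + reconstruction objective.
import torch
import torch.nn as nn

TEXT_DIM, VISUAL_DIM = 300, 512   # assumed embedding sizes


class GroundingModel(nn.Module):
    def __init__(self, text_dim=TEXT_DIM, visual_dim=VISUAL_DIM):
        super().__init__()
        self.ground = nn.Linear(text_dim, visual_dim)   # text space -> grounded space
        self.decode = nn.Linear(visual_dim, text_dim)   # grounded space -> text space

    def forward(self, text_emb):
        grounded = self.ground(text_emb)
        reconstructed = self.decode(grounded)
        return grounded, reconstructed


def grounding_loss(grounded, visual_feat, reconstructed, text_emb, align_weight=1.0):
    # Pull grounded embeddings of concrete words toward their image features,
    # while keeping the original textual information recoverable.
    align = nn.functional.mse_loss(grounded, visual_feat)
    reconstruct = nn.functional.mse_loss(reconstructed, text_emb)
    return align_weight * align + reconstruct


# Toy usage with random stand-in data: 32 concrete words paired with image features.
model = GroundingModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
text_emb = torch.randn(32, TEXT_DIM)       # frozen textual embeddings
visual_feat = torch.randn(32, VISUAL_DIM)  # corresponding visual features

for _ in range(100):
    grounded, reconstructed = model(text_emb)
    loss = grounding_loss(grounded, visual_feat, reconstructed, text_emb)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Under this sketch, abstract words are never paired with images; they can still be passed through `model.ground` after training, inheriting grounding only indirectly via their textual relations to concrete words, which parallels finding (3) above.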