To what extent can experience from language contribute to our conceptual knowledge? Computational explorations of this question have shed light on the ability of powerful neural language models (LMs) -- informed solely through text input -- to encode and elicit information about concepts and properties. To extend this line of research, we present a framework that uses LMs to perform property induction -- a task in which humans generalize novel property knowledge (has sesamoid bones) from one or more concepts (robins) to others (sparrows, canaries). Patterns of property induction observed in humans have revealed considerable insight into the nature and organization of human conceptual knowledge. Inspired by this insight, we use our framework to explore the property inductions of LMs, and find that they show an inductive preference to generalize novel properties on the basis of category membership, suggesting the presence of a taxonomic bias in their representations.
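To make the property-induction task concrete, here is a toy sketch of similarity-based generalization. The hand-crafted feature vectors and the `induction_strength` helper are hypothetical illustrations only -- the paper's actual framework queries neural LM representations, not these toy vectors -- but the sketch shows the shape of the task: a novel property attributed to a premise concept (robin) generalizes more strongly to taxonomically close conclusions (sparrow) than to distant ones.

```python
import math

# Hypothetical, hand-crafted feature vectors for a few concepts.
# These values are purely illustrative; the framework described in
# the abstract uses neural LM representations instead.
CONCEPTS = {
    "robin":   [0.90, 0.80, 0.10],
    "sparrow": [0.85, 0.75, 0.15],
    "canary":  [0.80, 0.70, 0.20],
    "hammer":  [0.05, 0.10, 0.90],  # taxonomically distant control
}

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def induction_strength(premise, conclusion):
    """Strength of generalizing a novel property (e.g. 'has sesamoid
    bones') from the premise concept to the conclusion concept,
    approximated here by representational similarity."""
    return cosine(CONCEPTS[premise], CONCEPTS[conclusion])
```

Under this sketch, a taxonomic bias corresponds to `induction_strength("robin", "sparrow")` exceeding `induction_strength("robin", "hammer")`: the novel property projects preferentially to same-category concepts.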