Recent artificial neural networks that process natural language achieve unprecedented performance on tasks requiring sentence-level understanding. As such, they could be interesting models of how the human brain integrates linguistic information. We review studies that compare these artificial language models with human brain activity, and we assess the extent to which this approach has improved our understanding of the neural processes involved in natural language comprehension. Two main results emerge. First, the neural representation of word meaning aligns with the context-dependent, dense word vectors used by artificial neural networks. Second, the processing hierarchy that emerges within artificial neural networks broadly matches that of the brain, but is surprisingly inconsistent across studies. We discuss current challenges in establishing artificial neural networks as process models of natural language comprehension, and we suggest exploiting the highly structured representational geometry of artificial neural networks when mapping representations to brain data.