Building systems that achieve a deeper understanding of language is one of the central goals of natural language processing (NLP). Towards this goal, recent works have begun to train language models on narrative datasets that require extracting the most critical information by integrating across long contexts. However, it is still an open question whether these models are learning a deeper understanding of the text, or whether they are simply learning a heuristic to complete the task. This work investigates this question by turning to the one language processing system that truly understands complex language: the human brain. We show that training language models for deeper narrative understanding results in richer representations that have improved alignment to human brain activity. We further find that the improvements in brain alignment are larger for character names than for other discourse features, which indicates that these models are learning important narrative elements. Taken together, these results suggest that this type of training can indeed lead to deeper language understanding. These findings have consequences both for cognitive neuroscience, by revealing some of the significant factors behind brain-NLP alignment, and for NLP, by highlighting that understanding of long-range context can be improved beyond language modeling.
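To make the "brain alignment" claim concrete, the sketch below shows one standard way such alignment is typically quantified in this line of work: fit a linear encoding model that predicts fMRI responses from language-model representations of the same stimulus, then score held-out prediction accuracy per voxel. This is a minimal illustration of the general technique, not necessarily the exact pipeline used here; the function name, hyperparameters, and data shapes are all assumptions.

```python
# Minimal sketch of a linear encoding-model alignment metric (illustrative only).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

def brain_alignment(lm_features: np.ndarray, fmri: np.ndarray,
                    alpha: float = 1.0) -> np.ndarray:
    """Return the mean cross-validated Pearson correlation per voxel.

    lm_features: (n_timepoints, n_dims) language-model representations of the stimulus.
    fmri:        (n_timepoints, n_voxels) recorded brain responses to the same stimulus.
    """
    scores = np.zeros(fmri.shape[1])
    kf = KFold(n_splits=5)
    for train_idx, test_idx in kf.split(lm_features):
        # Fit a ridge regression from model features to all voxels at once.
        model = Ridge(alpha=alpha).fit(lm_features[train_idx], fmri[train_idx])
        pred = model.predict(lm_features[test_idx])
        # Pearson correlation between predicted and actual response, per voxel.
        pred_c = pred - pred.mean(axis=0)
        true_c = fmri[test_idx] - fmri[test_idx].mean(axis=0)
        denom = np.linalg.norm(pred_c, axis=0) * np.linalg.norm(true_c, axis=0)
        scores += (pred_c * true_c).sum(axis=0) / np.maximum(denom, 1e-8)
    return scores / kf.get_n_splits()
```

Under this kind of metric, "improved alignment" means that representations from the narrative-trained model yield higher held-out correlations with brain activity than those from a comparable language-modeling-only baseline.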