Neural Language Models (NLMs) have made tremendous advances in recent years, achieving impressive performance on various linguistic tasks. Capitalizing on this, studies in neuroscience have started to use NLMs to study neural activity in the human brain during language processing. However, many questions remain unanswered regarding which factors determine the ability of a neural language model to capture brain activity (its 'brain score'). Here, we take first steps in this direction and examine the impact of test loss, training corpus, and model architecture (comparing GloVe, LSTM, GPT-2, and BERT) on the prediction of functional Magnetic Resonance Imaging (fMRI) timecourses of participants listening to an audiobook. We find that (1) untrained versions of each model already explain a significant amount of signal in the brain by capturing similarity in brain responses across identical words, with the untrained LSTM outperforming the transformer-based models, as it is less affected by context; (2) training NLP models improves brain scores in the same brain regions irrespective of the model's architecture; (3) perplexity (test loss) is not a good predictor of brain score; (4) training data have a strong influence on the outcome and, notably, off-the-shelf models may lack statistical power to detect brain activations. Overall, we outline the impact of model-training choices, and suggest good practices for future studies aiming to explain the human language system using neural language models.
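The 'brain score' mentioned above is commonly computed by fitting a regularized linear map from model activations to fMRI voxel timecourses and correlating held-out predictions with the measured signal. Below is a minimal sketch of that idea using synthetic data; the array sizes, noise level, and use of `RidgeCV` with a mean-over-voxels Pearson correlation are illustrative assumptions, not the paper's exact pipeline.

```python
# Hedged sketch of a "brain score": ridge-regress NLM activations onto
# fMRI voxel timecourses, then score held-out predictions per voxel
# with Pearson correlation. All data here are synthetic placeholders.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trs, n_dims, n_voxels = 500, 64, 10           # hypothetical sizes
X = rng.standard_normal((n_trs, n_dims))        # model activations per fMRI volume
W = rng.standard_normal((n_dims, n_voxels))     # unknown "true" linear map
Y = X @ W + 0.5 * rng.standard_normal((n_trs, n_voxels))  # synthetic BOLD signal

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)
mapping = RidgeCV(alphas=np.logspace(-2, 3, 10)).fit(X_tr, Y_tr)
Y_hat = mapping.predict(X_te)

# Brain score (one common convention): mean held-out Pearson r across voxels.
r_per_voxel = [np.corrcoef(Y_te[:, v], Y_hat[:, v])[0, 1] for v in range(n_voxels)]
brain_score = float(np.mean(r_per_voxel))
```

In practice the encoding model is fit per voxel (or per parcel) with cross-validation over fMRI runs, and significance is assessed against a null distribution; the sketch only conveys the regression-then-correlation structure of the metric.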