Several popular Transformer-based language models have been found to be successful for text-driven brain encoding. However, existing literature leverages only pretrained text Transformer models and has not explored the efficacy of task-specific learned Transformer representations. In this work, we explore transfer learning from representations learned for ten popular natural language processing tasks (two syntactic and eight semantic) for predicting brain responses from two diverse datasets: Pereira (subjects reading sentences from paragraphs) and Narratives (subjects listening to spoken stories). Encoding models based on task features are used to predict activity in different regions across the whole brain. Features from coreference resolution, NER, and shallow syntax parsing explain greater variance for reading activity. On the other hand, for listening activity, tasks such as paraphrase generation, summarization, and natural language inference show better encoding performance. Experiments across all ten task representations provide the following cognitive insights: (i) brain activity in the language left hemisphere is predicted better than in the language right hemisphere, (ii) the posterior medial cortex, temporo-parieto-occipital junction, and dorsal frontal lobe show higher correlations than the early auditory and auditory association cortex, and (iii) syntactic and semantic tasks display good predictive performance across brain regions for reading and listening stimuli, respectively.
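To make the encoding pipeline concrete, below is a minimal sketch of a voxel-wise encoding model: task-specific Transformer features for each stimulus are mapped to fMRI responses with cross-validated ridge regression, and performance is scored as the Pearson correlation between predicted and observed activity per voxel. This is an illustrative assumption of the standard setup, not the paper's implementation; the feature/voxel dimensions, synthetic data, and regularization strength are placeholders.

```python
# Hypothetical encoding-model sketch: task features -> ridge -> voxel responses.
# Dimensions and data are illustrative placeholders, not the paper's setup.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)

# Assume stimulus features were already extracted from a task-specific
# Transformer (e.g., mean-pooled hidden states), one row per stimulus.
n_stimuli, n_features, n_voxels = 200, 768, 1000
X = rng.standard_normal((n_stimuli, n_features))           # task representations
Y = X @ rng.standard_normal((n_features, n_voxels)) * 0.1  # synthetic fMRI data
Y += rng.standard_normal((n_stimuli, n_voxels))            # observation noise

def pearson_per_voxel(y_true, y_pred):
    """Pearson correlation between observed and predicted response, per voxel."""
    yt = y_true - y_true.mean(axis=0)
    yp = y_pred - y_pred.mean(axis=0)
    num = (yt * yp).sum(axis=0)
    den = np.sqrt((yt ** 2).sum(axis=0) * (yp ** 2).sum(axis=0))
    return num / den

# K-fold cross-validation: fit ridge on training folds, correlate predictions
# with held-out responses, and average the per-voxel scores across folds.
scores = np.zeros(n_voxels)
kf = KFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in kf.split(X):
    model = Ridge(alpha=1.0)  # regularization strength is an assumed value
    model.fit(X[train_idx], Y[train_idx])
    scores += pearson_per_voxel(Y[test_idx], model.predict(X[test_idx]))
scores /= kf.get_n_splits()

print(f"mean cross-validated correlation across voxels: {scores.mean():.3f}")
```

In this framing, comparing task representations amounts to swapping the feature matrix X (one per NLP task) while holding the regression and evaluation pipeline fixed, so differences in per-voxel correlation can be attributed to the representations themselves.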