When a language model is trained to predict natural language sequences, its prediction at each moment depends on a representation of prior context. What kind of information about the prior context can language models retrieve? We tested whether language models could retrieve the exact words that occurred previously in a text. In our paradigm, language models (transformers and an LSTM) processed English text in which a list of nouns occurred twice. We operationalized retrieval as the reduction in surprisal from the first to the second list. We found that the transformers retrieved both the identity and ordering of nouns from the first list. Further, the transformers' retrieval was markedly enhanced when they were trained on a larger corpus and with greater model depth. Lastly, their ability to index prior tokens was dependent on learned attention patterns. In contrast, the LSTM exhibited less precise retrieval, which was limited to list-initial tokens and to short intervening texts. The LSTM's retrieval was not sensitive to the order of nouns, and it improved when the list was semantically coherent. We conclude that transformers implemented something akin to a working memory system that could flexibly retrieve individual token representations across arbitrary delays; conversely, the LSTM maintained a coarser and more rapidly decaying semantic gist of prior tokens, weighted toward the earliest items.
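A minimal sketch of how such a surprisal-reduction measure could be computed, assuming the Hugging Face transformers library with a pretrained GPT-2; the example text, the noun list, and the mean_list_surprisal helper are illustrative placeholders, not the authors' materials or code:

```python
# Sketch: per-token surprisal from a pretrained transformer, compared between
# the first and second occurrence of a noun list (illustrative, not the paper's code).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# Illustrative stimulus: the same noun list appears twice, separated by filler text.
text = ("Mary wrote down a list: pen, apple, chair, spoon. "
        "After a short walk she read the list again: pen, apple, chair, spoon.")
enc = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**enc).logits  # (1, seq_len, vocab_size)

# Surprisal of token t is -log p(token_t | tokens_<t): shift logits by one position.
log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
targets = enc.input_ids[:, 1:]
surprisal = -log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)  # (1, seq_len-1), in nats


def mean_list_surprisal(char_start: int, char_end: int) -> float:
    """Average surprisal over tokens that overlap the character span [char_start, char_end)."""
    vals = []
    for tok in range(1, enc.input_ids.shape[1]):  # token 0 has no left context
        span = enc.token_to_chars(0, tok)
        if span is not None and span.start < char_end and span.end > char_start:
            vals.append(surprisal[0, tok - 1].item())
    return sum(vals) / len(vals)


list_str = "pen, apple, chair, spoon"
first_start = text.find(list_str)
second_start = text.find(list_str, first_start + 1)

s1 = mean_list_surprisal(first_start, first_start + len(list_str))
s2 = mean_list_surprisal(second_start, second_start + len(list_str))
print(f"first list: {s1:.2f} nats, second list: {s2:.2f} nats, "
      f"repeat surprisal: {100 * s2 / s1:.1f}% of first occurrence")
```

A large drop in mean surprisal on the second list, relative to the first, would indicate that the model has retrieved the earlier tokens; the paper's analyses additionally manipulate list order, list length, intervening-text length, and model architecture.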