Modern large language models (LLMs) are trained in a regime where the model sees most training examples only a few times over the course of training. What does a model remember about an example it has seen only a few times, and how long does that memory persist as training continues with new examples? Here, we investigate these questions through simple recognition, recall, and retention experiments with LLMs. In recognition experiments, we ask whether the model can distinguish a seen example from a novel example; in recall experiments, we ask whether the model can correctly recall a seen example when cued by a part of it; and in retention experiments, we periodically probe the model's memory for the original examples as the model is trained continuously with new examples. We find that a single exposure is generally sufficient for a model to achieve near-perfect accuracy even in very challenging recognition experiments. We estimate that the recognition performance of even small language models easily exceeds the human recognition performance reported in similar experiments (Shepard, 1967). Achieving near-perfect recall takes more exposures, but most models can do so within just three. The flip side of this remarkable capacity for fast learning is that precise memories are quickly overwritten: recall performance for the original examples drops steeply over the first 10 training updates with new examples, followed by a more gradual decline. Even after 100K updates, however, some of the original examples are still recalled near-perfectly. A qualitatively similar retention pattern has previously been observed in studies of human long-term memory retention (Bahrick, 1984). Finally, recognition is much more robust to interference than recall, and memory for natural language sentences is generally superior to memory for unstructured stimuli.
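To make the recognition and recall probes concrete, the following is a minimal sketch of how such probes could be implemented for a causal language model; it is not the authors' code. The model name, the cue fraction, and the likelihood-comparison scoring rule for recognition are illustrative assumptions.

```python
# Minimal sketch of recognition and recall probes for a causal LM.
# Assumptions: a HuggingFace causal LM ("gpt2" as a placeholder), recognition
# scored as a 2-alternative forced choice on sequence log-likelihood, and
# recall scored as exact greedy continuation of a prefix cue.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

def sequence_logprob(text: str) -> float:
    """Total log-probability the model assigns to `text` under teacher forcing."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    # `out.loss` is the mean per-token negative log-likelihood over the
    # (seq_len - 1) predicted positions; undo the averaging and negate.
    return -out.loss.item() * (ids.shape[1] - 1)

def recognition_probe(seen: str, novel: str) -> bool:
    """Recognition: does the model assign higher likelihood to the previously
    seen example than to a matched novel example?"""
    return sequence_logprob(seen) > sequence_logprob(novel)

def recall_probe(seen: str, cue_fraction: float = 0.5) -> bool:
    """Recall: cue the model with a prefix of the seen example and check
    whether greedy decoding reproduces the remainder exactly."""
    ids = tokenizer(seen, return_tensors="pt").input_ids
    cut = max(1, int(ids.shape[1] * cue_fraction))
    prefix = ids[:, :cut]
    with torch.no_grad():
        generated = model.generate(
            prefix,
            max_new_tokens=ids.shape[1] - cut,
            do_sample=False,  # greedy decoding
            pad_token_id=tokenizer.eos_token_id,
        )
    return generated[0, : ids.shape[1]].tolist() == ids[0].tolist()
```

A retention experiment, under the same assumptions, would simply rerun these two probes at regular intervals while the model continues to be updated on new examples.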