Language Models (LMs) become outdated as the world changes; they often fail at tasks requiring recent factual information that was absent from or different in their training data, a phenomenon called temporal misalignment. This is an especially challenging problem because the research community still lacks a coherent dataset for assessing LMs' adaptability to frequently updated knowledge corpora such as Wikipedia. To this end, we introduce TemporalWiki, a lifelong benchmark for ever-evolving LMs that utilizes the difference between consecutive snapshots of English Wikipedia and English Wikidata for training and evaluation, respectively. The benchmark thus allows researchers to periodically track an LM's ability to retain previous knowledge and acquire updated or new knowledge at each point in time. We also find that continually training an LM on only the diff data achieves perplexity similar to or better than training on the entire snapshot in our benchmark, at 12 times lower computational cost, which verifies that factual knowledge in LMs can be safely updated with minimal training data via continual learning. The dataset and the code are available at https://github.com/joeljang/temporalwiki.
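To make the "diff data" idea concrete, the sketch below shows one simple way to derive a diff corpus from two consecutive snapshots, keeping only articles that are new or whose text changed. This is a minimal illustration under the assumption that each snapshot is a mapping from article title to article text; it is not the authors' actual pipeline, and the function name build_diff_corpus is hypothetical.

```python
# Minimal sketch (not the authors' pipeline): derive a training "diff" corpus
# from two consecutive Wikipedia snapshots, keeping only new or changed articles.
# Snapshots are assumed to be dicts mapping article title -> article text.

def build_diff_corpus(old_snapshot: dict[str, str],
                      new_snapshot: dict[str, str]) -> dict[str, str]:
    """Return articles that are new or whose text changed between snapshots."""
    diff = {}
    for title, text in new_snapshot.items():
        if title not in old_snapshot:        # newly added article
            diff[title] = text
        elif old_snapshot[title] != text:    # updated article
            diff[title] = text
    return diff


if __name__ == "__main__":
    old = {"Alan Turing": "Alan Turing was a mathematician.",
           "Python": "Python is a programming language."}
    new = {"Alan Turing": "Alan Turing was a mathematician and cryptanalyst.",
           "Python": "Python is a programming language.",
           "TemporalWiki": "TemporalWiki is a lifelong benchmark."}
    corpus = build_diff_corpus(old, new)
    # Expected keys: "Alan Turing" (changed) and "TemporalWiki" (new)
    print(sorted(corpus))
```

Because the diff corpus contains only the updated and newly added articles, it is a small fraction of a full snapshot, which is what makes continual training on it far cheaper than retraining on the entire snapshot.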