Language Models (LMs) become outdated as the world changes; they often fail to perform tasks that require recent factual information which was absent or different at training time, a phenomenon called temporal misalignment. This is an especially challenging problem because the research community still lacks a coherent dataset for assessing the adaptability of LMs to frequently updated knowledge corpora such as Wikipedia. To this end, we introduce TemporalWiki, a lifelong benchmark for ever-evolving LMs that utilizes the difference between consecutive snapshots of English Wikipedia and English Wikidata for training and evaluation, respectively. The benchmark thus allows researchers to periodically track an LM's ability to retain previous knowledge and acquire updated/new knowledge at each point in time. We also find that, on our benchmark, training an LM on the diff data with continual learning methods achieves similar or better perplexity than training on the entire snapshot at 12 times less computational cost, which verifies that factual knowledge in LMs can be safely updated with minimal training data via continual learning. The dataset and the code are available at https://github.com/joeljang/temporalwiki.
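As a rough illustration of the snapshot-diff idea (a minimal sketch, not the authors' released pipeline, which is available at the repository above), the Python snippet below compares two hypothetical snapshot dictionaries mapping article titles to text and keeps only articles that are new or whose text changed between consecutive dumps:

```python
# Illustrative sketch only: build "diff data" from two consecutive snapshots.
# `old_snapshot` and `new_snapshot` are hypothetical dicts of title -> article text,
# e.g. loaded from two consecutive Wikipedia dumps.

def build_diff_set(old_snapshot: dict, new_snapshot: dict) -> dict:
    """Return only the articles that are new or edited between the two snapshots."""
    diff = {}
    for title, text in new_snapshot.items():
        if title not in old_snapshot:        # newly created article -> new knowledge
            diff[title] = text
        elif old_snapshot[title] != text:    # edited article -> updated knowledge
            diff[title] = text
    return diff

if __name__ == "__main__":
    # Hypothetical example data for illustration.
    old = {"Article A": "old fact ...", "Article B": "unchanged fact ..."}
    new = {"Article A": "updated fact ...",
           "Article B": "unchanged fact ...",
           "Article C": "newly added article ..."}
    print(sorted(build_diff_set(old, new)))  # ['Article A', 'Article C']
```

Training only on such diff data, rather than the full new snapshot, is what yields the reported reduction in computational cost.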