Large Language Models (LMs) are known to encode world knowledge in their parameters as they pretrain on vast amounts of web corpora, knowledge that is often utilized for performing knowledge-dependent downstream tasks such as question answering, fact-checking, and open dialogue. In real-world scenarios, the world knowledge stored in LMs can quickly become outdated as the world changes, but it is non-trivial to avoid catastrophic forgetting and reliably acquire new knowledge while preserving invariant knowledge. To push the community towards better maintenance of ever-changing LMs, we formulate a new continual learning (CL) problem called Continual Knowledge Learning (CKL). We construct a new benchmark and metric to quantify the retention of time-invariant world knowledge, the update of outdated knowledge, and the acquisition of new knowledge. We adopt applicable recent methods from the literature to create several strong baselines. Through extensive experiments, we find that CKL exhibits unique challenges that are not addressed in previous CL setups, and that parameter expansion is necessary to reliably retain and learn knowledge simultaneously. By highlighting the critical causes of knowledge forgetting, we show that CKL is a challenging and important problem that helps us better understand and train ever-changing LMs.
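To make the three quantities mentioned above concrete, the following is a minimal sketch of how one might score a model before and after continual pretraining on probe sets for invariant, updated, and new knowledge. The probe format, the exact-match scoring, and the combined forgetting-to-gain ratio are illustrative assumptions for exposition, not the benchmark or metric definitions used in the paper.

```python
# Illustrative sketch (not the paper's exact metric): quantify how a continually
# trained LM handles three probe sets -- time-invariant, updated, and new knowledge.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Probe:
    question: str  # cloze-style or QA prompt about a fact
    answer: str    # gold answer string


def exact_match_accuracy(predictions: List[str], probes: List[Probe]) -> float:
    """Fraction of probes whose prediction matches the gold answer exactly."""
    correct = sum(p.strip().lower() == q.answer.strip().lower()
                  for p, q in zip(predictions, probes))
    return correct / max(len(probes), 1)


def ckl_summary(preds_before: Dict[str, List[str]],
                preds_after: Dict[str, List[str]],
                probes: Dict[str, List[Probe]]) -> Dict[str, float]:
    """Compare a model before/after continued pretraining on a new corpus.

    Dict keys are the three probe categories (names are assumptions here):
      'invariant' -- facts that should be retained,
      'updated'   -- facts whose answers changed and should be revised,
      'new'       -- facts only present in the new corpus.
    """
    acc = {k: exact_match_accuracy(preds_after[k], probes[k]) for k in probes}
    # Forgetting on invariant knowledge: accuracy lost relative to the original model.
    forgotten = max(
        exact_match_accuracy(preds_before['invariant'], probes['invariant'])
        - acc['invariant'], 0.0)
    # Gains on updated and newly acquired knowledge after continual learning.
    gained = acc['updated'] + acc['new']
    # A single trade-off ratio (lower is better): forgetting per unit of knowledge gained.
    acc['forgetting_to_gain_ratio'] = forgotten / gained if gained > 0 else float('inf')
    return acc
```

Such a summary exposes the trade-off the abstract describes: a method that blocks all parameter change forgets little but gains nothing, while unconstrained finetuning gains new knowledge at the cost of invariant knowledge.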