Recent decades have witnessed the empirical success of framing Knowledge Graph (KG) embeddings via language models. However, language model-based KG embeddings are usually deployed as static artifacts, which are challenging to modify after deployment without re-training. To address this issue, we propose a new task of editing language model-based KG embeddings. The task aims to enable data-efficient and fast updates to KG embeddings without degrading performance on the remaining facts. We build four new datasets: E-FB15k237, A-FB15k237, E-WN18RR, and A-WN18RR, and evaluate several knowledge editing baselines, demonstrating the limited ability of previous models to handle this challenging task. We further propose a simple yet strong baseline dubbed KGEditor, which utilizes additional parametric layers of a hypernetwork to edit/add facts. Comprehensive experimental results demonstrate that KGEditor performs better when updating specific facts while not affecting the rest, even with low training resources. Code and datasets are available at https://github.com/zjunlp/PromptKG/tree/main/deltaKG.
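To make the hypernetwork idea concrete, below is a minimal PyTorch sketch of the general mechanism the abstract alludes to: a small hypernetwork reads an encoding of the fact to be edited and emits a low-rank weight update for an additional feed-forward layer, so specific facts can be changed while the base KG-embedding model stays frozen. All names (`EditHyperNetwork`, `edit_signal`, etc.) are hypothetical illustrations under these assumptions, not the authors' actual implementation.

```python
# Illustrative sketch only: hypernetwork-based editing via an extra
# parametric layer on top of a frozen language model. Hypothetical names.
import torch
import torch.nn as nn

class EditHyperNetwork(nn.Module):
    def __init__(self, hidden_dim: int):
        super().__init__()
        # Additional parametric layer whose weights are modulated per edit.
        self.extra_ffn = nn.Linear(hidden_dim, hidden_dim)
        # Hypernetwork: maps an edit signal to a rank-1 weight update.
        self.hyper = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 2 * hidden_dim),
        )

    def forward(self, h: torch.Tensor, edit_signal: torch.Tensor) -> torch.Tensor:
        # Split the hypernetwork output into vectors u, v and apply
        # W + u v^T as the edited weight (a rank-1 delta on extra_ffn).
        u, v = self.hyper(edit_signal).chunk(2, dim=-1)
        delta_w = (u.unsqueeze(-1) @ v.unsqueeze(-2)).squeeze(0)
        w = self.extra_ffn.weight + delta_w
        return h @ w.T + self.extra_ffn.bias

# Usage: h is a hidden state from the frozen LM; edit_signal is a pooled
# representation of the new/edited fact. Only the editor is trained.
hidden_dim = 768
editor = EditHyperNetwork(hidden_dim)
h = torch.randn(1, hidden_dim)
edit_signal = torch.randn(1, hidden_dim)
out = editor(h, edit_signal)
```

Under this assumed design, training touches only the editor's parameters, which is consistent with the abstract's claim of fast, data-efficient updates at low training cost.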