In recent years, Knowledge-Based Question Answering (KBQA), which aims to answer natural language questions using facts in a knowledge base, has been extensively studied. Existing approaches often assume a static knowledge base. However, knowledge evolves over time in the real world. Directly applying a fine-tuning strategy to an evolving knowledge base leads to severe catastrophic forgetting. In this paper, we propose a new incremental KBQA learning framework that progressively expands its learning capacity, as humans do. Specifically, it comprises a margin-distilled loss and a collaborative exemplar selection method, which together overcome catastrophic forgetting by taking advantage of knowledge distillation. We reorganize the SimpleQuestion dataset to evaluate the proposed incremental learning solution to KBQA. Comprehensive experiments demonstrate its effectiveness and efficiency when working with an evolving knowledge base.
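The abstract names a margin-distilled loss built on knowledge distillation but does not spell out its form. The sketch below is one plausible reading only, assuming a hinge ranking term over candidate scores plus a KL distillation term against the frozen model from the previous knowledge-base version; the function name `margin_distilled_loss`, the `margin` value, and the candidate-scoring setup are illustrative assumptions, not the paper's definition.

```python
# A minimal sketch, NOT the paper's exact formulation: the hinge form, the KL term,
# and the hyperparameter `margin` below are assumptions for illustration.
import torch
import torch.nn.functional as F


def margin_distilled_loss(student_scores, teacher_scores, gold_idx, margin=0.5):
    """Hinge-style ranking loss plus distillation for one question.

    student_scores: (num_candidates,) scores from the model being trained.
    teacher_scores: (num_candidates,) scores from the frozen model trained on
                    the previous version of the knowledge base.
    gold_idx:       index of the gold candidate in the candidate list.
    """
    # Ranking term: the gold candidate should beat every distractor by `margin`.
    gold = student_scores[gold_idx]
    distractor_mask = torch.ones_like(student_scores, dtype=torch.bool)
    distractor_mask[gold_idx] = False
    ranking = F.relu(margin - (gold - student_scores[distractor_mask])).mean()

    # Distillation term: keep the new score distribution close to the old model's,
    # so knowledge about earlier facts is not simply overwritten.
    distill = F.kl_div(
        F.log_softmax(student_scores, dim=0),
        F.softmax(teacher_scores.detach(), dim=0),
        reduction="sum",
    )
    return ranking + distill


# Toy usage: 4 candidate relations, gold candidate at index 2.
student = torch.tensor([0.1, 0.3, 0.8, -0.2], requires_grad=True)
teacher = torch.tensor([0.0, 0.2, 0.9, -0.1])
loss = margin_distilled_loss(student, teacher, gold_idx=2)
loss.backward()
```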