Most existing approaches to Knowledge Base Question Answering (KBQA) are tied to a specific underlying knowledge base, either because of inherent assumptions in the approach or because evaluating it on a different knowledge base requires non-trivial changes. However, many popular knowledge bases share similarities in their underlying schemas that can be leveraged to facilitate generalization across knowledge bases. To achieve this generalization, we introduce a KBQA framework based on a two-stage architecture that explicitly separates semantic parsing from knowledge base interaction, facilitating transfer learning across datasets and knowledge graphs. We show that pretraining on datasets with a different underlying knowledge base can nevertheless provide significant performance gains and reduce sample complexity. Our approach achieves comparable or state-of-the-art performance on LC-QuAD (DBpedia), WebQSP (Freebase), SimpleQuestions (Wikidata), and MetaQA (Wikimovies-KG).
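The two-stage separation described above can be pictured as a minimal interface sketch, where stage 1 produces a KB-agnostic meaning representation and stage 2 grounds it against a particular knowledge base. This is an illustration only: the names (`SemanticParser`, `KBExecutor`, `LogicalForm`, `answer`) are hypothetical and do not come from the paper.

```python
# Minimal sketch of a two-stage KBQA interface.
# All class and function names here are illustrative assumptions,
# not the paper's actual implementation.

from dataclasses import dataclass
from typing import Protocol


@dataclass
class LogicalForm:
    """KB-agnostic meaning representation produced by stage 1."""
    relation: str
    entity: str


class KBExecutor(Protocol):
    """Stage 2 interface: grounds a logical form against one knowledge base
    (e.g., DBpedia, Freebase, or Wikidata)."""
    def execute(self, lf: LogicalForm) -> list[str]: ...


class SemanticParser:
    """Stage 1: maps a question to a logical form. Because it never touches
    KB-specific identifiers, it can be pretrained on datasets built over a
    different knowledge base."""
    def parse(self, question: str) -> LogicalForm:
        # A real parser would be a learned model; this stub only marks
        # the interface boundary between the two stages.
        raise NotImplementedError


def answer(question: str, parser: SemanticParser, kb: KBExecutor) -> list[str]:
    # Swapping `kb` leaves the parser untouched, which is the property
    # that enables transfer across knowledge graphs.
    return kb.execute(parser.parse(question))
```

Under this separation, adapting to a new knowledge base means supplying a new `KBExecutor` while reusing (or fine-tuning) the pretrained parser, which is one way to read the sample-complexity reduction claimed in the abstract.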