Predicting missing facts in a knowledge graph (KG) is a crucial task in knowledge base construction and reasoning, and it has been the subject of much recent research using KG embeddings. While existing KG embedding approaches mainly learn and predict facts within a single KG, a more plausible solution would benefit from the knowledge in multiple language-specific KGs, considering that different KGs have their own strengths and limitations in data quality and coverage. This is quite challenging, since the transfer of knowledge among multiple independently maintained KGs is often hindered by the insufficiency of alignment information and the inconsistency of described facts. In this paper, we propose KEnS, a novel framework for embedding learning and ensemble knowledge transfer across a number of language-specific KGs. KEnS embeds all KGs in a shared embedding space, where the association of entities is captured based on self-learning. Then, KEnS performs ensemble inference to combine prediction results from the embeddings of multiple language-specific KGs, for which multiple ensemble techniques are investigated. Experiments on five real-world language-specific KGs show that KEnS consistently improves state-of-the-art methods on KG completion, by effectively identifying and leveraging complementary knowledge.
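The ensemble inference step described above can be illustrated with a minimal sketch. This is not the paper's actual algorithm; it assumes a simple weighted-score aggregation in which each language-specific KG embedding produces scores for candidate tail entities (aligned into a shared entity vocabulary), and per-KG weights reflect each KG's estimated reliability. The function name `ensemble_rank` and the scoring scheme are hypothetical.

```python
def ensemble_rank(per_kg_scores, weights):
    """Combine candidate-entity scores from multiple KG embeddings.

    per_kg_scores: list of dicts, one per language-specific KG,
        mapping a candidate entity (in the shared vocabulary) to a
        plausibility score (higher = more plausible).
    weights: per-KG reliability weights (hypothetical; the paper
        investigates several ensemble techniques).
    Returns candidates sorted by combined score, best first.
    """
    combined = {}
    for scores, w in zip(per_kg_scores, weights):
        for entity, score in scores.items():
            combined[entity] = combined.get(entity, 0.0) + w * score
    return sorted(combined, key=combined.get, reverse=True)
```

In this toy setting, an entity missing from one KG can still be ranked highly if another KG scores it well, which is the kind of complementary knowledge transfer the framework aims to exploit.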