Knowledge graphs store a large number of factual triples but remain inevitably incomplete. Previous knowledge graph completion (KGC) models predict missing links between entities relying solely on fact-view data, ignoring valuable commonsense knowledge. Moreover, previous knowledge graph embedding (KGE) techniques suffer from invalid negative sampling and the uncertainty of fact-view link prediction, limiting KGC performance. To address these challenges, we propose a novel and scalable Commonsense-Aware Knowledge Embedding (CAKE) framework that automatically extracts commonsense from factual triples with entity concepts. The generated commonsense provides effective self-supervision to facilitate both high-quality negative sampling (NS) and joint commonsense and fact-view link prediction. Experimental results on the KGC task demonstrate that assembling our framework enhances the performance of the original KGE models, and that the proposed commonsense-aware NS module outperforms other NS techniques. Moreover, the proposed framework can be easily adapted to various KGE models and can explain the predicted results.
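To make the two ideas named above concrete, the following is a minimal sketch (not the authors' implementation) of how concept-level commonsense could be lifted from factual triples and then used to filter negative samples; the toy data and names such as `entity2concepts` and `sample_negative_tail` are hypothetical.

```python
# A minimal sketch, assuming each entity is annotated with one or more concepts:
# (1) lift factual triples to concept-level "commonsense" triples, and
# (2) use that commonsense to keep only concept-consistent negative samples.
import random
from collections import defaultdict

# Toy fact-view triples (head, relation, tail) and an entity -> concepts mapping.
facts = {("LeBron_James", "plays_for", "LA_Lakers"),
         ("Stephen_Curry", "plays_for", "GS_Warriors")}
entity2concepts = {"LeBron_James": {"athlete"}, "Stephen_Curry": {"athlete"},
                   "LA_Lakers": {"sports_team"}, "GS_Warriors": {"sports_team"},
                   "Hollywood": {"place"}}

# (1) Commonsense extraction: every factual triple induces concept-level triples,
# e.g. (athlete, plays_for, sports_team).
commonsense = defaultdict(set)
for h, r, t in facts:
    for ch in entity2concepts[h]:
        for ct in entity2concepts[t]:
            commonsense[r].add((ch, ct))

# (2) Commonsense-aware negative sampling: corrupt the tail with an entity whose
# concept remains plausible for the relation, but whose triple is not a known fact.
def sample_negative_tail(h, r, t, entities):
    head_concepts = entity2concepts[h]
    candidates = [e for e in entities
                  if e != t and (h, r, e) not in facts
                  and any((ch, ce) in commonsense[r]
                          for ch in head_concepts
                          for ce in entity2concepts[e])]
    return random.choice(candidates) if candidates else None

entities = list(entity2concepts)
print(sample_negative_tail("LeBron_James", "plays_for", "LA_Lakers", entities))
# -> "GS_Warriors": concept-consistent, hence a harder (higher-quality) negative
#    than a random corruption such as "Hollywood".
```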