Knowledge representation learning (KRL) aims to represent the entities and relations of a knowledge graph in a low-dimensional semantic space, and has been widely used in knowledge-driven tasks. In this article, we introduce the reader to the motivations for KRL and give an overview of existing KRL approaches. Afterwards, we conduct an extensive quantitative comparison and analysis of several typical KRL methods on three knowledge-acquisition evaluation tasks: knowledge graph completion, triple classification, and relation extraction. We also review real-world applications of KRL, such as language modeling, question answering, information retrieval, and recommender systems. Finally, we discuss the remaining challenges and outline future directions for KRL. The code and datasets used in the experiments can be found at https://github.com/thunlp/OpenKE.
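To make the core idea concrete, the following is a minimal sketch of a translation-based scoring function in the style of TransE, one of the typical KRL methods compared in this article. The embedding dimension and random vectors are illustrative assumptions, not values from the article:

```python
import numpy as np

# Minimal TransE-style sketch (illustrative assumptions, not the
# article's experimental setup): a triple (h, r, t) is plausible
# when the head embedding translated by the relation embedding
# lands near the tail embedding, i.e. ||h + r - t|| is small.
rng = np.random.default_rng(0)
dim = 50                      # hypothetical embedding dimension
h = rng.normal(size=dim)      # head-entity embedding
r = rng.normal(size=dim)      # relation embedding
t = h + r                     # a "true" tail lies near h + r

def score(h, r, t):
    """Lower score means a more plausible triple."""
    return np.linalg.norm(h + r - t)

true_score = score(h, r, t)
random_score = score(h, r, rng.normal(size=dim))
assert true_score < random_score
```

In knowledge graph completion, this score ranks candidate tails for a query (h, r, ?); the candidate with the smallest distance is predicted as the missing entity.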