The concepts in knowledge graphs (KGs) enable machines to understand natural language, and thus play an indispensable role in many applications. However, existing KGs have poor coverage of concepts, especially fine-grained ones. To supply existing KGs with more new and fine-grained concepts, we propose a novel concept extraction framework, namely MRC-CE, to extract large-scale multi-granular concepts from the descriptive texts of entities. Specifically, MRC-CE is built on a BERT-based machine reading comprehension model, which extracts fine-grained concepts with a pointer network. Furthermore, a random forest and rule-based pruning are adopted to enhance MRC-CE's precision and recall simultaneously. Our experiments on multilingual KGs, i.e., English Probase and Chinese CN-DBpedia, justify MRC-CE's superiority over state-of-the-art extraction models in KG completion. Notably, after running MRC-CE on every entity in CN-DBpedia, more than 7,053,900 new concepts (instanceOf relations) were added to the KG. The code and datasets have been released at https://github.com/fcihraeipnusnacwh/MRC-CE
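The multi-span decoding idea behind the pointer-network step can be sketched as follows. This is a minimal illustration, not the released implementation: the function name, the span-length limit, the threshold, and the toy probabilities (which stand in for the BERT-based MRC model's per-token outputs) are all assumptions.

```python
def extract_spans(tokens, start_probs, end_probs, threshold=0.5, max_len=5):
    """Enumerate every span whose start and end probabilities both clear
    `threshold`, so one passage can yield several overlapping concepts of
    different granularities (illustrative decoder, not the paper's code)."""
    spans = []
    for i, p_start in enumerate(start_probs):
        if p_start < threshold:
            continue
        # Consider every admissible end position for this start token.
        for j in range(i, min(i + max_len, len(tokens))):
            p_end = end_probs[j]
            if p_end >= threshold:
                spans.append((" ".join(tokens[i:j + 1]), p_start * p_end))
    # Rank candidate concepts by joint start/end score.
    return sorted(spans, key=lambda s: -s[1])

# Toy descriptive text: "a French romantic poet" should yield the
# coarse concept "poet" as well as finer-grained multi-token concepts.
tokens = ["a", "French", "romantic", "poet"]
start_probs = [0.1, 0.7, 0.8, 0.9]   # hypothetical model outputs
end_probs = [0.0, 0.1, 0.2, 0.95]
print(extract_spans(tokens, start_probs, end_probs))
```

In the full framework, a candidate list like this would then be filtered by the random forest and the rule-based pruning before the surviving concepts are written into the KG as instanceOf relations.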