Knowledge graphs such as DBpedia, Freebase, or Wikidata always contain a taxonomic backbone that allows the arrangement and structuring of various concepts in accordance with the hypo-hypernym ("class-subclass") relationship. With the rapid growth of lexical resources for specific domains, the problem of automatically extending existing knowledge bases with new words is becoming more and more widespread. In this paper, we address the problem of taxonomy enrichment, which aims at adding new words to an existing taxonomy. We present a new method that achieves high results on this task with little effort. It relies on resources that exist for the majority of languages, which makes the method universal. We extend our method by incorporating deep representations of graph structures such as node2vec, Poincaré embeddings, and GCNs that have recently demonstrated promising results on various NLP tasks. Furthermore, combining these representations with word embeddings allows us to beat the state of the art. We conduct a comprehensive study of the existing approaches to taxonomy enrichment based on word and graph vector representations and their fusion. We also explore ways of using deep learning architectures to extend the taxonomic backbones of knowledge graphs. We create a number of datasets for taxonomy extension for English and Russian. We achieve state-of-the-art results across different datasets and provide an in-depth error analysis of mistakes.
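To make the fusion idea concrete, the following is a minimal, self-contained Python sketch of one possible way to combine distributional word vectors with structural node vectors (such as node2vec) and rank candidate hypernym synsets for a new word. All names, dimensions, the toy random data, and the zero-padding choice for the query's missing graph vector are illustrative assumptions, not the exact pipeline evaluated in the paper.

```python
import numpy as np

# Illustrative setup: in practice these would be pre-trained vectors
# (e.g. fastText word vectors and node2vec/Poincaré taxonomy-node vectors).
rng = np.random.default_rng(0)
DIM_WORD, DIM_GRAPH = 300, 128

# Toy taxonomy nodes (synsets) with toy structural embeddings.
synsets = ["animal.n.01", "dog.n.01", "vehicle.n.01", "car.n.01"]
graph_emb = {s: rng.normal(size=DIM_GRAPH) for s in synsets}

# Toy distributional embeddings for synset lemmas and for the new query word.
word_emb = {s: rng.normal(size=DIM_WORD) for s in synsets}
word_emb["puppy"] = rng.normal(size=DIM_WORD)


def normalize(v):
    """L2-normalise a vector, guarding against division by zero."""
    return v / (np.linalg.norm(v) + 1e-12)


def fuse(word_vec, graph_vec):
    """One simple fusion scheme: concatenate normalised word and graph vectors."""
    return np.concatenate([normalize(word_vec), normalize(graph_vec)])


def rank_candidates(query, top_k=3):
    """Rank taxonomy nodes as hypernym candidates for a new word by cosine similarity.

    The query word is not yet in the taxonomy, so it has no graph vector;
    here its graph part is padded with zeros so only the word part contributes.
    """
    q = normalize(np.concatenate([normalize(word_emb[query]), np.zeros(DIM_GRAPH)]))
    scored = [(s, float(np.dot(q, normalize(fuse(word_emb[s], graph_emb[s])))))
              for s in synsets]
    return sorted(scored, key=lambda x: -x[1])[:top_k]


print(rank_candidates("puppy"))  # ranked candidate hypernym synsets
```

In a real setting the ranking step would also exploit the taxonomy structure around each candidate (e.g. scoring a candidate together with its neighbours), and the fusion could be learned rather than a plain concatenation; the sketch only illustrates the interface between word-level and graph-level representations.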