Machine learning has become the quintessential solution for many AI problems, but learned models remain heavily dependent on their specific training data. Some models can incorporate prior knowledge in a Bayesian setup, but they cannot access organised world knowledge on demand. In this work, we propose to enhance learning models with world knowledge, in the form of Knowledge Graph (KG) fact triples, for Natural Language Processing (NLP) tasks. Our aim is to develop a deep learning model that can extract relevant prior support facts from knowledge graphs depending on the task, using an attention mechanism. We introduce a convolution-based model for learning representations of knowledge graph entity and relation clusters in order to reduce the attention space. We show that the proposed method scales well with the amount of prior information to be processed and can be applied to any generic NLP task. Using this method, we demonstrate significant performance improvements on text classification with the News20 and DBPedia datasets, and on natural language inference with the Stanford Natural Language Inference (SNLI) dataset. We also show that a deep learning model can be trained well with substantially less labeled training data when it has access to organised world knowledge in the form of a knowledge graph.
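To make the core mechanism concrete, the following is a minimal sketch (in PyTorch) of attending over KG fact-triple embeddings with a task context vector; it is illustrative only, assumes pre-computed triple embeddings, and all names (attend_to_facts, context, fact_embeddings) are hypothetical rather than taken from the paper.

```python
# Minimal sketch: attention over knowledge-graph fact triples.
# Assumes each (head, relation, tail) triple has a pre-trained embedding.
import torch
import torch.nn.functional as F

def attend_to_facts(context, fact_embeddings):
    """Weight KG fact triples by relevance to a task context vector.

    context:         (d,)   task/query representation, e.g. a sentence encoding
    fact_embeddings: (n, d) one row per fact-triple embedding
    returns:         (d,)   attention-weighted summary of supporting facts
    """
    scores = fact_embeddings @ context   # (n,) relevance score per fact
    weights = F.softmax(scores, dim=0)   # normalise into an attention distribution
    return weights @ fact_embeddings     # convex combination of fact vectors

# Illustrative usage. To reduce the attention space as the abstract suggests,
# one could first attend over cluster-level representations and then attend
# only over the triples inside the best-matching clusters.
context = torch.randn(128)
facts = torch.randn(1000, 128)
support = attend_to_facts(context, facts)
```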