We introduce DOLORES, a new method for learning knowledge graph embeddings that effectively captures contextual cues and dependencies among entities and relations. First, we note that short paths on knowledge graphs, comprising chains of entities and relations, can encode valuable information about their contextual usage. We operationalize this notion by representing a knowledge graph not as a collection of triples but as a collection of entity-relation chains, and learn embeddings for entities and relations using deep neural models that capture such contextual usage. In particular, our model is based on Bi-Directional LSTMs and learns deep representations of entities and relations from the constructed entity-relation chains. We show that these representations can easily be incorporated into existing models to significantly advance the state of the art on several knowledge graph prediction tasks such as link prediction, triple classification, and missing relation type prediction (in some cases by at least 9.5%).
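To make the notion of entity-relation chains concrete, the following is a minimal sketch of one plausible way to extract such chains: a random walk over a triple set that emits an alternating sequence of entities and relations. The function names, the walk length, and the toy triples are illustrative assumptions, not the paper's exact procedure.

```python
import random

def build_adjacency(triples):
    """Map each head entity to its outgoing (relation, tail) pairs."""
    adj = {}
    for h, r, t in triples:
        adj.setdefault(h, []).append((r, t))
    return adj

def sample_chain(adj, start, max_hops=3, rng=random):
    """Sample one entity-relation chain (e0, r1, e1, r2, e2, ...) by
    walking from `start`, stopping at a dead end or after max_hops.
    Chains like these would serve as input sequences to a Bi-LSTM."""
    chain = [start]
    current = start
    for _ in range(max_hops):
        if current not in adj:
            break
        r, t = rng.choice(adj[current])  # pick a random outgoing edge
        chain.extend([r, t])
        current = t
    return chain

# Toy knowledge graph (hypothetical example triples).
triples = [("paris", "capital_of", "france"),
           ("france", "in_continent", "europe"),
           ("europe", "has_currency", "euro")]
adj = build_adjacency(triples)
print(sample_chain(adj, "paris"))
# → ['paris', 'capital_of', 'france', 'in_continent', 'europe', 'has_currency', 'euro']
```

Because each chain interleaves entities and relations, a sequence model trained over many such walks sees each entity and relation in varied contexts, which is the contextual usage the embeddings are meant to capture.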