Representation learning of knowledge graphs aims to embed entities and relations into low-dimensional vectors. Most existing works consider only the direct relations or paths between an entity pair. We argue that such approaches break the semantic connection among the multiple relations linking an entity pair, and we propose ConvMR, a convolutional, multi-relational representation learning model. ConvMR addresses the multi-relation issue in two ways: (1) it encodes the multiple relations between an entity pair into a unified vector that preserves their semantic connection; (2) since not all relations are equally important when joined, it uses an attention-based relation encoder that automatically assigns weights to relations according to the semantic hierarchy. Experimental results on two popular datasets, FB15k-237 and WN18RR, show consistent improvements in mean rank. We also find that ConvMR handles less frequent entities effectively.
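To make the attention-based relation encoder concrete, the sketch below shows one plausible way to aggregate the embeddings of all relations between an entity pair into a single unified vector. This is a minimal illustration, not the authors' implementation: the scoring scheme (a dot product between each relation embedding and a query projected from the head entity) and all names are assumptions; the paper's exact weighting by semantic hierarchy may differ.

```python
import torch
import torch.nn as nn


class AttentionRelationEncoder(nn.Module):
    """Hypothetical attention-based relation encoder (illustrative only)."""

    def __init__(self, dim: int):
        super().__init__()
        # Projects the head entity into a query used to score each relation.
        self.query_proj = nn.Linear(dim, dim)

    def forward(self, head_emb: torch.Tensor, rel_embs: torch.Tensor) -> torch.Tensor:
        """
        head_emb: (dim,)           embedding of the head entity
        rel_embs: (num_rels, dim)  embeddings of all relations between the entity pair
        returns:  (dim,)           unified multi-relation vector
        """
        query = self.query_proj(head_emb)                 # (dim,)
        scores = rel_embs @ query                         # (num_rels,)
        weights = torch.softmax(scores, dim=0)            # attention over relations
        return (weights.unsqueeze(-1) * rel_embs).sum(0)  # weighted sum


# Usage: two relations between one entity pair, 50-dimensional embeddings.
encoder = AttentionRelationEncoder(dim=50)
head = torch.randn(50)
relations = torch.randn(2, 50)
unified = encoder(head, relations)  # single vector preserving both relations
```

The key point this sketch captures is that the aggregation is weighted rather than a plain sum, so relations judged less relevant for the entity pair contribute less to the unified vector.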