Transformers have achieved remarkable performance across a wide range of fields, including natural language processing, computer vision, and graph mining. However, vanilla Transformer architectures have not yielded promising improvements for Knowledge Graph (KG) representations, an area still dominated by the translational distance paradigm. Vanilla Transformer architectures struggle to capture the intrinsically heterogeneous semantic and structural information of knowledge graphs. To this end, we propose Relphormer, a new variant of the Transformer for knowledge graph representations. Specifically, we introduce Triple2Seq, which dynamically samples contextualized sub-graph sequences as input to alleviate the heterogeneity issue. We propose a novel structure-enhanced self-attention mechanism that encodes relational information while preserving the global semantic information shared among sub-graphs. Moreover, we propose masked knowledge modeling as a new paradigm for knowledge graph representation learning. We evaluate Relphormer on three tasks: knowledge graph completion, KG-based question answering, and KG-based recommendation. Experimental results show that Relphormer outperforms baselines on benchmark datasets. Code is available at https://github.com/zjunlp/Relphormer.
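As a rough illustration of the structure-enhanced self-attention idea, the sketch below adds a learned structural bias, derived from pairwise structural features of the sampled sub-graph sequence, to standard scaled dot-product attention scores. This is a minimal sketch under assumed shapes and a hypothetical scalar bias mapping, not the exact formulation used in Relphormer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StructureEnhancedSelfAttention(nn.Module):
    """Self-attention with an additive structural bias on the attention scores.

    The bias here is produced from a scalar structural feature per token pair
    (e.g. an adjacency indicator within the sampled sub-graph sequence); the
    actual encoding used in the paper may differ -- this is only a sketch.
    """

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qkv = nn.Linear(dim, 3 * dim)
        self.out = nn.Linear(dim, dim)
        # maps one structural feature per token pair to a per-head bias
        self.struct_bias = nn.Linear(1, num_heads)

    def forward(self, x: torch.Tensor, structure: torch.Tensor) -> torch.Tensor:
        # x:         (batch, seq_len, dim)      contextualized sub-graph sequence
        # structure: (batch, seq_len, seq_len)  pairwise structural features
        b, n, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = q.view(b, n, self.num_heads, self.head_dim).transpose(1, 2)
        k = k.view(b, n, self.num_heads, self.head_dim).transpose(1, 2)
        v = v.view(b, n, self.num_heads, self.head_dim).transpose(1, 2)

        scores = (q @ k.transpose(-2, -1)) / self.head_dim ** 0.5      # (b, h, n, n)
        bias = self.struct_bias(structure.unsqueeze(-1)).permute(0, 3, 1, 2)
        attn = F.softmax(scores + bias, dim=-1)                        # structure-enhanced
        out = (attn @ v).transpose(1, 2).reshape(b, n, -1)
        return self.out(out)


if __name__ == "__main__":
    x = torch.randn(2, 8, 64)                            # toy sub-graph sequence
    structure = torch.randint(0, 2, (2, 8, 8)).float()   # e.g. adjacency indicator
    layer = StructureEnhancedSelfAttention(dim=64)
    print(layer(x, structure).shape)                     # torch.Size([2, 8, 64])
```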