Transformers have achieved remarkable performance across a wide range of fields, including natural language processing, computer vision, and graph mining. However, vanilla Transformer architectures have not yielded promising improvements in Knowledge Graph (KG) representation, an area still dominated by the translational distance paradigm. Notably, vanilla Transformer architectures struggle to capture the intrinsically heterogeneous structural and semantic information of knowledge graphs. To this end, we propose a new Transformer variant for knowledge graph representation, dubbed Relphormer. Specifically, we introduce Triple2Seq, which dynamically samples contextualized sub-graph sequences as input to alleviate the heterogeneity issue. We further propose a novel structure-enhanced self-attention mechanism that encodes relational information while preserving the semantic information of entities and relations. Moreover, we utilize masked knowledge modeling for general knowledge graph representation learning, which can be applied to various KG-based tasks, including knowledge graph completion, question answering, and recommendation. Experimental results on six datasets show that Relphormer obtains better performance than baseline methods. Code is available at https://github.com/zjunlp/Relphormer.
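To make the structure-enhanced self-attention idea concrete, below is a minimal sketch (not the authors' exact formulation) assuming the structural information enters as an additive bias on the attention logits, derived here from the adjacency of a sampled contextualized sub-graph; all class and variable names are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StructureEnhancedSelfAttention(nn.Module):
    """Single-head self-attention over a contextualized sub-graph sequence,
    with an additive structural bias on the attention logits.
    Illustrative sketch only; details differ from the paper's implementation."""

    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, x: torch.Tensor, struct_bias: torch.Tensor) -> torch.Tensor:
        # x:           (seq_len, dim) embeddings of the entities/relations in the sampled sub-graph
        # struct_bias: (seq_len, seq_len) structural bias, e.g. 0 for connected node pairs and a
        #              large negative value for disconnected pairs, so attention follows the graph
        scores = (self.q(x) @ self.k(x).T) * self.scale + struct_bias
        attn = F.softmax(scores, dim=-1)
        return attn @ self.v(x)


# Toy usage: a sub-graph sequence of 5 nodes with a symmetric adjacency matrix
x = torch.randn(5, 64)
adj = torch.tensor([[1, 1, 0, 0, 1],
                    [1, 1, 1, 0, 0],
                    [0, 1, 1, 1, 0],
                    [0, 0, 1, 1, 1],
                    [1, 0, 0, 1, 1]], dtype=torch.float)
bias = torch.where(adj > 0, torch.zeros_like(adj), torch.full_like(adj, -1e9))
out = StructureEnhancedSelfAttention(64)(x, bias)
print(out.shape)  # torch.Size([5, 64])
```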