Graph Attention Networks (GATs) focus on modelling simple undirected, single-relational graph data. This limits their ability to handle more general and complex multi-relational graphs, which contain entities connected by directed links with different labels (e.g., knowledge graphs). Directly applying GAT to multi-relational graphs therefore leads to sub-optimal solutions. To tackle this issue, we propose r-GAT, a relational graph attention network that learns multi-channel entity representations, where each channel corresponds to a latent semantic aspect of an entity. This enables us to aggregate neighborhood information for the current aspect using relation features. We further propose a query-aware attention mechanism that selects the aspects useful for downstream tasks. Extensive experiments on link prediction and entity classification show that r-GAT models multi-relational graphs effectively, and case studies demonstrate the interpretability of our approach.
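To make the described mechanism concrete, below is a minimal sketch, not the authors' implementation, of the two ideas named in the abstract: per-channel neighborhood aggregation with relation features, and a query-aware readout over channels. All class and function names (`MultiChannelRelationalAttention`, `scatter_softmax`, `query_aware_readout`), the choice of `h_src + r` as the message, and all tensor shapes are illustrative assumptions.

```python
# Hypothetical sketch of multi-channel relational attention (not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

def scatter_softmax(scores, index, num_nodes):
    # scores: (E, K); index: (E,). Softmax over all edges sharing the same target index.
    scores = scores - scores.max()  # global shift for numerical stability
    exp = torch.exp(scores)
    denom = torch.zeros(num_nodes, scores.size(1), device=scores.device)
    denom.index_add_(0, index, exp)
    return exp / (denom[index] + 1e-16)

class MultiChannelRelationalAttention(nn.Module):
    """Each entity keeps K channel embeddings (latent semantic aspects); messages
    from neighbors are weighted per channel using relation-aware attention."""
    def __init__(self, num_entities, num_relations, dim, num_channels):
        super().__init__()
        self.K = num_channels
        self.ent = nn.Parameter(torch.randn(num_entities, num_channels, dim))
        self.rel = nn.Parameter(torch.randn(num_relations, dim))
        self.att = nn.Parameter(torch.randn(num_channels, 3 * dim))  # per-channel attention vector

    def forward(self, edge_index, edge_type):
        # edge_index: (2, E) source/target entity ids; edge_type: (E,) relation ids
        src, dst = edge_index
        h_src = self.ent[src]                                          # (E, K, dim)
        h_dst = self.ent[dst]                                          # (E, K, dim)
        r = self.rel[edge_type].unsqueeze(1).expand(-1, self.K, -1)    # (E, K, dim)
        # Unnormalized per-channel attention from entity and relation features.
        e = F.leaky_relu((torch.cat([h_src, r, h_dst], dim=-1) * self.att).sum(-1))  # (E, K)
        alpha = scatter_softmax(e, dst, self.ent.size(0))               # (E, K)
        # Aggregate relation-aware messages into each channel of the target entity.
        msg = alpha.unsqueeze(-1) * (h_src + r)                         # (E, K, dim)
        out = torch.zeros_like(self.ent)
        out.index_add_(0, dst, msg)
        return out                                                      # (num_entities, K, dim)

def query_aware_readout(channel_repr, query):
    # channel_repr: (K, dim) multi-channel representation of one entity;
    # query: (dim,), e.g. the relation embedding of a link-prediction query.
    # Attention over channels selects the aspects relevant to this query.
    w = torch.softmax(channel_repr @ query, dim=0)  # (K,)
    return (w.unsqueeze(-1) * channel_repr).sum(0)  # (dim,)
```

The intent of the sketch is only to show where the two attention stages act: the edge-level attention mixes neighbor and relation features within each channel, while the query-aware readout weights the channels themselves for a given downstream query.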