Graph neural networks (GNNs) have been widely used for representation learning on graphs and have achieved state-of-the-art performance in tasks such as node classification and link prediction. However, most existing GNNs are designed to learn node representations on fixed, homogeneous graphs. This limitation becomes especially problematic when learning representations on a misspecified graph or on a heterogeneous graph that consists of various types of nodes and edges. In this paper, we propose Graph Transformer Networks (GTNs), which are capable of generating new graph structures, identifying useful connections between nodes unconnected in the original graph, while learning effective node representations on the new graphs in an end-to-end fashion. The Graph Transformer layer, the core layer of GTNs, learns a soft selection of edge types and composite relations to generate useful multi-hop connections, so-called meta-paths. Our experiments show that GTNs learn new graph structures from data and tasks without domain knowledge, and yield powerful node representations via convolution on the new graphs. Without domain-specific graph preprocessing, GTNs achieved the best performance on all three benchmark node classification tasks against state-of-the-art methods that require pre-defined meta-paths from domain knowledge.
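The core mechanism described above, a soft selection of edge types whose composition yields meta-path adjacencies, can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function names, the toy heterogeneous graph (paper, author, conference nodes), and the use of fixed logits in place of learned parameters are all assumptions for exposition.

```python
import numpy as np

def soft_edge_selection(adj_stack, logits):
    """Convex (softmax-weighted) combination of edge-type adjacency matrices.

    adj_stack: (K, N, N) array, one adjacency matrix per edge type.
    logits:    (K,) scores; in a trained model these would be learned.
    Returns a single soft adjacency matrix of shape (N, N).
    """
    w = np.exp(logits - logits.max())
    w = w / w.sum()
    return np.tensordot(w, adj_stack, axes=1)

def meta_path_adjacency(adj_stack, logits_a, logits_b):
    """Compose two soft edge-type selections by matrix product,
    producing a soft adjacency for length-2 meta-paths."""
    q1 = soft_edge_selection(adj_stack, logits_a)
    q2 = soft_edge_selection(adj_stack, logits_b)
    return q1 @ q2

# Toy heterogeneous graph: node 0 = paper, 1 = author, 2 = conference.
# Edge type 0: paper -> author; edge type 1: author -> conference.
A = np.zeros((2, 3, 3))
A[0, 0, 1] = 1.0
A[1, 1, 2] = 1.0

# Logits strongly favoring type 0 then type 1 recover the
# paper -> author -> conference meta-path.
M = meta_path_adjacency(A, np.array([10.0, -10.0]), np.array([-10.0, 10.0]))
```

Here `M[0, 2]` is close to 1, i.e. the composed soft adjacency connects the paper to the conference even though no single edge type does. Stacking such layers and treating the logits as trainable weights is what lets the model discover multi-hop connections without pre-defined meta-paths.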