Graph attention networks (GATs) have been recognized as powerful tools for learning on graph-structured data. However, enabling the attention mechanisms in GATs to smoothly account for both structural and feature information remains challenging. In this paper, we propose Graph Joint Attention Networks (JATs) to address this challenge. Unlike previous attention-based graph neural networks (GNNs), JATs adopt novel joint attention mechanisms that automatically determine the relative significance between node features and structural coefficients learned from the graph topology when computing attention scores. JATs can therefore infer representations that capture more structural properties. Moreover, we theoretically analyze the expressive power of JATs and propose an improved strategy for the joint attention mechanisms that enables JATs to reach the upper bound of expressive power achievable by any message-passing GNN, i.e., the discriminative power of the 1-Weisfeiler-Lehman (1-WL) test. JATs can thereby be regarded as among the most powerful message-passing GNNs. The proposed neural architecture has been extensively evaluated on widely used benchmark datasets and compared with state-of-the-art GNNs on various downstream predictive tasks. Experimental results show that JATs achieve state-of-the-art performance on all test datasets.
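To make the joint attention idea concrete, the following is a minimal sketch, not the authors' exact formulation: it assumes a GAT-style feature score mixed with a precomputed structural coefficient (here taken from the symmetrically normalized adjacency) through a hypothetical learnable balance parameter `beta` that plays the role of the paper's automatically determined relative significance.

```python
# Minimal sketch of a "joint attention" layer (assumed formulation,
# not the paper's exact method): attention logits blend a GAT-style
# feature score with a structural coefficient via a learned weight.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointAttentionLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a = nn.Parameter(torch.empty(2 * out_dim))
        nn.init.xavier_uniform_(self.a.view(1, -1))
        # Hypothetical scalar trading off features vs. structure;
        # the paper learns this relative significance automatically.
        self.beta = nn.Parameter(torch.tensor(0.5))

    def forward(self, x, adj):
        # x: (N, in_dim) node features; adj: (N, N) dense 0/1 adjacency,
        # assumed to already include self-loops.
        h = self.W(x)                                   # (N, out_dim)
        N = h.size(0)
        # Feature-based attention logits, as in GAT: a^T [Wh_i || Wh_j].
        e = torch.cat([h.repeat_interleave(N, 0),
                       h.repeat(N, 1)], dim=1) @ self.a
        e = F.leaky_relu(e.view(N, N), 0.2)
        # Structural coefficients from D^{-1/2} A D^{-1/2} (one simple choice).
        deg = adj.sum(1).clamp(min=1)
        c = adj / torch.sqrt(deg.unsqueeze(1) * deg.unsqueeze(0))
        # Joint score: learned mix of feature and structural evidence.
        score = self.beta * e + (1.0 - self.beta) * c
        score = score.masked_fill(adj == 0, float("-inf"))
        alpha = torch.softmax(score, dim=1)             # attention weights
        return alpha @ h                                # aggregated features
```

Because the balance parameter sits inside the softmax logits, gradients from the downstream task tune the feature/structure trade-off end to end, which is the design intent the abstract describes; the specific mixing rule and structural coefficients above are illustrative assumptions.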