Graph Attention Networks (GATs) have been intensively studied and widely used in graph data learning tasks. Existing GATs generally adopt the self-attention mechanism for graph edge attention learning, which requires expensive computation. It is known that Spiking Neural Networks (SNNs) can perform inexpensive computation by transforming input signals into discrete spike trains, and can also return sparse outputs. Inspired by these merits of SNNs, in this work we propose a novel Graph Spiking Attention Network (GSAT) for graph data representation and learning. In contrast to the self-attention mechanism in existing GATs, the proposed GSAT adopts an SNN module architecture, which is clearly more energy-efficient. Moreover, GSAT naturally returns sparse attention coefficients and can thus perform feature aggregation over selected neighbors, which makes GSAT robust to graph edge noise. Experimental results on several datasets demonstrate the effectiveness, energy efficiency, and robustness of the proposed GSAT model.
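To make the idea concrete, the following is a minimal, hypothetical sketch of how spike rates could replace softmax scores as edge attention coefficients. All names, shapes, and the simple integrate-and-fire dynamics here are illustrative assumptions, not the paper's actual model or API.

```python
import numpy as np

def spiking_attention(h, edges, W, a, T=8, v_th=1.0):
    """Sketch: edge attention coefficients from spike rates.

    h: (N, F) node features; edges: list of (i, j) pairs;
    W: (F, F') projection; a: (2*F',) scoring vector.
    All names and shapes are illustrative assumptions.
    """
    z = h @ W                                       # projected features
    N = h.shape[0]
    alpha = np.zeros((N, N))
    for i, j in edges:
        score = np.concatenate([z[i], z[j]]) @ a    # raw edge score
        # Integrate-and-fire over T timesteps: the spike rate stands in
        # for the softmax of self-attention. Edges with low scores emit
        # no spikes at all, giving naturally sparse coefficients.
        v, spikes = 0.0, 0
        for _ in range(T):
            v += score                              # integrate input
            if v >= v_th:                           # fire on threshold
                spikes += 1
                v -= v_th                           # soft reset
        alpha[i, j] = spikes / T
    # Normalize each row over the neighbors that actually spiked
    row = alpha.sum(axis=1, keepdims=True)
    alpha = np.where(row > 0, alpha / np.maximum(row, 1e-12), 0.0)
    return alpha @ z                                # aggregate neighbors
```

Because zero-rate edges drop out of the aggregation entirely, noisy edges with low scores are simply ignored, which is one way to read the robustness claim above.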