Recent years have seen a surge in research on dynamic graph representation learning, which aims to model temporal graphs that evolve constantly over time. However, current work typically models graph dynamics with recurrent neural networks (RNNs), which incur severe computation and memory overheads on large temporal graphs. The scalability of dynamic graph representation learning on large temporal graphs thus remains one of the major challenges. In this paper, we present a scalable framework, namely SpikeNet, to efficiently capture the temporal and structural patterns of temporal graphs. We explore a new direction: capturing the evolving dynamics of temporal graphs with spiking neural networks (SNNs) instead of RNNs. As a low-power alternative to RNNs, SNNs explicitly model graph dynamics as spike trains of neuron populations and enable efficient spike-based propagation. Experiments on three large real-world temporal graph datasets demonstrate that SpikeNet outperforms strong baselines on the temporal node classification task with lower computational costs. In particular, SpikeNet generalizes to a large temporal graph (2M nodes and 13M edges) with significantly fewer parameters and lower computational overhead. Our code is publicly available at https://github.com/EdisonLeeeee/SpikeNet
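To make the spike-based mechanism concrete, below is a minimal, illustrative sketch (not the authors' implementation; see the linked repository for that) of a discrete-time leaky integrate-and-fire (LIF) layer, the standard SNN neuron model. It turns a sequence of per-snapshot node features into binary spike trains by integrating inputs into a membrane potential and firing when a threshold is crossed. PyTorch and the hyperparameter names `tau`, `v_threshold`, and `v_reset` are assumptions for this sketch.

```python
import torch

class LIF(torch.nn.Module):
    """Illustrative LIF layer (sketch, not SpikeNet's actual code)."""

    def __init__(self, tau: float = 2.0, v_threshold: float = 1.0,
                 v_reset: float = 0.0):
        super().__init__()
        self.tau = tau                  # membrane time constant (controls leak)
        self.v_threshold = v_threshold  # firing threshold
        self.v_reset = v_reset          # potential after a spike

    def forward(self, inputs: torch.Tensor) -> torch.Tensor:
        # inputs: [T, N, F] -- aggregated features for N nodes over T snapshots
        v = torch.zeros_like(inputs[0])  # membrane potential, [N, F]
        spikes = []
        for x_t in inputs:
            # leaky integration of the current snapshot's input
            v = v + (x_t - (v - self.v_reset)) / self.tau
            # emit a binary spike wherever the potential crosses the threshold
            # (in training, the hard threshold needs a surrogate gradient)
            s_t = (v >= self.v_threshold).float()
            # hard reset of the potential at positions that fired
            v = v * (1.0 - s_t) + self.v_reset * s_t
            spikes.append(s_t)
        return torch.stack(spikes)       # spike trains, [T, N, F]
```

Because downstream layers only see 0/1 spike activations rather than dense recurrent states, propagation reduces largely to sparse additions, which is the source of the efficiency gains the abstract claims over RNN-based dynamics.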