Recurrent neural networks (RNNs) and self-attention mechanisms (SAMs) are the de facto methods for extracting spatial-temporal information in temporal graph learning. Interestingly, we find that although both RNN and SAM can lead to good performance, in practice neither is always necessary. In this paper, we propose GraphMixer, a conceptually and technically simple architecture that consists of three components: (1) a link-encoder based only on multi-layer perceptrons (MLPs) that summarizes the information from temporal links, (2) a node-encoder based only on neighbor mean-pooling that summarizes node information, and (3) an MLP-based link classifier that performs link prediction from the outputs of the two encoders. Despite its simplicity, GraphMixer attains outstanding performance on temporal link prediction benchmarks, with faster convergence and better generalization. These results motivate us to rethink the importance of simpler model architectures.
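To make the three-component design concrete, the following is a minimal PyTorch sketch of the architecture described above. It is not the authors' implementation: the module names (`GraphMixerSketch`, `MLP`), the tensor shapes, the hidden dimensions, and the use of a simple two-layer MLP with mean-averaging over sampled links are all assumptions made for illustration.

```python
import torch
import torch.nn as nn


class MLP(nn.Module):
    """Two-layer perceptron, shared form for the link-encoder and the classifier."""

    def __init__(self, dim_in, dim_hidden, dim_out):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_in, dim_hidden),
            nn.ReLU(),
            nn.Linear(dim_hidden, dim_out),
        )

    def forward(self, x):
        return self.net(x)


class GraphMixerSketch(nn.Module):
    """Hypothetical sketch of GraphMixer's three components:
    (1) MLP-only link-encoder, (2) mean-pooling node-encoder,
    (3) MLP-based link classifier. Shapes and dimensions are illustrative."""

    def __init__(self, link_feat_dim, node_feat_dim, hidden_dim):
        super().__init__()
        self.link_encoder = MLP(link_feat_dim, hidden_dim, hidden_dim)
        # Classifier sees [link-encoding, node-encoding] for both endpoints.
        self.classifier = MLP(2 * (hidden_dim + node_feat_dim), hidden_dim, 1)

    def forward(self, link_feats_src, link_feats_dst, nbr_feats_src, nbr_feats_dst):
        # (1) Link-encoder: an MLP summarizes each node's recent temporal links,
        #     then we average over the K sampled links.
        #     link_feats_*: [B, K, link_feat_dim]
        h_link_src = self.link_encoder(link_feats_src).mean(dim=1)  # [B, hidden_dim]
        h_link_dst = self.link_encoder(link_feats_dst).mean(dim=1)

        # (2) Node-encoder: plain mean-pooling over neighbor node features.
        #     nbr_feats_*: [B, K, node_feat_dim]
        h_node_src = nbr_feats_src.mean(dim=1)  # [B, node_feat_dim]
        h_node_dst = nbr_feats_dst.mean(dim=1)

        # (3) Link classifier: predict link existence from the concatenated encodings.
        z = torch.cat([h_link_src, h_node_src, h_link_dst, h_node_dst], dim=-1)
        return self.classifier(z).squeeze(-1)  # logits, shape [B]
```

Note that no recurrence and no attention appears anywhere in this sketch: the temporal information enters only through the per-link features consumed by the MLP link-encoder, which is the point the abstract emphasizes.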