Graph neural networks (GNNs) have emerged as a powerful family of representation learning models on graphs. To derive node representations, they apply a global model that recursively aggregates information from neighboring nodes. However, different nodes reside in different parts of the graph with different local contexts, so their distributions vary across the graph. Ideally, how a node receives its neighborhood information should be a function of its local context, departing from the global GNN model shared by all nodes. To exploit node locality without overfitting, we propose a node-wise localization of GNNs that accounts for both global and local aspects of the graph. Globally, all nodes depend on an underlying global GNN that encodes the general patterns across the graph; locally, each node is localized into a unique model as a function of the global model and its local context. Finally, we conduct extensive experiments on four benchmark graphs and consistently achieve promising performance surpassing state-of-the-art GNNs.
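To make the idea concrete, the following is a minimal sketch of one GNN layer with node-wise localization. It assumes a FiLM-style localization: a globally shared weight matrix `W` is modulated per node by a scale `a_v` and shift `b_v` computed from that node's local context (here, the mean of its neighbors' features). The localization maps `U_a` and `U_b` and the aggregation rule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def localized_gnn_layer(X, A, W, U_a, U_b):
    """One GNN layer with node-wise localization (illustrative sketch).

    X:   (n, d) node features;  A: (n, n) adjacency matrix
    W:   (d, d_out) globally shared weights
    U_a: (d, 1), U_b: (d, d_out) hypothetical localization maps that
         turn each node's local context into a scale and a shift.
    """
    deg = A.sum(axis=1, keepdims=True)           # node degrees
    ctx = (A @ X) / np.maximum(deg, 1.0)         # local context: mean neighbor feature
    a = 1.0 + ctx @ U_a                          # (n, 1) node-wise scale of the global model
    b = ctx @ U_b                                # (n, d_out) node-wise shift
    agg = (A @ X + X) / np.maximum(deg + 1, 1.0) # aggregate self + neighbors
    H = a * (agg @ W) + b                        # each node uses its own localized model
    return np.maximum(H, 0.0)                    # ReLU

# toy graph: a 3-node path 0-1-2
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = np.eye(3)
W = np.eye(3)
U_a = np.full((3, 1), 0.1)   # assumed localization parameters
U_b = np.zeros((3, 3))
H = localized_gnn_layer(X, A, W, U_a, U_b)
```

Note how nodes with different neighborhoods obtain different effective weights `a_v * W`, while all of them still share the single underlying `W`, which is what keeps the localization from overfitting.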