While Graph Neural Networks (GNNs) are powerful models for learning representations on graphs, most state-of-the-art models gain little accuracy beyond two to three layers. Deep GNNs fundamentally need to address (1) the expressivity challenge due to oversmoothing, and (2) the computation challenge due to neighborhood explosion. We propose a simple "deep GNN, shallow sampler" design principle to improve both GNN accuracy and efficiency: to generate the representation of a target node, we use a deep GNN to pass messages only within a shallow, localized subgraph. A properly sampled subgraph may exclude irrelevant or even noisy nodes while still preserving the critical neighbor features and graph structures. The deep GNN then smooths the informative local signals to enhance feature learning, rather than oversmoothing the global graph signals into mere "white noise". We theoretically justify why combining deep GNNs with shallow samplers yields the best learning performance. We then propose various sampling algorithms and neural architecture extensions to achieve good empirical results. Experiments on five large graphs show that our models achieve significantly higher accuracy and efficiency than state-of-the-art baselines.
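To make the design principle concrete, here is a minimal sketch, not the paper's actual samplers or architectures: it assumes PyTorch Geometric (its k_hop_subgraph and GCNConv utilities) and pairs an illustrative 2-hop sampler with a 5-layer GNN; the class name DeepGNNOnShallowSubgraph and all hyperparameters are hypothetical choices for illustration.

```python
# A minimal sketch of "deep GNN, shallow sampler" (assumptions: PyTorch
# Geometric is available; sampler depth and layer count are illustrative).
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv
from torch_geometric.utils import k_hop_subgraph

class DeepGNNOnShallowSubgraph(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim, out_dim, num_layers=5):
        super().__init__()
        self.convs = torch.nn.ModuleList([GCNConv(in_dim, hidden_dim)])
        for _ in range(num_layers - 2):
            self.convs.append(GCNConv(hidden_dim, hidden_dim))
        self.convs.append(GCNConv(hidden_dim, out_dim))

    def forward(self, x, edge_index, target):
        # Shallow sampler: keep only the 2-hop neighborhood of the target.
        subset, sub_edge_index, mapping, _ = k_hop_subgraph(
            target, num_hops=2, edge_index=edge_index, relabel_nodes=True)
        h = x[subset]
        # Deep GNN: message-passing depth (5) exceeds the subgraph radius (2),
        # so extra layers re-smooth signals within the local subgraph only.
        for conv in self.convs[:-1]:
            h = F.relu(conv(h, sub_edge_index))
        h = self.convs[-1](h, sub_edge_index)
        return h[mapping]  # representation of the target node

# Toy usage on a random graph with 10 nodes and 16-dim features.
x = torch.randn(10, 16)
edge_index = torch.randint(0, 10, (2, 40))
model = DeepGNNOnShallowSubgraph(in_dim=16, hidden_dim=32, out_dim=7)
z = model(x, edge_index, target=3)  # embedding of node 3
```

Because message passing is confined to the relabeled subgraph, additional layers re-mix the same local neighborhood instead of pulling in exponentially more nodes, which captures the intuition of decoupling GNN depth from sampling depth.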