Graph Neural Networks (GNNs) are a promising deep learning approach for addressing many real-world problems on graph-structured data. However, these models usually suffer from at least one of four fundamental limitations: over-smoothing, over-fitting, difficulty of training, and a strong homophily assumption. For example, Simple Graph Convolution (SGC) is known to suffer from the first and fourth limitations. To tackle these limitations, we identify a set of key designs, including (D1) dilated convolution, (D2) multi-channel learning, (D3) a self-attention score, and (D4) a sign factor, that boost learning from networks of different types (i.e., homophilous and heterophilous) and scales (i.e., small, medium, and large), and combine them into a graph neural network, GPNet, a simple and efficient one-layer model. We theoretically analyze the model and show that it can approximate various graph filters by adjusting the self-attention score and sign factor. Experiments show that GPNet consistently outperforms baselines in terms of average rank, average accuracy, complexity, and parameter count on semi-supervised and fully supervised tasks, and achieves competitive performance compared to state-of-the-art models on the inductive learning task.