Graph convolutional neural networks (GCNNs) have recently received much attention, owing to their capability of handling graph-structured data. Many existing GCNNs can be viewed as instances of a neural message passing motif: node features are passed to neighbors, aggregated, and transformed to produce better node representations. Nevertheless, these methods seldom use node transition probabilities, a measure that has proven useful in exploring graphs. Furthermore, when transition probabilities are used, the transition direction is often considered improperly in the feature aggregation step, resulting in an inefficient weighting scheme. In addition, although a large number of increasingly complex GCNN models have been introduced, GCNNs often suffer from over-fitting when trained on small graphs. Another issue is over-smoothing, which tends to make node representations indistinguishable. This work presents a new method that improves the message passing process by using node transition probabilities with the transition direction properly taken into account, yielding a better feature-aggregation weighting scheme than its existing counterpart. Moreover, we propose a novel regularization method, termed DropNode, that addresses over-fitting and over-smoothing simultaneously. DropNode randomly discards part of a graph, creating multiple deformed versions of the graph and thereby acting as a data augmentation regularizer. Additionally, DropNode reduces the connectivity of the graph, mitigating over-smoothing in deep GCNNs. Extensive experiments on eight benchmark datasets for node and graph classification tasks demonstrate the effectiveness of the proposed methods in comparison with the state of the art.
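To make the directional weighting idea concrete, the following is a minimal sketch (in PyTorch, with illustrative names; not the paper's implementation). It assumes a dense adjacency matrix and interprets "properly considering the transition direction" as normalizing each edge weight by the degree of the source node, so that node i aggregates a neighbor j's features in proportion to the probability of transitioning from j to i.

```python
import torch

def directional_aggregate(x, adj):
    """Aggregate neighbor features weighted by transition probabilities
    taken in the direction neighbor -> target node (an assumed reading
    of the abstract, not the authors' exact formulation).

    x:   (N, F) node feature matrix
    adj: (N, N) dense adjacency matrix

    p(j -> i) = adj[i, j] / deg(j): the adjacency is normalized
    column-wise by the degree of the source node j, instead of the
    common row-wise normalization adj[i, j] / deg(i).
    """
    deg = adj.sum(dim=0).clamp(min=1.0)  # degree of each source node j
    p = adj / deg                        # column-stochastic transition matrix
    return p @ x                         # row i: sum_j p(j -> i) * x_j
```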
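Likewise, a minimal sketch of a node-dropping regularizer in the spirit of DropNode, under the same assumptions (dense adjacency, illustrative function and parameter names such as `drop_node` and `drop_rate`; the authors' actual implementation may differ):

```python
import torch

def drop_node(x, adj, drop_rate=0.1, training=True):
    """Randomly discard a fraction of nodes, returning a deformed
    sub-graph. Repeated sampling during training yields multiple
    deformed versions of the graph (a data-augmentation effect) and
    reduces graph connectivity.

    x:   (N, F) node feature matrix
    adj: (N, N) dense adjacency matrix
    """
    if not training or drop_rate == 0.0:
        return x, adj                    # identity at test time, like dropout
    n = x.size(0)
    keep = torch.rand(n, device=x.device) >= drop_rate  # Bernoulli keep mask
    if keep.sum() == 0:                  # guard: never drop every node
        keep[torch.randint(n, (1,))] = True
    idx = keep.nonzero(as_tuple=True)[0]
    return x[idx], adj[idx][:, idx]      # features and adjacency of kept nodes
```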