Existing Graph Convolutional Networks (GCNs) are shallow: the number of layers is usually no larger than two. Deeper variants built by simply stacking more layers unfortunately perform worse, even with well-known tricks such as weight decay, dropout, and residual connections. This paper reveals that developing deep GCNs faces two main obstacles: \emph{over-fitting} and \emph{over-smoothing}. Over-fitting weakens generalization on small graphs, while over-smoothing impedes training by isolating output representations from the input features as network depth increases. Hence, we propose DropEdge, a novel technique to alleviate both issues. At its core, DropEdge randomly removes a certain number of edges from the input graph, acting as both a data augmenter and a message-passing reducer. More importantly, DropEdge enables us to recast a wider range of Convolutional Neural Networks (CNNs) from the image field to the graph domain; in particular, we study DenseNet and InceptionNet in this paper. Extensive experiments on several benchmarks demonstrate that our method allows deep GCNs to achieve promising performance even when the number of layers exceeds 30, the deepest GCN ever proposed.
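To make the core operation concrete, the following is a minimal sketch of the edge-dropping step in Python with NumPy. The function name \texttt{drop\_edge}, the drop rate \texttt{p}, and the undirected edge-list representation are illustrative assumptions rather than the paper's reference implementation.

\begin{verbatim}
import numpy as np

def drop_edge(edge_index: np.ndarray, p: float,
              rng: np.random.Generator) -> np.ndarray:
    """Randomly remove a fraction p of undirected edges.

    edge_index: (2, E) array with one column per undirected
    edge (i, j); both directions are restored after sampling
    so the adjacency used for message passing stays symmetric.
    """
    keep = rng.random(edge_index.shape[1]) >= p   # Bernoulli mask per edge
    kept = edge_index[:, keep]                    # surviving edges
    # Re-add the reverse direction (j, i) of each surviving edge.
    return np.concatenate([kept, kept[::-1]], axis=1)

# A fresh subgraph is sampled at each training epoch, so every
# epoch trains on a different random graph; the full graph is
# used at inference, analogously to dropout.
rng = np.random.default_rng(0)
edges = np.array([[0, 0, 1, 2],
                  [1, 2, 3, 3]])                  # 4 edges on 4 nodes
augmented = drop_edge(edges, p=0.5, rng=rng)
\end{verbatim}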