We propose Graph Tree Networks (GTNets), a deep graph learning architecture with a new general message passing scheme that originates from the tree representation of graphs. In the tree representation, messages propagate upward from the leaf nodes to the root node, and each node preserves its initial information before receiving information from its child nodes (neighbors). Following this nature of message passing in the tree, we formulate a general propagation rule that updates a node's feature by aggregating its initial feature with its neighbor nodes' updated features. Two graph representation learning models are proposed within the GTNet architecture, the Graph Tree Attention Network (GTAN) and the Graph Tree Convolution Network (GTCN), both of which achieve state-of-the-art performance on several popular benchmark datasets. Unlike the vanilla Graph Attention Network (GAT) and Graph Convolution Network (GCN), which suffer from the over-smoothing issue, the proposed GTAN and GTCN models can go deep, as demonstrated by comprehensive experiments and rigorous theoretical analysis.
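To make the tree-style propagation rule concrete, below is a minimal sketch of the idea that each layer combines a node's preserved initial feature with its neighbors' updated features. It assumes a dense row-normalized adjacency matrix and uses illustrative names (GraphTreeMessagePassing, w_neigh) and ReLU/linear choices that are not taken from the paper; it is not the exact GTAN or GTCN layer.

```python
# Schematic sketch of the general GTNet-style propagation rule (illustrative only).
import torch
import torch.nn as nn

class GraphTreeMessagePassing(nn.Module):
    """At every layer, each node aggregates its neighbors' updated features
    and combines them with its own *initial* feature, mirroring how messages
    flow upward in the tree representation while each node keeps its
    pre-propagation information."""
    def __init__(self, in_dim, hid_dim, num_layers):
        super().__init__()
        self.proj = nn.Linear(in_dim, hid_dim)      # map raw features to hidden space
        self.w_neigh = nn.Linear(hid_dim, hid_dim)  # transform aggregated neighbor messages
        self.num_layers = num_layers

    def forward(self, x, adj_norm):
        # x: (N, in_dim) node features; adj_norm: (N, N) row-normalized adjacency
        h0 = torch.relu(self.proj(x))  # initial hidden feature, preserved at every step
        h = h0
        for _ in range(self.num_layers):
            neigh = adj_norm @ h                      # aggregate neighbors' updated features
            h = torch.relu(self.w_neigh(neigh) + h0)  # combine with the node's initial feature
        return h
```

Because every layer re-injects the initial feature h0 rather than repeatedly smoothing the previous layer's output alone, stacking more layers in this scheme does not collapse node representations the way deep vanilla GCN/GAT stacks do, which is the intuition behind the depth results claimed above.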