We introduce a framework for learning from multiple generated graph views, named graph symbiosis learning (GraphSym). In GraphSym, graph neural networks (GNNs) trained on multiple generated graph views adaptively exchange parameters with each other and fuse the information stored in linkage structures and node features. Specifically, we propose a novel adaptive exchange method that iteratively substitutes redundant channels in the weight matrix of one GNN with informative channels of another GNN in a layer-by-layer manner. GraphSym does not rely on specific methods for generating multiple graph views or on particular GNN architectures, so existing GNNs can be seamlessly integrated into our framework. On 3 semi-supervised node classification datasets, GraphSym outperforms previous single-graph and multiple-graph GNNs without relying on knowledge distillation and achieves new state-of-the-art results. We also conduct a series of experiments on 15 public benchmarks, 8 popular GNN models, and 3 graph tasks -- node classification, graph classification, and edge prediction -- and show that GraphSym consistently outperforms existing popular GNNs and their ensembles by 1.9\%$\sim$3.9\% on average. Extensive ablation studies and experiments in the few-shot setting further demonstrate the effectiveness of GraphSym.
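To make the layer-by-layer channel exchange concrete, the following is a minimal PyTorch-style sketch, not the paper's actual implementation; the informativeness score (here an L1-norm proxy), the function names, and the assumption that each GNN exposes a `layers` list with matching `weight` shapes are all illustrative assumptions.

```python
import torch

def exchange_channels(weight_a: torch.Tensor, weight_b: torch.Tensor, k: int) -> torch.Tensor:
    """Replace the k least-informative output channels of weight_a with the
    k most-informative channels of weight_b.
    Assumes both weights have shape [out_channels, in_channels]."""
    # Hypothetical informativeness proxy: L1 norm of each output channel.
    score_a = weight_a.abs().sum(dim=1)
    score_b = weight_b.abs().sum(dim=1)

    # Indices of the k weakest channels in A and the k strongest channels in B.
    weak_a = torch.topk(score_a, k, largest=False).indices
    strong_b = torch.topk(score_b, k, largest=True).indices

    # Substitute redundant channels of A with informative channels of B.
    new_weight = weight_a.clone()
    new_weight[weak_a] = weight_b[strong_b]
    return new_weight


def symbiosis_step(gnn_a, gnn_b, k: int = 4):
    """Apply the channel exchange layer by layer between two GNNs whose
    corresponding layers expose `.weight` parameters of matching shape."""
    with torch.no_grad():
        for layer_a, layer_b in zip(gnn_a.layers, gnn_b.layers):
            layer_a.weight.copy_(exchange_channels(layer_a.weight, layer_b.weight, k))
```

In this sketch, the exchange is one-directional (from `gnn_b` into `gnn_a`) and the number of exchanged channels `k` is fixed; the adaptive variant described above would instead select the substituted channels iteratively during training.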