Graph pooling has been increasingly considered for graph neural networks (GNNs) to facilitate hierarchical graph representation learning. Existing graph pooling methods commonly consist of two stages: selecting the top-ranked nodes and removing the remaining nodes to construct a coarsened graph representation. However, the local structural information of the removed nodes is inevitably dropped in these methods, due to the inherent coupling of nodes (locations) and their features (signals). In this paper, we propose an enhanced three-stage method via lifting, named LiftPool, to improve hierarchical graph representation by maximally preserving local structural information in graph pooling. LiftPool introduces an additional graph lifting stage before graph coarsening to preserve the local information of the removed nodes and to decouple the processes of node removal and feature reduction. Specifically, for each node to be removed, its local information is obtained by subtracting the global information aggregated from its neighboring preserved nodes. Subsequently, this local information is aligned and propagated to the preserved nodes to alleviate information loss in graph coarsening. Furthermore, we demonstrate that the proposed LiftPool is localized and permutation-invariant. The proposed graph lifting structure is general and can be integrated with existing downsampling-based graph pooling methods. Evaluations on benchmark graph datasets show that LiftPool substantially outperforms state-of-the-art graph pooling methods on the task of graph classification.
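The lifting step described above can be sketched as follows. This is a hypothetical, minimal NumPy illustration, not the paper's implementation: the averaging normalizations and the propagation rule are assumptions made here only to show the two-step idea (subtract aggregated neighbor information, then propagate the residual back to preserved nodes).

```python
import numpy as np

def lift_pool(X, A, preserved, removed):
    """Illustrative lifting before coarsening (assumed normalizations).

    Step 1: for each removed node, compute its local detail as its
    feature minus the average feature of its preserved neighbors.
    Step 2: propagate this local detail back to the preserved nodes,
    so the coarsened graph retains information from removed nodes.
    """
    X = X.astype(float).copy()
    # Adjacency restricted to edges between removed and preserved nodes.
    A_rp = A[np.ix_(removed, preserved)].astype(float)

    # Row-normalize so each removed node averages its preserved neighbors.
    deg = A_rp.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0
    # Local detail = removed-node feature - aggregate of preserved neighbors.
    local = X[removed] - (A_rp / deg) @ X[preserved]

    # Column-normalize so each preserved node averages incoming details.
    col_deg = A_rp.sum(axis=0, keepdims=True)
    col_deg[col_deg == 0] = 1.0
    # Propagate the local detail back to the preserved nodes.
    X[preserved] += (A_rp / col_deg).T @ local

    # The coarsened graph keeps only the preserved nodes.
    return X[preserved]
```

For example, on a 4-node path graph 0-1-2-3 with scalar features [1, 2, 3, 4], preserving nodes {0, 2} and removing {1, 3}: node 1's local detail is 2 - (1+3)/2 = 0 and node 3's is 4 - 3 = 1, so node 2's pooled feature becomes 3.5 instead of 3, retaining a trace of the removed node 3.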