Graph neural networks (GNNs) have proven mature enough for handling graph-structured data in node-level representation learning tasks. However, graph pooling techniques for learning expressive graph-level representations remain critical yet challenging. Existing pooling methods either struggle to capture local substructures or fail to effectively utilize high-order dependencies, which diminishes their expressive capability. In this paper, we propose HAP, a hierarchical graph-level representation learning framework that is adaptively sensitive to graph structures, i.e., HAP clusters local substructures while incorporating high-order dependencies. HAP utilizes a novel cross-level attention mechanism, MOA, to naturally focus more on the close neighborhood while effectively capturing higher-order dependencies that may contain crucial information. It also learns a global graph content, GCont, that extracts graph pattern properties to keep the pre- and post-coarsening graph content stable, thus providing global guidance during graph coarsening. This innovation also facilitates generalization across graphs with the same form of features. Extensive experiments on fourteen datasets show that HAP significantly outperforms twelve popular graph pooling methods on graph classification tasks with a maximum accuracy improvement of 22.79%, and exceeds the performance of state-of-the-art graph matching and graph similarity learning algorithms by over 3.5% and 16.7%, respectively.
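To make the general idea of attention-driven graph coarsening concrete, the following is a minimal sketch, not the paper's MOA mechanism or GCont formulation: it assumes a standard dot-product attention between a set of learned cluster queries (a hypothetical stand-in for global guidance) and node features, producing a soft node-to-cluster assignment that coarsens both features and adjacency.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionPooling(nn.Module):
    """Illustrative attention-based graph coarsening (not the paper's MOA).

    A graph with n nodes is coarsened to k cluster nodes via a soft
    assignment computed by attention between learned cluster queries
    and node features.
    """

    def __init__(self, in_dim: int, num_clusters: int):
        super().__init__()
        # Learned cluster queries; a hypothetical placeholder for the
        # global guidance role played by GCont in the paper.
        self.cluster_queries = nn.Parameter(torch.randn(num_clusters, in_dim))

    def forward(self, x: torch.Tensor, adj: torch.Tensor):
        # x: (n, d) node features; adj: (n, n) dense adjacency matrix.
        scores = self.cluster_queries @ x.t()      # (k, n) attention scores
        s = F.softmax(scores, dim=-1)              # each cluster attends over nodes
        x_coarse = s @ x                           # (k, d) coarsened node features
        adj_coarse = s @ adj @ s.t()               # (k, k) coarsened adjacency
        return x_coarse, adj_coarse


if __name__ == "__main__":
    n, d, k = 6, 8, 2
    x = torch.randn(n, d)
    adj = (torch.rand(n, n) > 0.5).float()
    adj = ((adj + adj.t()) > 0).float()            # symmetrize adjacency
    pool = AttentionPooling(d, k)
    x_c, adj_c = pool(x, adj)
    print(x_c.shape, adj_c.shape)                  # (2, 8) and (2, 2)
```

Stacking such a pooling layer hierarchically, so that each level's coarsened graph feeds the next, mirrors the hierarchical coarsening setting the framework operates in, though the actual cross-level attention and content-preservation objectives are specific to HAP.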