Graph-level representation learning is the pivotal step for downstream tasks that operate on the whole graph. The most common approach to this problem is graph pooling, where node features are averaged or summed to obtain a graph representation. However, pooling operations such as averaging or summing inevitably discard a large amount of information, which can severely degrade final performance. In this paper, we argue that what is crucial to graph-level downstream tasks includes not only the topological structure but also the distribution from which the nodes are sampled. Therefore, powered by existing Graph Neural Networks (GNNs), we propose a new plug-and-play pooling module, termed Distribution Knowledge Embedding (DKEPool), in which graphs are recast as distributions on top of GNN node features, and the pooling goal is to summarize the entire distribution rather than retain a single feature vector produced by a simple predefined pooling operation. A DKEPool network in effect decomposes representation learning into two stages: structure learning and distribution learning. Structure learning follows a recursive neighborhood-aggregation scheme to update node features, capturing structural information. Distribution learning, in contrast, disregards node interconnections and focuses on the distribution depicted by all the nodes. Extensive experiments demonstrate that the proposed DKEPool significantly and consistently outperforms state-of-the-art methods.
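For intuition, the sketch below illustrates the two stages described above: a GCN-style neighborhood-aggregation layer for structure learning, followed by a pooling step that summarizes the node-feature distribution by its first two moments. The Gaussian (mean plus covariance) summary is an illustrative assumption for the distribution embedding, not the paper's exact DKEPool operator.

```python
import numpy as np

def gcn_layer(A, H, W):
    # Structure learning: one round of normalized neighborhood aggregation
    # (a standard GCN-style update), H' = ReLU(D^-1/2 (A+I) D^-1/2 H W).
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

def distribution_pool(H):
    # Distribution learning: summarize the node-feature distribution by its
    # mean and covariance instead of a plain mean/sum, so second-order
    # spread information is retained in the graph representation.
    # (Gaussian summary assumed here for illustration.)
    mu = H.mean(axis=0)
    Sigma = np.cov(H, rowvar=False)
    return np.concatenate([mu, Sigma.reshape(-1)])

# Toy graph: 4 nodes with 3-dimensional features.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
H = rng.normal(size=(4, 3))
W = rng.normal(size=(3, 3))

g = distribution_pool(gcn_layer(A, H, W))
print(g.shape)  # (12,) = 3 mean entries + 9 covariance entries
```

Note how, unlike mean pooling alone, two graphs whose node features share the same average but differ in spread yield different representations here, which is the kind of distribution information the abstract argues simple pooling discards.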