In Graph Neural Networks (GNNs), hierarchical pooling operators generate a coarser representation of the input data by creating local summaries of the graph structure and its vertex features. Considerable attention has been devoted to studying the expressive power of message-passing (MP) layers in GNNs, while a study of how pooling operators affect the expressivity of a GNN is still lacking. Additionally, despite the recent advances in the design of effective pooling operators, there is no principled criterion to compare them. Our work aims to fill this gap by providing sufficient conditions for a pooling operator to fully preserve the expressive power of the MP layers that precede it. These conditions serve as a universal and theoretically grounded criterion for choosing among existing pooling operators or designing new ones. Based on our theoretical findings, we review several existing pooling operators and identify those that fail to satisfy the expressiveness assumptions. Finally, we introduce an experimental setup to empirically measure the expressive power of a GNN equipped with pooling layers, in terms of its capability to perform a graph isomorphism test.
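To make the opening sentence concrete, here is a minimal sketch of what one cluster-based hierarchical pooling step can look like: node features are summed within each cluster, and two clusters are connected in the coarsened graph if any edge crosses between them. The `pool` function, the hard node-to-cluster assignment, and the toy graph are illustrative assumptions, not any specific operator from the paper or the literature.

```python
# Toy hierarchical pooling step: coarsen a graph under a hard
# node-to-cluster assignment (illustrative, not a specific operator).

def pool(adj, feats, assign):
    """Sum node features within each cluster; connect two clusters
    in the coarse graph if any edge crosses between them."""
    k = max(assign) + 1          # number of clusters
    dim = len(feats[0])          # feature dimension

    # Cluster features: sum of member-node features (a local summary).
    new_feats = [[0.0] * dim for _ in range(k)]
    for v, f in enumerate(feats):
        for d in range(dim):
            new_feats[assign[v]][d] += f[d]

    # Coarse adjacency: 1 iff an edge crosses between two clusters.
    new_adj = [[0] * k for _ in range(k)]
    for u in range(len(adj)):
        for v in range(len(adj)):
            if adj[u][v] and assign[u] != assign[v]:
                new_adj[assign[u]][assign[v]] = 1
    return new_adj, new_feats

# 4-node path graph 0-1-2-3 with scalar features, pooled into
# clusters {0, 1} and {2, 3}.
adj = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
feats = [[1.0], [2.0], [3.0], [4.0]]
new_adj, new_feats = pool(adj, feats, [0, 0, 1, 1])
print(new_feats)  # [[3.0], [7.0]]
print(new_adj)    # [[0, 1], [1, 0]]
```

In the terminology of the abstract, the cluster-wise sums are the "local summaries": the coarse graph keeps aggregate feature information and cross-cluster connectivity while discarding within-cluster detail, which is exactly the kind of information loss the paper's conditions are meant to constrain.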