Graph neural networks (GNNs) have limited expressive power, failing to represent many graph classes correctly. While more expressive graph representation learning (GRL) alternatives can distinguish some of these classes, they are significantly harder to implement, may not scale well, and have not been shown to outperform well-tuned GNNs in real-world tasks. Thus, devising simple, scalable, and expressive GRL architectures that also achieve real-world improvements remains an open challenge. In this work, we show the extent to which graph reconstruction -- reconstructing a graph from its subgraphs -- can mitigate the theoretical and practical problems currently faced by GRL architectures. First, we leverage graph reconstruction to build two new classes of expressive graph representations. Second, we show how graph reconstruction boosts the expressive power of any GNN architecture while being a (provably) powerful inductive bias for invariance to vertex removals. Empirically, we show how reconstruction can boost a GNN's expressive power -- while maintaining its invariance to permutations of the vertices -- by solving seven graph property tasks not solvable by the original GNN. Further, we demonstrate how it boosts the performance of state-of-the-art GNNs across nine real-world benchmark datasets.
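The core idea -- representing a graph through the multiset of its vertex-deleted subgraphs (its "deck") and aggregating per-subgraph embeddings in a permutation-invariant way -- can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `card_embedding` is a hypothetical stand-in (a sorted degree sequence) for a learned GNN encoder, and graphs are assumed to be given as adjacency dictionaries.

```python
def vertex_deleted_subgraph(adj, v):
    """Remove vertex v (and its incident edges) from an adjacency dict."""
    return {u: {w for w in nbrs if w != v}
            for u, nbrs in adj.items() if u != v}

def card_embedding(adj):
    """Stand-in for a GNN encoder applied to one 'card' of the deck.
    Here: the sorted degree sequence (a simple graph invariant)."""
    return tuple(sorted(len(nbrs) for nbrs in adj.values()))

def reconstruction_representation(adj):
    """Embed every vertex-deleted subgraph, then aggregate the resulting
    multiset by sorting, which makes the output invariant to vertex
    permutations and to the order in which cards are produced."""
    return tuple(sorted(card_embedding(vertex_deleted_subgraph(adj, v))
                        for v in adj))
```

For example, any two relabelings of the 4-cycle yield the same representation, since each card of the deck is a path on three vertices.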