Graph neural networks (GNNs) are a powerful inductive bias for modelling algorithmic reasoning procedures and data structures. Their prowess has mainly been demonstrated on tasks featuring Markovian dynamics, where querying any associated data structure depends only on its latest state. For many tasks of interest, however, it may be highly beneficial to support efficient data structure queries dependent on previous states. This requires tracking the data structure's evolution through time, placing significant pressure on the GNN's latent representations. We introduce Persistent Message Passing (PMP), a mechanism which endows GNNs with the capability of querying past states by explicitly persisting them: rather than overwriting node representations, it creates new nodes whenever required. PMP generalises out-of-distribution to test inputs more than 2x larger on dynamic temporal range queries, significantly outperforming GNNs which overwrite states.
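The persistence idea underlying PMP mirrors that of persistent data structures: an update never mutates a node in place but instead allocates a fresh node, so every historical version remains queryable. A minimal illustrative sketch of that principle (not the paper's GNN architecture; class and method names are hypothetical):

```python
# Illustration of persistence via node creation (hypothetical names,
# not the PMP implementation): writes allocate new nodes rather than
# overwriting, so all past states remain queryable by timestamp.

class Node:
    def __init__(self, value, prev=None):
        self.value = value
        self.prev = prev  # link to the node this one supersedes


class PersistentCell:
    """A single cell whose full write history is retained."""

    def __init__(self, value):
        self.versions = [Node(value)]

    def write(self, value):
        # Create a new node instead of mutating the latest one.
        self.versions.append(Node(value, prev=self.versions[-1]))
        return len(self.versions) - 1  # version id (timestamp)

    def read(self, t=-1):
        # Query the latest state by default, or any past state by id.
        return self.versions[t].value


cell = PersistentCell(0)
t1 = cell.write(10)
cell.write(20)
assert cell.read() == 20    # latest state
assert cell.read(t1) == 10  # an earlier state is still accessible
assert cell.read(0) == 0    # the initial state, too
```

A GNN that overwrites node embeddings must compress this entire history into a fixed-size latent state; persisting nodes instead sidesteps that compression.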