We study the numerical and Boolean expressiveness of MPLang, a declarative language that captures the computation of graph neural networks (GNNs) through linear message passing and activation functions. We begin with A-MPLang, the fragment without activation functions, and characterize its expressive power in terms of walk-summed features. Turning to bounded activation functions, we show that, under mild conditions, all eventually constant activations yield the same numerical and Boolean expressive power, and that this power subsumes previously established logics for GNNs with eventually constant activation functions but without linear layers. Finally, we prove the first expressive separation between unbounded and bounded activations in the presence of linear layers: MPLang with ReLU is strictly more powerful for numerical queries than MPLang with eventually constant activation functions such as truncated ReLU. The proof hinges on subtle interactions between linear aggregation and eventually constant non-linearities, and it establishes that GNNs using ReLU are more expressive than those restricted to eventually constant activations and linear layers.
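To make the separation concrete, here is a minimal sketch (not the paper's formal syntax) of a linear message-passing layer, using assumed names `mp_layer`, `W`, and `U` for the self- and neighbour-transformation matrices. It illustrates the intuition behind the result: under ReLU, node values can grow with walk counts, whereas an eventually constant activation such as truncated ReLU saturates and discards that numerical information.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def truncated_relu(x):
    # Eventually constant: clips to [0, 1], hence constant for inputs >= 1.
    return np.clip(x, 0.0, 1.0)

def mp_layer(X, A, W, U, act):
    """One linear message-passing layer (illustrative, assumed form):
    combine each node's own features (via W) with the sum of its
    neighbours' features (via U), then apply act entrywise."""
    return act(X @ W.T + (A @ X) @ U.T)

# Toy graph: a path on 3 nodes, scalar features all set to 1.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
X = np.ones((3, 1))
W = np.array([[1.0]])
U = np.array([[1.0]])

# After two layers, ReLU features count walks and keep growing,
# while truncated ReLU collapses every node to the constant 1.
for act in (relu, truncated_relu):
    H = X
    for _ in range(2):
        H = mp_layer(H, A, W, U, act)
    print(act.__name__, H.ravel())
```

Running this prints `relu [5. 7. 5.]` versus `truncated_relu [1. 1. 1.]`: the ReLU features track (weighted) walk counts, in line with the walk-summed characterization of the activation-free fragment, while the eventually constant activation erases them after a single layer.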