Graph neural networks (GNNs) are deep learning architectures for machine learning problems on graphs. It has recently been shown that the expressiveness of GNNs can be characterised precisely by the combinatorial Weisfeiler-Leman algorithms and by finite variable counting logics. The correspondence has even led to new, higher-order GNNs corresponding to the WL algorithm in higher dimensions. The purpose of this paper is to explain these descriptive characterisations of GNNs.
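To make the combinatorial side of this correspondence concrete, the following is a minimal sketch of 1-dimensional Weisfeiler-Leman colour refinement, the procedure referred to above. The function name wl_colour_refinement and the adjacency-list input format are illustrative assumptions, not notation from the paper.

```python
# Minimal sketch of 1-dimensional Weisfeiler-Leman colour refinement,
# assuming a graph given as an adjacency list {node: [neighbours]}.
# Names are illustrative, not taken from the paper.
from collections import Counter


def wl_colour_refinement(adj, initial_colours=None):
    """Iteratively refine node colours until the colouring stabilises."""
    colours = dict(initial_colours) if initial_colours else {v: 0 for v in adj}
    for _ in range(len(adj)):  # stabilisation takes at most |V| rounds
        # New colour of v = (own colour, multiset of neighbour colours).
        signatures = {
            v: (colours[v], tuple(sorted(Counter(colours[u] for u in adj[v]).items())))
            for v in adj
        }
        # Relabel signatures with small integers so colours stay comparable.
        relabel = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        new_colours = {v: relabel[signatures[v]] for v in adj}
        if new_colours == colours:
            break
        colours = new_colours
    return colours


if __name__ == "__main__":
    triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
    path3 = {0: [1], 1: [0, 2], 2: [1]}
    print(sorted(Counter(wl_colour_refinement(triangle).values()).items()))
    print(sorted(Counter(wl_colour_refinement(path3).values()).items()))
```

Two graphs whose stable colour histograms differ are distinguished by 1-WL, and by the correspondence discussed in the paper such graphs can also be separated by a message-passing GNN; the higher-dimensional WL algorithms mentioned above refine colours of k-tuples of vertices in an analogous way.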