While (message-passing) graph neural networks have clear limitations in approximating permutation-equivariant functions over graphs or general relational data, more expressive, higher-order graph neural networks do not scale to large graphs. They either operate on $k$-order tensors or consider all $k$-node subgraphs, implying an exponential dependence on $k$ in memory requirements, and do not adapt to the sparsity of the graph. By introducing new heuristics for the graph isomorphism problem, we devise a class of universal, permutation-equivariant graph networks which, unlike previous architectures, offer fine-grained control over the trade-off between expressivity and scalability and adapt to the sparsity of the graph. These architectures lead to vastly reduced computation times compared to standard higher-order graph networks in the supervised node- and graph-level classification and regression regime, while significantly improving on standard graph neural network and graph kernel architectures in predictive performance.
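To make the memory claim concrete, the following minimal sketch (illustrative only; the graph size $n$ and the orders $k$ are assumed values, not taken from the paper) counts the index sets that standard higher-order architectures materialize: a $k$-order tensor is indexed by all $n^k$ node tuples, while subgraph-based methods enumerate all $\binom{n}{k}$ $k$-node subsets, and neither count shrinks when the graph is sparse.

```python
import math

# Illustrative only: count the state spaces that standard higher-order
# graph networks materialize for a hypothetical graph with n nodes.
n = 1000  # assumed graph size, for illustration

for k in range(1, 5):
    tensor_entries = n ** k           # k-order tensor is indexed by V^k
    subgraph_count = math.comb(n, k)  # number of k-node subgraphs
    print(f"k={k}: n^k = {tensor_entries:.2e}, C(n,k) = {subgraph_count:.2e}")
```

Both counts grow exponentially in $k$ regardless of how sparse the edge set is, which is precisely the dependence the proposed sparsity-aware architectures are designed to avoid.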