Graph Neural Networks (GNNs), neural network architectures tailored to learning representations of graphs, have become a popular learning model for prediction tasks on nodes, graphs, and configurations of points, with wide success in practice. This article summarizes a selection of emerging theoretical results on the approximation and learning properties of widely used message passing GNNs and higher-order GNNs, focusing on representation, generalization, and extrapolation. Along the way, it highlights the relevant mathematical connections.