Network data can be conveniently modeled as a graph signal, in which data values are assigned to the nodes of a graph that describes the underlying network topology. Successful learning from network data builds on methods that effectively exploit this graph structure. In this work, we leverage graph signal processing to characterize the representation space of graph neural networks (GNNs). We discuss the role of graph convolutional filters in GNNs and show that any architecture built with such filters has the fundamental properties of permutation equivariance and stability to changes in the topology. These two properties offer insight into the workings of GNNs and help explain their scalability and transferability, which, coupled with their local and distributed nature, make GNNs powerful tools for learning in physical networks. We also introduce GNN extensions using edge-varying and autoregressive moving average graph filters and discuss their properties. Finally, we study the use of GNNs in recommender systems and in learning decentralized controllers for robot swarms.
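The permutation equivariance claimed for graph convolutional filters can be verified numerically. Below is a minimal NumPy sketch (our own illustration, not the paper's code) that applies a polynomial graph filter H(S)x = Σₖ hₖ Sᵏ x and checks that relabeling the nodes with a permutation matrix P commutes with filtering; the shift operator S, filter taps h, and signal x are all assumed placeholder values.

```python
import numpy as np

def graph_filter(S, x, h):
    """Apply a graph convolutional filter with taps h: y = sum_k h[k] * S^k x."""
    y = np.zeros_like(x, dtype=float)
    Skx = x.astype(float)          # S^0 x = x
    for hk in h:
        y += hk * Skx
        Skx = S @ Skx              # advance to the next power of the shift
    return y

rng = np.random.default_rng(0)
n = 5
S = rng.random((n, n))
S = (S + S.T) / 2                  # symmetric shift operator (e.g., weighted adjacency)
x = rng.random(n)                  # graph signal: one value per node
h = [0.5, 0.3, 0.2]                # filter taps (assumed values)

P = np.eye(n)[rng.permutation(n)]  # random permutation matrix (node relabeling)

# Equivariance: filtering the relabeled graph equals relabeling the filtered output,
# since H(P S P^T)(P x) = sum_k h_k (P S P^T)^k P x = P sum_k h_k S^k x.
lhs = graph_filter(P @ S @ P.T, P @ x, h)
rhs = P @ graph_filter(S, x, h)
assert np.allclose(lhs, rhs)
```

Because the filter output depends on the graph only through powers of S, the same computation is local and distributed: each application of S requires only exchanges between one-hop neighbors, which is what makes these filters suitable for physical networks.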