The aim of this work is to develop a fully distributed algorithmic framework for training graph convolutional networks (GCNs). The proposed method exploits the meaningful relational structure of the input data, which are collected by a set of agents that communicate over a sparse network topology. After formulating the centralized GCN training problem, we first show how to perform inference in a distributed scenario where the underlying data graph is split among different agents. Then, we propose a distributed gradient descent procedure to solve the GCN training problem. The resulting model distributes computation along three lines: during inference, during back-propagation, and during optimization. Convergence to stationary solutions of the GCN training problem is also established under mild conditions. Finally, we propose an optimization criterion to design the communication topology between agents so that it matches the graph describing data relationships. A wide set of numerical results validates our proposal. To the best of our knowledge, this is the first work combining graph convolutional neural networks with distributed optimization.
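To make the distributed-inference idea concrete, the following is a minimal sketch, not the paper's actual implementation: a toy 6-node data graph is partitioned among 3 agents, each agent holds only its block-row of the normalized adjacency and its local node features, and a single GCN layer is evaluated locally by aggregating the (possibly remote) features of neighboring partitions. All names, sizes, and the partition itself are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data graph: 6 nodes, symmetric adjacency with self-loops,
# symmetrically normalized as in a standard GCN layer (assumption).
A = np.array([[1, 1, 0, 0, 1, 0],
              [1, 1, 1, 0, 0, 0],
              [0, 1, 1, 1, 0, 0],
              [0, 0, 1, 1, 1, 0],
              [1, 0, 0, 1, 1, 1],
              [0, 0, 0, 0, 1, 1]], dtype=float)
d = A.sum(axis=1)
A_hat = A / np.sqrt(np.outer(d, d))          # D^{-1/2} A D^{-1/2}

X = rng.normal(size=(6, 4))                  # node features
W = rng.normal(size=(4, 3))                  # shared layer weights

# Hypothetical partition of the nodes among 3 agents (2 nodes each).
parts = [np.array([0, 1]), np.array([2, 3]), np.array([4, 5])]

def local_gcn_layer(agent, parts, A_hat, X, W):
    """One GCN layer evaluated by `agent` using only its block-row of A_hat.

    Features of out-of-partition neighbors play the role of the messages
    the agent would receive over the communication topology.
    """
    rows = parts[agent]
    out = np.zeros((len(rows), W.shape[1]))
    for other in range(len(parts)):
        cols = parts[other]
        block = A_hat[np.ix_(rows, cols)]
        if other != agent and not block.any():
            continue                         # no data edges -> no message needed
        out += block @ X[cols] @ W           # aggregate (possibly remote) features
    return np.maximum(out, 0.0)              # ReLU

# Each agent computes its slice of the layer output; stacking the slices
# reproduces the centralized forward pass.
H_dist = np.vstack([local_gcn_layer(a, parts, A_hat, X, W) for a in range(3)])
H_cent = np.maximum(A_hat @ X @ W, 0.0)
assert np.allclose(H_dist, H_cent)
```

The same block structure carries over to back-propagation and to the distributed gradient descent step, since each agent only needs gradients with respect to its own block-row; the sketch above only illustrates the inference part.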