When designing large-scale distributed controllers, the information-sharing constraints between sub-controllers, as defined by a communication topology interconnecting them, are as important as the controller itself. Controllers implemented using dense topologies typically outperform those implemented using sparse topologies, but it is also desirable to minimize the cost of controller deployment. Motivated by the above, we introduce a compact but expressive graph recurrent neural network (GRNN) parameterization of distributed controllers that is well suited for distributed controller and communication topology co-design. Our proposed parameterization enjoys a local and distributed architecture, similar to previous Graph Neural Network (GNN)-based parameterizations, while further naturally allowing for joint optimization of the distributed controller and communication topology needed to implement it. We show that the distributed controller/communication topology co-design task can be posed as an $\ell_1$-regularized empirical risk minimization problem that can be efficiently solved using stochastic gradient methods. We run extensive simulations to study the performance of GRNN-based distributed controllers and show that (a) they achieve performance comparable to GNN-based controllers while having fewer free parameters, and (b) our method allows for performance/communication density tradeoff curves to be efficiently approximated.
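To make the $\ell_1$-regularized empirical risk minimization pattern concrete, the following is a minimal toy sketch (not the paper's actual controller parameterization — the quadratic loss, problem sizes, and function names here are illustrative assumptions) showing how an $\ell_1$ penalty solved by proximal gradient steps drives most parameters exactly to zero, which is the mechanism that sparsifies a communication topology:

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||x||_1: shrinks entries toward zero,
    # setting small ones exactly to zero (this is what induces sparsity).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def l1_regularized_erm(A, b, lam=0.05, lr=0.05, steps=2000):
    """Minimize (1/2m)||A w - b||^2 + lam * ||w||_1 via proximal gradient
    (ISTA). A stand-in for the smooth empirical control cost in the paper."""
    m, n = A.shape
    w = np.zeros(n)
    for _ in range(steps):
        grad = A.T @ (A @ w - b) / m            # gradient of the smooth risk
        w = soft_threshold(w - lr * grad, lr * lam)  # prox step enforces sparsity
    return w

# Toy data with a sparse ground truth: only 3 of 20 coefficients are active,
# mimicking a dense parameterization where few communication links matter.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 20))
w_true = np.zeros(20)
w_true[:3] = [2.0, -1.5, 1.0]
b = A @ w_true + 0.01 * rng.standard_normal(100)

w_hat = l1_regularized_erm(A, b)
```

Sweeping the regularization weight `lam` traces out exactly the kind of performance/sparsity tradeoff curve the abstract describes: larger `lam` zeroes out more entries (a sparser topology) at the cost of a higher empirical risk.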