Distributed algorithms play an increasingly important role in many applications, such as machine learning, signal processing, and control, and significant research effort has been devoted to developing and analyzing new algorithms for these applications. In this work, we provide a fresh perspective for understanding, analyzing, and designing distributed optimization algorithms. Through the lens of multi-rate feedback control, we show that a wide class of distributed algorithms, including popular decentralized/federated schemes such as decentralized gradient descent, gradient tracking, and federated averaging, can be viewed as discretizing a certain continuous-time feedback control system, possibly with multiple sampling rates. This key observation not only allows us to develop a generic framework for analyzing the convergence of the entire algorithm class; more importantly, it also leads to an interesting way of designing new distributed algorithms. We develop the theory behind our framework and provide examples to highlight how it can be used in practice.
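To make the discretization viewpoint concrete, the following is a minimal sketch (not the paper's framework) of how one standard algorithm, decentralized gradient descent (DGD), arises as a forward-Euler discretization of the continuous-time dynamics x_dot = -(I - W)x - alpha*grad F(x), where W is a doubly stochastic mixing matrix. The ring network, the quadratic local costs f_i(x) = 0.5*(x - b_i)^2, and all parameter values are hypothetical choices for illustration.

```python
import numpy as np

n = 4  # number of agents
# Hypothetical local quadratic costs f_i(x) = 0.5 * (x - b_i)^2,
# so the stacked local gradients are simply x - b.
b = np.array([1.0, 2.0, 3.0, 4.0])
grad = lambda x: x - b

# Doubly stochastic mixing matrix W for a 4-node ring graph.
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

alpha = 0.05  # gradient gain in the continuous-time system
h = 1.0       # Euler step; h = 1 recovers the textbook DGD update
I = np.eye(n)

x = np.zeros(n)  # each agent holds one (scalar) local copy
for _ in range(2000):
    # Forward-Euler step of x_dot = -(I - W) x - alpha * grad F(x).
    # With h = 1 this is exactly DGD: x <- W x - alpha * grad F(x).
    x = x + h * (-(I - W) @ x - alpha * grad(x))

# The iterates approach consensus near the minimizer of the average
# cost (mean of b), up to the well-known O(alpha) DGD bias.
print(x.mean(), np.ptp(x))
```

With a constant step size, DGD converges only to a neighborhood of the true optimum; the residual disagreement across agents shrinks as alpha is decreased, which is one of the behaviors a continuous-time analysis of this kind of system makes transparent.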