This paper deals with a network of computing agents aiming to solve an online optimization problem in a distributed fashion, i.e., by means of local computation and communication, without any central coordinator. We propose the gradient tracking with adaptive momentum estimation (GTAdam) distributed algorithm, which combines a gradient tracking mechanism with first- and second-order momentum estimates of the gradient. The algorithm is analyzed in the online setting for strongly convex cost functions with Lipschitz continuous gradients. We provide an upper bound on the dynamic regret consisting of a term related to the initial conditions and a term related to the temporal variations of the objective functions. Moreover, a linear convergence rate is guaranteed in the static setup. The algorithm is tested on a time-varying classification problem, on a (moving) target localization problem, and in a stochastic optimization setup from image classification. In these numerical experiments from multi-agent learning, GTAdam outperforms state-of-the-art distributed optimization methods.
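For intuition, below is a minimal Python sketch of a GTAdam-style iteration. The specific recursion (Adam-like moment estimates driven by the tracked gradient, a doubly stochastic mixing matrix `W`, and the hyperparameters `alpha`, `beta1`, `beta2`, `eps`) is an illustrative assumption pieced together from the description above, not the paper's exact update.

```python
import numpy as np

def gtadam_step(W, X, S, M, V, grad_fn, G_old,
                alpha=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    """One synchronous round of a GTAdam-style update (illustrative sketch).

    W       : (n, n) doubly stochastic mixing matrix of the agent network
    X       : (n, d) stacked local decision variables x_i
    S       : (n, d) gradient trackers s_i (initialized to local gradients)
    M, V    : (n, d) first/second moment estimates (initialized to zeros)
    grad_fn : maps (n, d) stacked iterates to (n, d) stacked local gradients
    G_old   : (n, d) local gradients evaluated at the current iterates
    """
    # Adam-style moment estimates, driven by the tracked gradient s_i
    # instead of the raw local gradient.
    M = beta1 * M + (1 - beta1) * S
    V = beta2 * V + (1 - beta2) * S**2
    # Consensus (mixing) step plus an adaptive descent direction.
    X_next = W @ X - alpha * M / (np.sqrt(V) + eps)
    # Gradient tracking: mix the trackers, then correct with the local
    # gradient innovation so each s_i tracks the network-average gradient.
    G_next = grad_fn(X_next)
    S_next = W @ S + G_next - G_old
    return X_next, S_next, M, V, G_next

# Toy usage: n agents minimizing sum_i ||x - b_i||^2 over a ring network.
n, d = 4, 3
rng = np.random.default_rng(0)
B = rng.normal(size=(n, d))
grad = lambda X: 2.0 * (X - B)  # local gradients, stacked row-wise
W = 0.5 * np.eye(n) + 0.25 * (np.roll(np.eye(n), 1, 0)
                              + np.roll(np.eye(n), -1, 0))
X, M, V = np.zeros((n, d)), np.zeros((n, d)), np.zeros((n, d))
S = grad(X)
G = S.copy()
for _ in range(500):
    X, S, M, V, G = gtadam_step(W, X, S, M, V, grad, G)
# All agents should approach the global minimizer, the average of the b_i.
```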