In this paper, we investigate a distributed aggregative optimization problem over a network, where each agent has its own local cost function that depends not only on its local state variable but also on an aggregated function of the state variables of all agents. To accelerate the optimization process, we combine the heavy-ball and Nesterov accelerated methods with distributed aggregative gradient tracking, and propose two novel algorithms, DAGT-HB and DAGT-NES, for solving the distributed aggregative optimization problem. We prove that both DAGT-HB and DAGT-NES converge to the optimal solution at a global $\mathbf{R}$-linear rate when the objective function is smooth and strongly convex and the parameters (e.g., the step size and the momentum coefficients) are selected within certain ranges. A numerical experiment on the optimal placement problem is given to verify the effectiveness and superiority of the proposed algorithms.
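The heavy-ball momentum that DAGT-HB builds on can be illustrated in isolation. The sketch below applies the classical heavy-ball update to a simple strongly convex quadratic rather than the full distributed aggregative setting; the objective, the step size `alpha`, and the momentum coefficient `beta` here are illustrative assumptions, not the paper's tuned parameters.

```python
import numpy as np

def heavy_ball(grad, x0, alpha=0.1, beta=0.5, iters=200):
    """Heavy-ball iteration: x_{k+1} = x_k - alpha * grad(x_k) + beta * (x_k - x_{k-1}).

    The extra momentum term beta * (x_k - x_{k-1}) is what distinguishes
    this from plain gradient descent and drives the acceleration.
    """
    x_prev = x0.copy()
    x = x0.copy()
    for _ in range(iters):
        x_next = x - alpha * grad(x) + beta * (x - x_prev)
        x_prev, x = x, x_next
    return x

# Illustrative strongly convex objective f(x) = 0.5 * ||x - c||^2,
# whose unique minimizer is c (an assumption for this sketch).
c = np.array([1.0, -2.0])
grad = lambda x: x - c
x_star = heavy_ball(grad, np.zeros(2))
```

For this choice of `alpha` and `beta` the iterates contract linearly toward the minimizer `c`, matching the linear-rate behavior the momentum term is meant to provide; DAGT-HB couples this same update with aggregative gradient tracking across agents.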