Distributed algorithms for solving linear equations in multi-agent networks have attracted great research attention, and many iteration-based distributed algorithms have been developed. Convergence speed is a key factor for distributed algorithms, and it depends on the spectral radius of the iteration matrix. However, the iteration matrix is determined by the network structure and can hardly be pre-tuned, so iteration-based distributed algorithms may converge very slowly when the spectral radius is close to 1. In contrast, in centralized optimization, the Conjugate Gradient (CG) method is widely adopted to speed up the convergence of centralized solvers, and it guarantees convergence in a fixed number of steps. In this paper, we propose a general distributed implementation of CG, called DCG. DCG requires only local communication and local computation, while inheriting CG's fast convergence. DCG is guaranteed to converge within $4Hn$ rounds, where $H$ is the maximum hop count of the network and $n$ is the number of nodes. We present applications of DCG to the least-squares problem and the network localization problem. The results show that the convergence speed of DCG is three orders of magnitude faster than that of the widely used Richardson iteration method.
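For context, the centralized CG method referenced above can be sketched as follows. This is a minimal illustration of classical CG for a symmetric positive-definite system $Ax = b$, which in exact arithmetic converges in at most $n$ iterations; it is not the paper's distributed DCG algorithm, and the function name and tolerance parameter are illustrative choices.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10):
    """Solve A x = b for symmetric positive-definite A.

    Classical CG: in exact arithmetic it terminates in at most n
    iterations, where n is the dimension of the system. (Sketch for
    illustration; DCG in the paper distributes these computations.)
    """
    n = b.shape[0]
    x = np.zeros(n)
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs_old = r @ r
    for _ in range(n):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)   # step size along p
        x = x + alpha * p
        r = r - alpha * Ap          # updated residual
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:   # converged early
            break
        p = r + (rs_new / rs_old) * p  # A-conjugate direction update
        rs_old = rs_new
    return x

# Example: a 2x2 SPD system is solved in at most 2 iterations.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

Each iteration needs only matrix-vector products and inner products, which is what makes a distributed implementation over a network feasible.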