This paper considers the multi-agent linear least-squares problem in a server-agent network. In this problem, the system comprises multiple agents, each holding a set of local data points, that are connected to a server. The goal for the agents is to compute a linear mathematical model that optimally fits the collective data points held by all the agents, without sharing their individual local data points. This goal can be achieved, in principle, using the server-agent variant of the traditional iterative gradient-descent method. The gradient-descent method converges linearly to a solution, and its rate of convergence is lower bounded by the conditioning of the agents' collective data points. If the data points are ill-conditioned, the gradient-descent method may require a large number of iterations to converge. We propose an iterative pre-conditioning technique that mitigates the deleterious effect of the conditioning of the data points on the rate of convergence of the gradient-descent method. We rigorously show that the resulting pre-conditioned gradient-descent method, with the proposed iterative pre-conditioning, achieves superlinear convergence when the least-squares problem has a unique solution. In general, the convergence is linear, with an improved rate of convergence in comparison to the traditional gradient-descent method and the state-of-the-art accelerated gradient-descent methods. We further illustrate the improved rate of convergence of our proposed algorithm through experiments on different real-world least-squares problems in both noise-free and noisy computation environments.
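To make the idea concrete, the following is a minimal, centralized sketch of one plausible iterative pre-conditioning scheme consistent with the description above: alongside the usual gradient-descent iterate, the server maintains a matrix K that is refined each iteration toward the inverse of a regularized Gram matrix (via a Richardson iteration), and the gradient step is multiplied by the current K. The function name ipg_least_squares, the step sizes, and the regularization parameter beta are illustrative assumptions, not the paper's prescribed values, and the agents' contributions are aggregated directly here rather than exchanged over a network.

```python
import numpy as np

def ipg_least_squares(A_blocks, b_blocks, num_iters=500, beta=1e-3):
    """Sketch: iteratively pre-conditioned gradient descent for
    min_x ||A x - b||^2, with rows of (A, b) partitioned among agents
    as (A_blocks, b_blocks). Centralized simulation for illustration."""
    d = A_blocks[0].shape[1]
    # Each agent would hold A_i^T A_i and A_i^T b_i locally; the server
    # only needs the aggregated sums (computed directly in this sketch).
    H = sum(Ai.T @ Ai for Ai in A_blocks)                             # A^T A
    g_const = sum(Ai.T @ bi for Ai, bi in zip(A_blocks, b_blocks))    # A^T b

    # Illustrative step size from the spectral norm of the regularized
    # Gram matrix (an assumption; the paper derives its own admissible range).
    I = np.eye(d)
    alpha = 1.0 / np.linalg.norm(H + beta * I, 2)

    x = np.zeros(d)
    K = np.zeros((d, d))  # pre-conditioner estimate
    for _ in range(num_iters):
        # Richardson iteration for (H + beta I) K = I, so that K
        # approaches the inverse of the regularized Gram matrix.
        K = K - alpha * ((H + beta * I) @ K - I)
        # Pre-conditioned gradient step; once K approximates the inverse,
        # a unit step behaves like a regularized Newton step.
        grad = H @ x - g_const
        x = x - K @ grad
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two "agents", each holding a block of rows of an ill-conditioned system.
    A = rng.standard_normal((200, 5)) @ np.diag([1.0, 1.0, 1.0, 0.05, 0.01])
    x_true = rng.standard_normal(5)
    b = A @ x_true
    x_hat = ipg_least_squares([A[:100], A[100:]], [b[:100], b[100:]])
    print("error:", np.linalg.norm(x_hat - x_true))
```

Note that the pre-conditioner update leaves the fixed point of the main iteration unchanged: the step vanishes exactly when H x = A^T b, so the iterate still converges to the least-squares solution while K only reshapes the search directions.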