This paper considers the multi-agent distributed linear least-squares problem. The system comprises multiple agents, each holding a locally observed set of data points, and a common server with which the agents can interact. The agents' goal is to compute a linear model that best fits the collective data points observed by all the agents. In this server-based distributed setting, the server cannot access the data points held by the agents. The recently proposed Iteratively Pre-conditioned Gradient-descent (IPG) method has been shown to converge faster than other existing distributed algorithms that solve this problem. In the IPG algorithm, the server and the agents perform numerous iterative computations, each of which relies on the entire batch of data points observed by the agents to update the current estimate of the solution. Here, we extend the idea of iterative pre-conditioning to the stochastic setting, where the server updates the estimate and the iterative pre-conditioning matrix based on a single randomly selected data point at every iteration. We show that the proposed Iteratively Pre-conditioned Stochastic Gradient-descent (IPSG) method converges linearly in expectation to a neighborhood of the solution. Importantly, we empirically show that the IPSG method's convergence rate compares favorably to those of prominent stochastic algorithms for solving the linear least-squares problem in server-based networks.