This paper focuses on distributed algorithm design for general smooth non-convex finite-sum optimization, which has wide applications in the signal processing and machine learning communities. In the distributed setting, a large number of samples is allocated to multiple agents in a network. Each agent computes its local stochastic gradient and communicates with its neighbors to seek the global optimum. In this paper, we develop a modified variance reduction scheme to handle the variance introduced by stochastic gradients. Combining gradient tracking and variance reduction, this paper proposes a distributed iterative algorithm, GT-VR, to solve large-scale non-convex finite-sum optimization over multi-agent networks. A complete and rigorous proof shows that the GT-VR algorithm converges to first-order stationary points with an $O(\frac{1}{k})$ convergence rate. In addition, we provide a complexity analysis of the proposed algorithm. Compared with some existing distributed algorithms, the proposed algorithm has lower iteration and communication complexity. Experimental comparisons between state-of-the-art algorithms and GT-VR verify the efficiency of the proposed algorithm.
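To make the combination of the two ingredients concrete, the sketch below illustrates a generic gradient-tracking loop with an SVRG-style variance-reduced local gradient estimator. It is a minimal illustration only: the abstract does not specify the GT-VR update rules, so the mixing matrix `W`, the snapshot schedule, and the estimator form are assumptions, not the paper's exact method.

```python
# Illustrative sketch only: combines a generic gradient-tracking consensus step
# with an SVRG-style variance-reduced local gradient estimator (assumed forms,
# not the exact GT-VR updates from the paper).
import numpy as np

def gt_vr_sketch(grad_fns, W, x0, step=0.01, n_iters=100, snapshot_every=10):
    """Hypothetical gradient-tracking + variance-reduction loop.

    grad_fns[i][j](x) -> gradient of sample j held by agent i.
    W                 -> doubly-stochastic mixing matrix of the network.
    """
    n_agents = len(grad_fns)
    x = np.tile(x0, (n_agents, 1))                   # local iterates, one row per agent
    snap = x.copy()                                  # SVRG-style snapshot points
    full = np.array([np.mean([g(snap[i]) for g in grad_fns[i]], axis=0)
                     for i in range(n_agents)])      # full local gradients at snapshots
    v = full.copy()                                  # variance-reduced gradient estimates
    y = v.copy()                                     # gradient trackers (track average gradient)

    rng = np.random.default_rng(0)
    for k in range(n_iters):
        x_new = W @ x - step * y                     # consensus mixing + descent step
        if k % snapshot_every == 0:                  # periodically refresh snapshots
            snap = x_new.copy()
            full = np.array([np.mean([g(snap[i]) for g in grad_fns[i]], axis=0)
                             for i in range(n_agents)])
        v_new = np.empty_like(v)
        for i in range(n_agents):
            j = rng.integers(len(grad_fns[i]))       # sample one local component
            v_new[i] = (grad_fns[i][j](x_new[i])
                        - grad_fns[i][j](snap[i]) + full[i])
        y = W @ y + v_new - v                        # gradient-tracking recursion
        x, v = x_new, v_new
    return x.mean(axis=0)
```

The tracker update `y = W @ y + v_new - v` keeps each agent's direction close to the network-average gradient estimate, while the snapshot correction keeps the variance of the stochastic estimates bounded; these are the two mechanisms the abstract refers to as gradient tracking and variance reduction.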