This paper proposes a distributed stochastic projection-free algorithm for large-scale constrained finite-sum optimization in which the constraint set is complicated, so that projecting onto it is expensive. The global cost function is distributed among multiple agents, each of which computes local stochastic gradients and communicates with its neighbors to solve the global problem. Stochastic gradient methods keep the per-iteration computational cost low, but the variance introduced by random sampling makes them slow to converge. To construct a convergent distributed stochastic projection-free algorithm, this paper incorporates a variance reduction technique and a gradient tracking technique into the Frank-Wolfe update. We develop a sampling rule for the variance reduction technique that controls the variance introduced by the stochastic gradients. Complete and rigorous proofs show that the proposed distributed projection-free algorithm converges at a sublinear rate and enjoys superior complexity guarantees for both convex and non-convex objective functions. Comparative simulations demonstrate the convergence and computational efficiency of the proposed algorithm.
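To illustrate the projection-free idea the abstract refers to, the following is a minimal single-agent sketch (not the paper's algorithm) of a stochastic Frank-Wolfe step with a simple gradient-averaging estimator, run on the L1 ball, whose linear minimization oracle is cheap even though projection-based updates would be costlier on more complicated sets. All function names and parameters here are illustrative assumptions.

```python
import numpy as np

def lmo_l1_ball(g, r=1.0):
    """Linear minimization oracle: argmin_{||s||_1 <= r} <g, s>.
    For the L1 ball this is a single signed coordinate, so no projection
    onto the constraint set is ever needed."""
    s = np.zeros_like(g)
    i = np.argmax(np.abs(g))
    s[i] = -r * np.sign(g[i])
    return s

def stochastic_frank_wolfe(grad_fn, x0, n_samples, T=500, r=1.0, seed=0):
    """Illustrative stochastic Frank-Wolfe loop: at each iteration, sample
    one component gradient, fold it into a running average d (a simple
    variance-reduction surrogate), and move toward the LMO direction."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    d = np.zeros_like(x0)            # averaged gradient estimate
    for t in range(T):
        i = rng.integers(n_samples)  # sample one component function
        g = grad_fn(x, i)
        rho = 2.0 / (t + 2)          # averaging weight
        d = (1 - rho) * d + rho * g
        s = lmo_l1_ball(d, r)        # projection-free direction
        gamma = 2.0 / (t + 2)        # standard Frank-Wolfe step size
        x = (1 - gamma) * x + gamma * s
    return x

# Toy finite-sum least squares: f(x) = (1/n) sum_i 0.5 * (a_i^T x - b_i)^2,
# with a planted solution that lies inside the unit L1 ball.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 10))
b = A @ (0.05 * np.ones(10))
grad_fn = lambda x, i: A[i] * (A[i] @ x - b[i])
x_hat = stochastic_frank_wolfe(grad_fn, np.zeros(10), n_samples=50)
```

Because every iterate is a convex combination of points returned by the oracle, feasibility is maintained automatically, which is the appeal of projection-free methods on complicated constraint sets.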