Motivated by the interest in communication-efficient methods for distributed machine learning, we consider the communication complexity of minimising a sum of $d$-dimensional functions $\sum_{i = 1}^N f_i (x)$, where each function $f_i$ is held by one of the $N$ different machines. Such tasks arise naturally in large-scale optimisation, where a standard solution is to apply variants of (stochastic) gradient descent. As our main result, we show that $\Omega( Nd \log d / \varepsilon)$ bits in total need to be communicated between the machines to find an additive $\varepsilon$-approximation to the minimum of $\sum_{i = 1}^N f_i (x)$. The result holds for deterministic algorithms, and for randomised algorithms under some restrictions on the parameter values. Importantly, our lower bounds require no assumptions on the structure of the algorithm, and are matched within constant factors for strongly convex objectives by a new variant of quantised gradient descent. The lower bounds are obtained by bringing tools from communication complexity over to distributed optimisation, an approach we hope will find further use in the future.
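To make the problem setting concrete, the following is a minimal, illustrative sketch of distributed quantised gradient descent for minimising $\sum_{i=1}^N f_i(x)$: each machine quantises its local gradient to a small number of bits per coordinate before communicating it, and a coordinator aggregates the quantised gradients and takes a descent step. The quantiser, step size, and toy quadratic objectives here are hypothetical choices for illustration only, not the specific algorithm analysed in the paper.

```python
import numpy as np

def stochastic_quantise(v, levels=16, rng=None):
    """Unbiased stochastic quantisation of v onto `levels` grid points between
    min(v) and max(v); conceptually, a machine sends the two range endpoints
    plus log2(levels) bits per coordinate instead of full-precision values."""
    rng = np.random.default_rng() if rng is None else rng
    lo, hi = v.min(), v.max()
    if hi == lo:
        return v.copy()
    scaled = (v - lo) / (hi - lo) * (levels - 1)
    floor = np.floor(scaled)
    frac = scaled - floor
    # Round up with probability equal to the fractional part, so E[quantised] = v.
    q = floor + (rng.random(v.shape) < frac)
    return lo + q / (levels - 1) * (hi - lo)

def distributed_quantised_gd(grads, x0, step=0.1, iters=200, levels=16, seed=0):
    """grads: list of local gradient oracles, one per machine (hypothetical interface)."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(iters):
        # Each machine quantises its local gradient before "sending" it to the coordinator.
        g = sum(stochastic_quantise(gr(x), levels, rng) for gr in grads)
        x -= step * g / len(grads)
    return x

if __name__ == "__main__":
    d, N = 10, 5
    rng = np.random.default_rng(1)
    # Toy local objectives f_i(x) = 0.5 * ||x - b_i||^2, whose sum is minimised at mean(b_i).
    bs = [rng.normal(size=d) for _ in range(N)]
    grads = [lambda x, b=b: x - b for b in bs]
    x_hat = distributed_quantised_gd(grads, np.zeros(d))
    print("distance to true minimiser:", np.linalg.norm(x_hat - np.mean(bs, axis=0)))
```

In this sketch the communication cost per round is roughly $N \cdot d \log_2(\text{levels})$ bits, which is the kind of quantity the paper's lower bound constrains as a function of the target accuracy $\varepsilon$.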