Hogwild! implements asynchronous Stochastic Gradient Descent (SGD), where multiple threads access in parallel a common repository containing training data, perform SGD iterations, and update a shared state that represents a jointly learned (global) model. We consider big data analysis where training data is distributed among local data sets in a heterogeneous way -- and we wish to move SGD computations to local compute nodes where the local data resides. The results of these local SGD computations are aggregated by a central "aggregator" which mimics Hogwild!. We show how local compute nodes can start with small mini-batch sizes that increase to larger ones in order to reduce communication cost (i.e., the number of interaction rounds with the aggregator). We improve on the state-of-the-art and show that $O(\sqrt{K})$ communication rounds suffice for heterogeneous data and strongly convex problems, where $K$ is the total number of gradient computations across all local compute nodes. For our scheme, we prove a \textit{tight} and novel non-trivial convergence analysis for strongly convex problems with {\em heterogeneous} data which does not rely on the bounded gradient assumption used in many existing publications. The tightness is a consequence of our proofs for lower and upper bounds of the convergence rate, which differ by a constant factor. We show experimental results for plain convex and non-convex problems for biased (i.e., heterogeneous) and unbiased local data sets.
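To illustrate the intuition behind the $O(\sqrt{K})$ bound, consider a hypothetical linearly increasing schedule (used here only as an example, not necessarily the exact schedule analyzed later): suppose round $i$ uses a local mini-batch of size $s_i = c\, i$ for some constant $c>0$. After $T$ communication rounds the total number of gradient computations is
\[
K \;=\; \sum_{i=1}^{T} s_i \;=\; c\,\frac{T(T+1)}{2} \;=\; \Theta(T^2),
\]
so the number of communication rounds satisfies $T = \Theta(\sqrt{K})$. By contrast, a constant mini-batch size $s$ requires $T = K/s = \Theta(K)$ rounds for the same amount of gradient computation.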