We analyze the complexity of biased stochastic gradient methods (SGD), where individual updates are corrupted by deterministic, i.e. biased, error terms. We derive convergence results for smooth (non-convex) functions and give improved rates under the Polyak-Łojasiewicz condition. We quantify how the magnitude of the bias impacts the attainable accuracy and the convergence rates (sometimes leading to divergence). Our framework covers many applications where either only biased gradient updates are available, or where biased updates are preferred over unbiased ones for performance reasons. For instance, in the domain of distributed learning, biased gradient compression techniques such as top-k compression have been proposed as a tool to alleviate the communication bottleneck, and in derivative-free optimization, only biased gradient estimators can be queried. We discuss a few guiding examples that show the broad applicability of our analysis.
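To make the setting concrete, the following is a minimal sketch (not the paper's code) of one of the guiding examples mentioned above: SGD where each stochastic gradient is passed through a deterministic top-k compressor before the update, which introduces a bias. The function names, step size, sparsity level k, and the toy quadratic objective are illustrative assumptions.

```python
import numpy as np

def top_k(g, k):
    """Keep the k largest-magnitude coordinates of g and zero out the rest.
    This compressor is deterministic, so the compressed gradient is a biased
    estimate of the true gradient."""
    idx = np.argpartition(np.abs(g), -k)[-k:]
    out = np.zeros_like(g)
    out[idx] = g[idx]
    return out

def biased_sgd(grad_fn, x0, lr=0.1, k=2, steps=100):
    """Run SGD where each (possibly stochastic) gradient is compressed with
    top-k before the update is applied (illustrative helper, hypothetical API)."""
    x = x0.copy()
    for _ in range(steps):
        g = grad_fn(x)             # stochastic gradient oracle
        x = x - lr * top_k(g, k)   # biased update: E[top_k(g)] != E[g] in general
    return x

# Toy usage: minimize a simple quadratic f(x) = 0.5 * ||A x||^2 with noisy gradients.
rng = np.random.default_rng(0)
A = rng.standard_normal((10, 10)) / np.sqrt(10)
grad = lambda x: A.T @ (A @ x) + 0.01 * rng.standard_normal(10)
x_final = biased_sgd(grad, x0=rng.standard_normal(10), lr=0.5, k=3, steps=500)
print(np.linalg.norm(A @ x_final))
```

As the abstract indicates, the magnitude of this bias determines both the achievable accuracy and the rate of convergence; with a very aggressive compressor (small k), the iterates may stall at a larger error floor or even diverge.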