Decentralized learning has been studied intensively in recent years, motivated by its wide applications in the context of federated learning. The majority of previous research focuses on the offline setting, in which the objective function is static. However, the offline setting becomes unrealistic in numerous machine learning applications where massive amounts of data arrive and change over time. In this paper, we propose \emph{decentralized online} algorithms for convex and continuous DR-submodular optimization, two classes of functions that arise in a variety of machine learning problems. Our algorithms achieve performance guarantees comparable to those in the centralized offline setting. Moreover, on average, each participant performs only a \emph{single} gradient computation per time step. Subsequently, we extend our algorithms to the bandit setting. Finally, we illustrate the competitive performance of our algorithms in real-world experiments.
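The performance guarantees mentioned above are typically formalized as regret; the abstract does not define the metric, so the following is a standard sketch under our assumptions, with $f_t$ the loss (or reward) revealed at time $t$, $x_t$ the decision played, $\mathcal{K}$ the constraint set, and $T$ the horizon. For online convex optimization, regret against the best fixed decision is
\[
\mathcal{R}_T \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \min_{x \in \mathcal{K}} \sum_{t=1}^{T} f_t(x),
\]
while for monotone continuous DR-submodular maximization guarantees are usually stated via the $(1-1/e)$-regret,
\[
\mathcal{R}_T^{1-1/e} \;=\; \Bigl(1-\tfrac{1}{e}\Bigr) \max_{x \in \mathcal{K}} \sum_{t=1}^{T} f_t(x) \;-\; \sum_{t=1}^{T} f_t(x_t).
\]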