We consider the problem of learning a nonlinear function over a network of learners in a fully decentralized fashion. We additionally assume an online setting, in which every learner receives streaming data locally. This learning model is referred to as fully distributed online learning (or fully decentralized online federated learning). For this model, we propose a novel multiple-kernel learning framework, named DOMKL. The proposed DOMKL is devised by harnessing the principles of an online alternating direction method of multipliers (ADMM) and a distributed Hedge algorithm. We theoretically prove that DOMKL achieves an optimal sublinear regret over T time slots, implying that every learner in the network can learn a common function whose gap from the best function in hindsight diminishes over time. Our analysis also reveals that DOMKL attains the same asymptotic performance as the state-of-the-art centralized approach while keeping local data at the edge learners. Via numerical tests with real datasets, we demonstrate the effectiveness of the proposed DOMKL on various online regression and time-series prediction tasks.
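To make the Hedge ingredient concrete, the following is a minimal sketch (not the authors' DOMKL implementation, which additionally uses an online ADMM consensus step across networked learners) of how a single learner could combine several kernels online with Hedge-style multiplicative weights. The Gaussian kernel bandwidths, the step size eta, the squared loss, and the random Fourier feature approximation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d, D = 5, 50                      # input dimension, number of random features
sigmas = [0.5, 1.0, 2.0]          # assumed bandwidths of three Gaussian kernels
eta = 0.1                         # assumed Hedge / gradient step size

# Random Fourier feature maps approximating each Gaussian kernel.
omegas = [rng.normal(scale=1.0 / s, size=(D, d)) for s in sigmas]
phases = [rng.uniform(0, 2 * np.pi, size=D) for _ in sigmas]

def features(x, p):
    return np.sqrt(2.0 / D) * np.cos(omegas[p] @ x + phases[p])

thetas = [np.zeros(D) for _ in sigmas]   # per-kernel model parameters
weights = np.ones(len(sigmas))           # Hedge weights over the kernels

for t in range(1000):
    x = rng.normal(size=d)
    y = np.sin(x.sum())                  # toy streaming target

    # Per-kernel predictions and the Hedge-weighted ensemble prediction.
    preds = np.array([features(x, p) @ thetas[p] for p in range(len(sigmas))])
    w = weights / weights.sum()
    y_hat = w @ preds

    # Multiplicative-weight (Hedge) update driven by each kernel's own loss ...
    losses = (preds - y) ** 2
    weights *= np.exp(-eta * losses)

    # ... and an online gradient step on each per-kernel model.
    for p in range(len(sigmas)):
        grad = 2.0 * (preds[p] - y) * features(x, p)
        thetas[p] -= eta * grad
```

In this sketch each kernel keeps its own online model, and the Hedge weights shrink the influence of kernels that incur large instantaneous losses; in the fully decentralized setting of the paper, learners would additionally exchange and reconcile their local models with neighbors.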