Online federated learning (OFL) is an emerging learning framework in which edge nodes perform online learning from continuously streaming local data and a server constructs a global model by aggregating the local models. Online multiple kernel learning (OMKL), which uses a preselected set of P kernels, is a good candidate for the OFL framework because it provides outstanding performance with low complexity and good scalability. However, a naive extension of OMKL to the OFL framework suffers from a heavy communication overhead that grows linearly with P. In this paper, we propose a novel multiple kernel-based OFL (MK-OFL) as a non-trivial extension of OMKL, which attains the same performance as the naive extension while reducing the communication overhead by a factor of P. We theoretically prove that MK-OFL achieves the optimal sublinear regret bound with respect to the best function in hindsight. Finally, we provide numerical tests of our approach on real-world datasets, which suggest its practicality.
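To make the setting concrete, the following is a minimal, hypothetical sketch of the generic OMKL idea that the abstract builds on: each of the P preselected kernels is approximated by random Fourier features, a per-kernel model is updated online, and the kernels are mixed with multiplicative (Hedge-style) weights. This illustrates OMKL in general, not the MK-OFL protocol itself; the class name, bandwidth choices, and step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rff_map(x, W, b):
    """Map input x to D random Fourier features approximating a Gaussian kernel."""
    return np.sqrt(2.0 / len(b)) * np.cos(W @ x + b)

class OMKLLearner:
    """Generic online multiple kernel learner (illustrative, not the paper's MK-OFL)."""
    def __init__(self, dim, D=50, bandwidths=(0.5, 1.0, 2.0), eta=0.1):
        # One random-feature model per preselected kernel (P = len(bandwidths)).
        self.P = len(bandwidths)
        self.eta = eta
        self.Ws = [rng.normal(0, 1.0 / s, size=(D, dim)) for s in bandwidths]
        self.bs = [rng.uniform(0, 2 * np.pi, size=D) for _ in bandwidths]
        self.thetas = [np.zeros(D) for _ in range(self.P)]   # per-kernel weights
        self.q = np.ones(self.P) / self.P                    # kernel mixing weights

    def predict(self, x):
        preds = np.array([th @ rff_map(x, W, b)
                          for th, W, b in zip(self.thetas, self.Ws, self.bs)])
        return float(self.q @ preds), preds

    def update(self, x, y):
        # Online gradient step per kernel, then a multiplicative-weight update
        # of the mixing weights based on each kernel's instantaneous squared loss.
        _, preds = self.predict(x)
        losses = (preds - y) ** 2
        for p in range(self.P):
            z = rff_map(x, self.Ws[p], self.bs[p])
            self.thetas[p] -= self.eta * 2 * (preds[p] - y) * z
        self.q *= np.exp(-self.eta * losses)
        self.q /= self.q.sum()
```

In a naive federated extension, each edge node would transmit all P per-kernel models (the `thetas` above) to the server every round, which is the linear-in-P overhead the abstract refers to; MK-OFL is claimed to avoid this by reducing the per-round communication to the equivalent of a single model.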