This paper focuses on online kernel learning over a decentralized network. Each agent in the network receives continuous streaming data locally and works collaboratively to learn a nonlinear prediction function that is globally optimal in the reproducing kernel Hilbert space with respect to the total instantaneous costs of all agents. To circumvent the curse of dimensionality in traditional online kernel learning, we utilize random feature (RF) mapping to convert the nonparametric kernel learning problem into a fixed-length parametric one in the RF space. We then propose a novel learning framework named Online Decentralized Kernel learning via Linearized ADMM (ODKLA) to efficiently solve the online decentralized kernel learning problem. To further improve communication efficiency, we incorporate quantization and communication-censoring strategies into the communication stage and develop the Quantized and Communication-censored ODKLA (QC-ODKLA) algorithm. We theoretically prove that both ODKLA and QC-ODKLA achieve the optimal sublinear regret $\mathcal{O}(\sqrt{T})$ over $T$ time slots. Through numerical experiments, we evaluate the learning effectiveness, communication efficiency, and computation efficiency of the proposed methods.
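For intuition on the RF mapping step, the following is a minimal sketch of the standard random Fourier feature construction for a Gaussian (RBF) kernel, which turns the nonparametric kernel problem into a fixed-length parametric one; it is not the paper's implementation, and the feature dimension D and bandwidth sigma below are hypothetical choices.

```python
import numpy as np

def random_fourier_features(X, D=100, sigma=1.0, seed=0):
    """Map samples X (n x d) to a D-dimensional RF space so that
    z(x)^T z(y) approximates the RBF kernel exp(-||x-y||^2 / (2 sigma^2))."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Frequencies drawn from the Fourier transform (spectral density) of the RBF kernel.
    W = rng.normal(scale=1.0 / sigma, size=(d, D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

# Toy usage: each agent can then learn a fixed-length weight vector theta
# in the RF space, predicting f(x) = theta^T z(x) regardless of how many
# streaming samples have been observed.
X = np.random.randn(5, 3)        # 5 samples, 3 features (toy data)
Z = random_fourier_features(X)   # shape (5, 100)
print(Z.shape)
```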