To meet the performance, safety, and latency requirements of many IoT applications, intelligent decisions must be made in real time at the network edge. However, constrained resources and the limited amount of local data pose significant challenges to the development of edge AI. To overcome these challenges, we explore continual edge learning that leverages knowledge transfer from previous tasks. Aiming at fast and continual edge learning, we propose a platform-aided federated meta-learning architecture in which edge nodes collaboratively learn a meta-model, aided by knowledge transferred from prior tasks. The edge learning problem is cast as a regularized optimization problem, where the valuable knowledge learned from previous tasks is extracted as a regularizer. We then devise an ADMM-based federated meta-learning algorithm, namely ADMM-FedMeta, in which ADMM offers a natural mechanism for decomposing the original problem into subproblems that can be solved in parallel across the edge nodes and the platform. Further, a variant of the inexact-ADMM method is employed, in which the subproblems are `solved' via linear approximation together with Hessian estimation, reducing the computational cost per round to $\mathcal{O}(n)$. We provide a comprehensive analysis of ADMM-FedMeta for the general non-convex case, in terms of its convergence properties, rapid-adaptation performance, and the forgetting effect of prior knowledge transfer. Extensive experiments demonstrate the effectiveness and efficiency of ADMM-FedMeta and show that it substantially outperforms existing baselines.
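As a rough illustration of the decomposition described above, the regularized edge learning problem can be written in standard consensus form; the notation here ($F_i$ for node $i$'s meta-learning loss, $\mathcal{R}$ for the knowledge-transfer regularizer, $\rho$ and $\lambda_i$ for the penalty parameter and dual variables) is an assumed sketch, not necessarily the exact formulation developed in the body of the paper:
% Illustrative consensus-form sketch; symbols are assumed notation.
\begin{align}
  \min_{\theta,\,\{\theta_i\}} \quad & \sum_{i=1}^{N} F_i(\theta_i) + \mathcal{R}(\theta)
  \qquad \text{s.t.} \quad \theta_i = \theta, \;\; i = 1,\dots,N,
\end{align}
with augmented Lagrangian
\begin{align}
  \mathcal{L}_\rho\big(\theta,\{\theta_i\},\{\lambda_i\}\big)
  = \mathcal{R}(\theta) + \sum_{i=1}^{N}\Big[\, F_i(\theta_i)
  + \langle \lambda_i,\, \theta_i - \theta \rangle
  + \tfrac{\rho}{2}\,\|\theta_i - \theta\|^2 \Big].
\end{align}
This splits naturally across the architecture: each edge node updates its local $\theta_i$ (inexactly, via a first-order approximation of $F_i$ with an estimated Hessian, which is where the $\mathcal{O}(n)$ per-round cost arises), the platform updates the global $\theta$, and the dual variables follow the usual ADMM step $\lambda_i \leftarrow \lambda_i + \rho(\theta_i - \theta)$.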