Federated learning (FL) has emerged as a promising learning paradigm in which only local model parameters (gradients) are shared. Private user data never leaves the local devices, thus preserving data privacy. However, recent research has shown that even when a user never shares local data, exchanging model parameters without protection can still leak private information. Moreover, in wireless systems, the frequent transmission of model parameters can cause tremendous bandwidth consumption and network congestion when the model is large. To address this problem, we propose a new FL framework with efficient over-the-air parameter aggregation and strong privacy protection for both user data and models. We achieve this by introducing pairwise cancellable random artificial noises (PCR-ANs) on end devices. Compared with existing over-the-air computation (AirComp) based FL schemes, our design provides stronger privacy protection. We analytically derive the secrecy capacity and the convergence rate of the proposed wireless FL aggregation algorithm.
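To illustrate the pairwise-cancellation idea behind PCR-ANs, the following is a minimal sketch, not the paper's actual construction: each ordered device pair (i, j) shares a noise vector that device i adds and device j subtracts, so every individual masked update looks random, yet the noises cancel exactly when the signals are summed during over-the-air aggregation. All variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
num_devices = 4   # illustrative number of end devices
dim = 5           # illustrative model-update dimension

# Local model updates (gradients) held by each device.
updates = [rng.normal(size=dim) for _ in range(num_devices)]

# Pairwise cancellable noises: for each pair (i, j) with i < j,
# draw a shared noise vector known only to devices i and j.
pair_noise = {
    (i, j): rng.normal(size=dim)
    for i in range(num_devices)
    for j in range(i + 1, num_devices)
}

# Each device masks its update: add noise for partners with a larger
# index, subtract noise for partners with a smaller index.
masked = []
for i in range(num_devices):
    m = updates[i].copy()
    for j in range(num_devices):
        if i < j:
            m += pair_noise[(i, j)]
        elif i > j:
            m -= pair_noise[(j, i)]
    masked.append(m)

# Over-the-air aggregation sums all masked signals on the channel;
# the pairwise noises cancel, recovering the true aggregate.
aggregate = np.sum(masked, axis=0)
true_sum = np.sum(updates, axis=0)
```

Each `masked[i]` is statistically masked against an eavesdropper, while `aggregate` equals `true_sum` exactly; the paper's PCR-AN design additionally accounts for wireless channel effects and the resulting secrecy-capacity analysis, which this sketch omits.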