Federated Learning Networks (FLNs) have been envisaged as a promising paradigm for collaboratively training models among mobile devices without exposing their local private data. Due to the need for frequent model updates and communications, FLNs are vulnerable to various attacks (e.g., eavesdropping attacks, inference attacks, poisoning attacks, and backdoor attacks). Balancing privacy protection with efficient distributed model training is therefore a key challenge for FLNs. Existing countermeasures incur high computation costs and are designed only for specific attacks on FLNs. In this paper, we bridge this gap by proposing the Covert Communication-based Federated Learning (CCFL) approach. Building on covert communication, an emerging communication security technique that hides the very existence of wireless communication activities, CCFL degrades attackers' ability to extract useful information from the FLN training protocol, which is a fundamental step for most existing attacks, and thereby holistically enhances the privacy of FLNs. We extensively evaluate CCFL under real-world settings in which FL latency is optimized subject to given security requirements. Numerical results demonstrate the effectiveness of the proposed approach in terms of both training efficiency and communication security.