We propose and implement a Privacy-preserving Federated Learning (PPFL) framework for mobile systems to limit privacy leakages in federated learning. Leveraging the widespread presence of Trusted Execution Environments (TEEs) in high-end and mobile devices, we utilize TEEs on clients for local training, and on servers for secure aggregation, so that model/gradient updates are hidden from adversaries. To address the limited memory size of current TEEs, we leverage greedy layer-wise training to train each layer of the model inside the trusted area until convergence. The performance evaluation of our implementation shows that PPFL significantly improves privacy while incurring small system overheads on the client side. In particular, PPFL successfully defends the trained model against data reconstruction, property inference, and membership inference attacks. Furthermore, it achieves comparable model utility with fewer communication rounds (0.54x) and a similar amount of network traffic (1.002x) compared to standard federated learning of a complete model. This is achieved while introducing only up to ~15% CPU time, ~18% memory usage, and ~21% energy consumption overhead on PPFL's client side.
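To make the memory-saving idea concrete, below is a minimal sketch of greedy layer-wise training in PyTorch: each layer is trained with a temporary classifier head on top of an already-frozen prefix, then frozen before moving to the next layer. This is only an illustration under simplified assumptions; the TEE boundary is not modeled (in PPFL, only the layer under training would reside inside the enclave), the convergence test is reduced to a fixed epoch budget, and names such as `train_until_plateau` are hypothetical, not part of PPFL's actual implementation.

```python
# Sketch of greedy layer-wise training: train one layer at a time,
# freezing earlier layers, so only a single layer (plus a temporary
# head) needs to fit in limited (e.g., TEE) memory at once.
import torch
import torch.nn as nn

def train_until_plateau(layer, head, frozen, data, epochs=5):
    """Train `layer` (+ temporary `head`) on features from the frozen
    prefix. Convergence is simplified to a fixed epoch budget."""
    opt = torch.optim.SGD(
        list(layer.parameters()) + list(head.parameters()), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in data:
            with torch.no_grad():      # frozen prefix: no gradients needed
                feats = frozen(x)
            loss = loss_fn(head(layer(feats)), y)
            opt.zero_grad()
            loss.backward()
            opt.step()

# Synthetic two-class dataset standing in for a client's local data.
xs = torch.randn(256, 16)
ys = (xs.sum(dim=1) > 0).long()
data = [(xs[i:i + 32], ys[i:i + 32]) for i in range(0, 256, 32)]

layers = [nn.Sequential(nn.Linear(16, 16), nn.ReLU()) for _ in range(3)]
frozen = nn.Identity()
for layer in layers:
    head = nn.Linear(16, 2)            # temporary head, discarded afterwards
    train_until_plateau(layer, head, frozen, data)
    for p in layer.parameters():       # freeze before the next layer trains
        p.requires_grad_(False)
    frozen = nn.Sequential(frozen, layer)
```

In the federated setting described above, each per-layer training phase would run across clients (inside their TEEs) with secure aggregation of that layer's updates on the server, before the system advances to the next layer.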