We propose and implement a Privacy-preserving Federated Learning ($PPFL$) framework for mobile systems to limit privacy leakages in federated learning. Leveraging the widespread presence of Trusted Execution Environments (TEEs) in high-end and mobile devices, we utilize TEEs on clients for local training and on servers for secure aggregation, so that model/gradient updates are hidden from adversaries. To address the limited memory size of current TEEs, we leverage greedy layer-wise training, training each layer of the model inside the trusted area until it converges. The performance evaluation of our implementation shows that $PPFL$ can significantly improve privacy while incurring small system overheads on the client side. In particular, $PPFL$ can successfully defend the trained model against data reconstruction, property inference, and membership inference attacks. Furthermore, it can achieve comparable model utility with fewer communication rounds (0.54$\times$) and a similar amount of network traffic (1.002$\times$) compared to standard federated learning of a complete model. This is achieved while introducing only up to ~15% CPU time, ~18% memory usage, and ~21% energy consumption overhead on $PPFL$'s client side.
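To make the greedy layer-wise idea concrete, below is a minimal single-process sketch in PyTorch. This is our own illustration under simplifying assumptions, not the paper's TEE-based implementation: the toy CNN, the synthetic data, and names such as `train_one_layer` are hypothetical, and "until convergence" is approximated by a fixed number of epochs. Each layer is trained together with a temporary classifier head while earlier layers stay frozen, mirroring how $PPFL$ keeps only the active layer inside the TEE.

```python
# Hypothetical sketch of greedy layer-wise training. In PPFL this loop runs
# inside client TEEs across federated rounds; here it is a single process.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data standing in for one client's local dataset.
X = torch.randn(256, 1, 28, 28)
y = torch.randint(0, 10, (256,))

# The model, split into layers that are trained one at a time.
layers = nn.ModuleList([
    nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU()),
    nn.Sequential(nn.Conv2d(8, 16, 3, padding=1), nn.ReLU()),
])

def train_one_layer(idx, epochs=3):
    """Train layers[idx] (plus a throwaway head) with earlier layers frozen,
    mimicking how only the current layer is updated inside the TEE."""
    # Freeze all layers except the one currently being trained.
    for i, layer in enumerate(layers):
        for p in layer.parameters():
            p.requires_grad = i == idx
    # Size a temporary classifier head from the current feature shape.
    with torch.no_grad():
        feat = X
        for layer in layers[: idx + 1]:
            feat = layer(feat)
    head = nn.Linear(feat.flatten(1).shape[1], 10)
    params = list(layers[idx].parameters()) + list(head.parameters())
    opt = torch.optim.SGD(params, lr=0.05)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):  # fixed epochs stand in for "until convergence"
        out = X
        for layer in layers[: idx + 1]:
            out = layer(out)
        loss = loss_fn(head(out.flatten(1)), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return loss.item()

for i in range(len(layers)):
    print(f"layer {i}: final loss {train_one_layer(i):.3f}")
```

Because only one layer's parameters (plus a small head) are live at any time, the peak trainable state fits within a TEE's limited memory, at the cost of the extra communication rounds quantified above.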