Federated Learning (FL) opens new perspectives for training machine learning models while keeping personal data on users' premises. Specifically, in FL, models are trained on users' devices, and only model updates (i.e., gradients) are sent to a central server for aggregation. However, the long list of inference attacks published in recent years that leak private data from gradients has emphasized the need to devise effective protection mechanisms to incentivize the adoption of FL at scale. While solutions exist to mitigate these attacks on the server side, little has been done to protect users from attacks performed on the client side. In this context, the use of Trusted Execution Environments (TEEs) on the client side is among the most promising solutions. However, existing frameworks (e.g., DarkneTZ) require statically placing a large portion of the machine learning model into the TEE to effectively protect against complex attacks or combinations of attacks. We present GradSec, a solution that protects only the sensitive layers of a machine learning model in a TEE, either statically or dynamically, hence reducing the TCB size and the overall training time by up to 30% and 56%, respectively, compared to state-of-the-art competitors.