Free-rider attacks against federated learning consist in disguising participation in the federated learning process with the goal of obtaining the final aggregated model without actually contributing any data. Such attacks are critical in sensitive applications of federated learning, where data is scarce and the model has high commercial value. We introduce here the first theoretical and experimental analysis of free-rider attacks on federated learning schemes based on iterative parameter aggregation, such as FedAvg or FedProx, and provide formal guarantees that these attacks converge to the aggregated model of the fair participants. We first show that a straightforward implementation of this attack consists in simply not updating the local parameters during the iterative federated optimization. Since this attack can be detected by simple countermeasures at the server level, we then study more sophisticated disguising schemes based on stochastic updates of the free-rider parameters. We demonstrate the proposed strategies in a number of experimental scenarios, in both iid and non-iid settings. We conclude by providing recommendations to avoid free-rider attacks in real-world applications of federated learning, especially in sensitive domains where the security of data and models is critical.
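To make the two attack strategies concrete, the following is a minimal sketch of one FedAvg round with a free-rider among fair participants. It is illustrative only: the Gaussian noise model, the fixed parameter vectors standing in for the fair clients' local training, and the helper names (plain_free_rider, disguised_free_rider, fedavg) are assumptions for exposition, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def plain_free_rider(global_params):
    # Plain free-riding: return the received global parameters unchanged,
    # i.e. skip local training entirely.
    return global_params.copy()

def disguised_free_rider(global_params, sigma=1e-3):
    # Disguised free-riding: perturb the received parameters with small
    # stochastic noise so the returned "update" mimics the variability
    # of a genuine local SGD step. (Gaussian noise is an assumption here.)
    return global_params + rng.normal(0.0, sigma, size=global_params.shape)

def fedavg(updates, weights):
    # Server-side FedAvg: weighted average of the clients' returned parameters.
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, updates))

# One toy round: two fair clients (their locally trained parameters are
# stubbed as fixed vectors) plus one disguised free-rider.
global_params = np.zeros(4)
fair_updates = [np.array([1.0, 0.0, 0.5, -0.5]),
                np.array([0.8, 0.2, 0.4, -0.6])]
updates = fair_updates + [disguised_free_rider(global_params)]
global_params = fedavg(updates, weights=[100, 100, 100])
print(global_params)
```

Iterating such rounds, the free-rider's returned parameters track the global model, which is the intuition behind the convergence guarantees stated above: the aggregate drifts toward the average of the fair participants' models while the free-rider contributes no data.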