Federated Learning (FL) is a distributed learning paradigm that enables mutually untrusting clients to collaboratively train a common machine learning model. Client data privacy is paramount in FL. At the same time, the model must be protected from poisoning attacks by adversarial clients. Existing solutions address these two problems in isolation. We present FedPerm, a new FL algorithm that addresses both problems by combining a novel intra-model parameter shuffling technique that amplifies data privacy with Private Information Retrieval (PIR) based techniques that permit cryptographic aggregation of clients' model updates. The combination of these techniques further helps the federation server constrain parameter updates from clients so as to curtail the effects of model poisoning attacks by adversarial clients. We further present FedPerm's unique hyperparameters, which can be used to effectively trade off computation overhead against model utility. Our empirical evaluation on the MNIST dataset demonstrates FedPerm's effectiveness over existing Differential Privacy (DP) enforcement solutions in FL.
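To give a concrete sense of the intra-model parameter shuffling idea mentioned above, the following is a minimal, hypothetical sketch (not FedPerm's actual protocol): a client flattens its model update, applies a private random permutation before releasing it, and can later invert that permutation. All names and the use of NumPy are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of intra-model parameter shuffling (illustration only,
# not FedPerm's protocol): the client keeps the permutation secret and can
# invert it to restore the original parameter order.

rng = np.random.default_rng(seed=42)  # seed chosen arbitrarily for the example


def shuffle_update(update: np.ndarray, perm: np.ndarray) -> np.ndarray:
    """Permute a flattened model update with a client-held permutation."""
    return update[perm]


def unshuffle_update(shuffled: np.ndarray, perm: np.ndarray) -> np.ndarray:
    """Invert the permutation to recover the original parameter order."""
    inverse = np.argsort(perm)
    return shuffled[inverse]


# Toy "model update" with 8 parameters.
update = rng.normal(size=8)
perm = rng.permutation(update.size)  # private to the client

shuffled = shuffle_update(update, perm)
recovered = unshuffle_update(shuffled, perm)
assert np.allclose(recovered, update)
```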