Federated Learning (FL) is designed to protect the data privacy of each client during training by transmitting only models rather than the original data. However, the trained model may still memorize certain information about the training data. With recent legislation on the right to be forgotten, it is essential for an FL model to be able to forget what it has learned from each client. We propose a novel federated unlearning method that eliminates a client's contribution by subtracting that client's accumulated historical updates from the model and then leveraging knowledge distillation to restore the model's performance, without using any data from the clients. The method places no restrictions on the type of neural network and does not rely on clients' participation, so it is practical and efficient in an FL system. We further introduce backdoor attacks during training to help evaluate the unlearning effect. Experiments on three canonical datasets demonstrate the effectiveness and efficiency of our method.
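To make the two-step procedure concrete, the following is a minimal sketch of subtracting a client's accumulated updates and then distilling from the original global model as the teacher. All names here (`unlearn_client`, `distill`, `client_updates`, `proxy_loader`, the temperature `T`) are illustrative assumptions, not the paper's actual implementation; it assumes the server has logged each client's per-round parameter deltas and has access to some unlabeled proxy data for distillation.

```python
import copy
import torch
import torch.nn.functional as F

def unlearn_client(global_model, client_updates, target_id):
    """Subtract the target client's accumulated historical updates
    from the global model. `client_updates` is a hypothetical record
    mapping client id -> list of per-round {param_name: delta} dicts."""
    unlearned = copy.deepcopy(global_model)
    with torch.no_grad():
        for delta in client_updates[target_id]:
            for name, param in unlearned.named_parameters():
                param -= delta[name]  # remove this client's contribution
    return unlearned

def distill(teacher, student, proxy_loader, T=2.0, epochs=1, lr=1e-3):
    """Restore the skewed model's utility by distilling soft labels from
    the original global model (teacher) on unlabeled proxy data, so no
    client data is ever touched."""
    opt = torch.optim.SGD(student.parameters(), lr=lr)
    teacher.eval()
    for _ in range(epochs):
        for x in proxy_loader:  # batches of inputs only, no labels
            with torch.no_grad():
                soft = F.softmax(teacher(x) / T, dim=1)
            loss = F.kl_div(F.log_softmax(student(x) / T, dim=1),
                            soft, reduction="batchmean") * (T * T)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return student
```

In this reading, the subtraction step removes the target client's influence but skews the model, and the distillation step repairs that skew using only the teacher's soft predictions, which is why no client participation or client data is needed.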