With privacy legislation empowering users with the right to be forgotten, it has become essential to be able to make a model forget some of its training data. We explore the problem of removing any client's contribution from a model trained with federated learning (FL). During FL rounds, each client performs local training to learn a model that minimizes the empirical loss on its private data. We propose to perform unlearning at the client to be erased by reversing the learning process, i.e., by training a model to \emph{maximize} the local empirical loss. In particular, we formulate the unlearning problem as a constrained maximization problem, restricting the model to an $\ell_2$-norm ball around a suitably chosen reference model, which helps retain some of the knowledge learnt from the other clients' data. This allows the client to use projected gradient descent to perform unlearning. The method requires neither global access to the data used for training nor that the aggregator (server) or any client store the history of parameter updates. Experiments on the MNIST dataset show that the proposed unlearning method is efficient and effective.
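The core idea above is projected gradient ascent on the erased client's local loss: negate the loss gradient step, then project the parameters back onto the $\ell_2$ ball around the reference model. Below is a minimal sketch of that loop, not the authors' implementation; the function names (`unlearn_client`, `project_to_l2_ball`) and the values of the radius `delta`, learning rate, and step count are illustrative assumptions.

```python
# Sketch of constrained loss maximization for client unlearning (PyTorch).
# Assumptions: `model` is the client's current model, `reference` is the
# chosen reference model, `loader` yields the client's local (x, y) batches.
import copy
import torch


def project_to_l2_ball(model, reference, delta):
    """Project model parameters onto the L2 ball of radius delta around reference."""
    with torch.no_grad():
        diff = torch.cat([
            (p - r).flatten()
            for p, r in zip(model.parameters(), reference.parameters())
        ])
        norm = diff.norm(p=2)
        if norm > delta:
            scale = delta / norm
            for p, r in zip(model.parameters(), reference.parameters()):
                p.copy_(r + scale * (p - r))


def unlearn_client(model, reference, loader, loss_fn, delta=1.0, lr=0.01, steps=100):
    """Gradient ascent on the local empirical loss, constrained to the ball."""
    model = copy.deepcopy(model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    batches = iter(loader)
    for _ in range(steps):
        try:
            x, y = next(batches)
        except StopIteration:
            batches = iter(loader)
            x, y = next(batches)
        opt.zero_grad()
        # Negate the loss so the SGD step maximizes the local empirical loss.
        (-loss_fn(model(x), y)).backward()
        opt.step()
        project_to_l2_ball(model, reference, delta)
    return model
```

The projection is what keeps the unlearned model from collapsing to an arbitrary high-loss point: it may move only `delta` away from the reference model, so knowledge contributed by the other clients is partially preserved.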