The aim of Machine Unlearning (MU) is to provide theoretical guarantees on the removal of the contribution of a given data point from a training procedure. Federated Unlearning (FU) extends MU to the removal of a given client's contribution from a federated training routine. Current FU approaches are generally not scalable and do not come with sound theoretical quantification of the effectiveness of unlearning. In this work we present Informed Federated Unlearning (IFU), a novel, efficient, and quantifiable FU approach. Upon an unlearning request from a given client, IFU identifies the optimal FL iteration from which FL has to be reinitialized, with unlearning guarantees obtained through a randomized perturbation mechanism. The theory of IFU is also extended to account for sequential unlearning requests. Experimental results on different tasks and datasets show that IFU leads to more efficient unlearning procedures compared to basic re-training and state-of-the-art FU approaches.
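The overall workflow described above (roll back to an informed FL iteration, apply a randomized perturbation, then resume training with the remaining clients) could be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's actual algorithm: the names `history`, `sensitivity_fn`, `sigma`, `aggregate_fn`, and the `local_update` client method are hypothetical placeholders, and the rollback criterion is only an assumed stand-in for IFU's actual unlearning-budget test.

```python
import copy
import numpy as np

def informed_federated_unlearning(history, unlearn_client, sensitivity_fn,
                                  sigma, remaining_clients, finetune_rounds,
                                  aggregate_fn):
    """Hypothetical sketch of an IFU-style unlearning workflow.

    history: list of per-round records, each assumed to hold the global model
             (a dict of numpy arrays) and per-client contribution statistics.
    """
    # 1) Find the latest round up to which the target client's recorded
    #    contribution stays within what the perturbation can mask
    #    (assumed criterion, not the paper's exact bound).
    rollback_round = 0
    for t, record in enumerate(history):
        if sensitivity_fn(record, unlearn_client) > sigma:
            break
        rollback_round = t

    # 2) Reinitialize from the global model stored at that round.
    model = copy.deepcopy(history[rollback_round]["global_model"])

    # 3) Randomized (Gaussian) perturbation to obtain unlearning guarantees.
    for name, w in model.items():
        model[name] = w + np.random.normal(0.0, sigma, size=w.shape)

    # 4) Resume federated training with the remaining clients only.
    for _ in range(finetune_rounds):
        client_updates = [c.local_update(model) for c in remaining_clients]
        model = aggregate_fn(model, client_updates)
    return model
```

The key design point illustrated here is that, unlike naive re-training from scratch, the procedure restarts from an intermediate global model chosen using information recorded during training, so only the rounds after the rollback point need to be repeated.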