The aim of Machine Unlearning (MU) is to provide theoretical guarantees on the removal of the contribution of a given data point from a training procedure. Federated Unlearning (FU) extends MU to the removal of a given client's contribution from a federated training routine. Current FU approaches are generally not scalable and do not come with sound theoretical quantification of the effectiveness of unlearning. In this work we present Informed Federated Unlearning (IFU), a novel, efficient, and quantifiable FU approach. Upon an unlearning request from a given client, IFU identifies the optimal FL iteration from which training has to be reinitialized, with unlearning guarantees obtained through a randomized perturbation mechanism. The theory of IFU is also extended to account for sequential unlearning requests. Experimental results on different tasks and datasets show that IFU leads to more efficient unlearning procedures compared with basic retraining and state-of-the-art FU approaches.
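To make the high-level mechanism concrete, the following minimal Python sketch illustrates the rollback-and-perturb idea described in the abstract. It is not the authors' exact algorithm: the names `checkpoints`, `influence_bound`, `epsilon`, and `noise_scale` are hypothetical, and the per-round bookkeeping of each client's influence bound is assumed to be available from training.

```python
# Hedged sketch of the IFU idea, NOT the paper's exact procedure.
# Assumption: during FL training, the server stores one checkpoint per round
# together with a bound on each client's contribution to that checkpoint.
import numpy as np

def ifu_rollback(checkpoints, influence_bound, client_id, epsilon, noise_scale):
    """checkpoints: list of global model parameter vectors, one per FL round.
    influence_bound[t][client_id]: stored bound on the client's contribution
    to the round-t model (hypothetical bookkeeping).
    Returns a perturbed model and the round index to restart training from."""
    # Pick the latest round whose bound on the requesting client's
    # contribution still fits the unlearning budget epsilon.
    t_star = max(
        (t for t in range(len(checkpoints))
         if influence_bound[t][client_id] <= epsilon),
        default=0,
    )
    model = np.asarray(checkpoints[t_star], dtype=float)
    # Randomized perturbation: Gaussian noise scaled to the residual bound,
    # intended to statistically mask the client's remaining influence.
    sigma = noise_scale * influence_bound[t_star][client_id]
    perturbed = model + np.random.normal(0.0, sigma, size=model.shape)
    return perturbed, t_star
```

Federated training would then resume from `perturbed` at round `t_star` with the unlearning client excluded, rather than retraining from scratch.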