Federated learning (FL) enables multiple clients to collaboratively train a machine learning model without exchanging their local data. Federated unlearning is the inverse process: it aims to remove a specified target client's contribution from the trained FL model in order to satisfy that user's right to be forgotten. Most existing federated unlearning algorithms require the server to store the history of parameter updates, which is not applicable when the server's storage resources are constrained. In this paper, we propose a simple yet effective subspace-based federated unlearning method, dubbed SFU, which lets the global model perform gradient ascent in the orthogonal complement of the input gradient subspace formed by the remaining clients, thereby eliminating the target client's contribution without requiring additional storage. Specifically, the server first collects the gradients generated by the target client after it performs gradient ascent, while the remaining clients locally compute their input representation matrices. We also design a differential privacy method to protect the privacy of these representation matrices. The server then merges the representation matrices to obtain the input gradient subspace and updates the global model in the orthogonal subspace of that input gradient subspace, completing the forgetting task with minimal degradation of model performance. Experiments on MNIST, CIFAR10, and CIFAR100 show that SFU outperforms several state-of-the-art (SOTA) federated unlearning algorithms by a large margin in various settings.
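To make the update rule concrete, the following is a minimal NumPy sketch (not the authors' implementation) of the orthogonal-subspace step described above, assuming flattened parameters, a single merged representation matrix per client, and hypothetical shapes and hyperparameters (`lr`, `rank`):

```python
import numpy as np

def orthogonal_unlearning_step(w, g_ascent, rep_matrices, lr=0.1, rank=10):
    """Apply the target client's gradient-ascent direction `g_ascent` to the
    (flattened) global model `w`, but only along the orthogonal complement of
    the input gradient subspace spanned by the remaining clients'
    representation matrices (a sketch of the SFU idea, not the exact method)."""
    # Merge the remaining clients' representation matrices column-wise.
    R = np.concatenate(rep_matrices, axis=1)          # shape (d, sum_i n_i)
    # Left singular vectors give an orthonormal basis of the input subspace.
    U, _, _ = np.linalg.svd(R, full_matrices=False)
    B = U[:, :rank]                                   # (d, rank) subspace basis
    # Remove the component of the ascent gradient that lies in that subspace.
    g_orth = g_ascent - B @ (B.T @ g_ascent)
    # Gradient *ascent* step in the orthogonal subspace (the forgetting update).
    return w + lr * g_orth

# Example usage with random placeholder data (d = 64 is an assumed dimension).
d = 64
w = np.random.randn(d)
g_ascent = np.random.randn(d)                          # from the target client
rep_matrices = [np.random.randn(d, 8) for _ in range(5)]  # 5 remaining clients
w_new = orthogonal_unlearning_step(w, g_ascent, rep_matrices)
```

Because the ascent direction is projected away from the subspace that matters for the remaining clients' inputs, the forgetting update perturbs their predictions as little as possible, which is the intuition behind the minimal performance degradation claimed above.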