Federated learning (FL) is a collaborative learning paradigm where participants jointly train a powerful model without sharing their private data. One desirable property for FL is the implementation of the right to be forgotten (RTBF), i.e., a leaving participant has the right to request the deletion of its private data from the global model. However, unlearning itself may not be enough to implement RTBF unless the unlearning effect can be independently verified, an important aspect that has been overlooked in the current literature. In this paper, we propose the concept of verifiable federated unlearning, and present VeriFi, a unified framework integrating federated unlearning and verification that allows systematic analysis of the unlearning process and quantification of its effect, with different combinations of multiple unlearning and verification methods. In VeriFi, the leaving participant is granted the right to verify (RTV): the participant notifies the server before leaving, then actively verifies the unlearning effect in the next few communication rounds. The unlearning is done on the server side immediately after receiving the leaving notification, while the verification is done locally by the leaving participant via two steps: marking (injecting carefully designed markers to fingerprint the leaver) and checking (examining the change in the global model's performance on the markers). Based on VeriFi, we conduct the first systematic and large-scale study of verifiable federated unlearning, considering 7 unlearning methods and 5 verification methods. In particular, we propose a more efficient and FL-friendly unlearning method, and two more effective and robust non-invasive verification methods. We extensively evaluate VeriFi on 7 datasets and 4 types of deep learning models. Our analysis establishes important empirical understandings for more trustworthy federated unlearning.
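The two-step verification described above (marking, then checking) can be illustrated with a minimal toy sketch. This is a hypothetical illustration of the general marker-based idea, not the paper's actual implementation: the "global model" is reduced to a set of memorized records, the leaver designates some of its own records as markers, and a large accuracy drop on those markers after server-side unlearning signals that the unlearning took effect. All names (`marker_accuracy`, `leaver_data`, the 0.5 threshold) are illustrative assumptions.

```python
import random

def marker_accuracy(model_memory, markers):
    """Fraction of markers the (toy) global model still 'remembers'."""
    return sum(m in model_memory for m in markers) / len(markers)

# Toy global model: a set of memorized training records (a stand-in for
# the real model's learned parameters).
leaver_data = {f"leaver_sample_{i}" for i in range(100)}
other_data = {f"other_sample_{i}" for i in range(900)}
global_model = leaver_data | other_data

# Step 1 (marking): the leaver fingerprints itself by designating some of
# its own records as markers before notifying the server.
markers = random.sample(sorted(leaver_data), 20)
acc_before = marker_accuracy(global_model, markers)  # 1.0: fully memorized

# Server-side unlearning: remove the leaver's contribution from the model.
global_model -= leaver_data

# Step 2 (checking): the leaver re-evaluates the markers locally; a large
# drop indicates its data was actually unlearned.
acc_after = marker_accuracy(global_model, markers)   # 0.0 in this toy setup
unlearned = (acc_before - acc_after) > 0.5           # illustrative threshold
print(acc_before, acc_after, unlearned)              # 1.0 0.0 True
```

In the real framework the markers are carefully designed inputs evaluated against the actual global model across communication rounds, but the before/after comparison logic follows this shape.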