Federated Learning (FL) has recently emerged as a paradigm for training a global machine learning model across distributed clients without sharing raw data. Knowledge Graph (KG) embedding represents KGs in a continuous vector space, serving as the backbone of many knowledge-driven applications. As a promising combination, federated KG embedding can take full advantage of the knowledge learned from different clients while preserving the privacy of local data. However, realistic problems such as data heterogeneity and knowledge forgetting remain to be addressed. In this paper, we propose FedLU, a novel FL framework for heterogeneous KG embedding learning and unlearning. To cope with the drift between local optimization and global convergence caused by data heterogeneity, we propose mutual knowledge distillation to transfer local knowledge to the global model and absorb global knowledge back into the local models. Moreover, we present an unlearning method based on cognitive neuroscience, which combines retroactive interference and passive decay to erase specific knowledge from local clients and propagate this forgetting to the global model by reusing knowledge distillation. We construct new datasets for assessing the realistic performance of state-of-the-art methods. Extensive experiments show that FedLU achieves superior results in both link prediction and knowledge forgetting.
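The mutual knowledge distillation mentioned above can be pictured as two KL-divergence terms computed on the same local queries, one per direction: the local model learns from the global model's predictions, and the global model absorbs the local model's predictions in return. Below is a minimal PyTorch sketch under that assumption; the function name `mutual_kd_losses`, the softmax-over-candidate-entities formulation, and the temperature parameter are illustrative choices, not FedLU's exact objective.

```python
import torch
import torch.nn.functional as F

def mutual_kd_losses(local_scores, global_scores, temperature=1.0):
    """Scores have shape [batch, num_candidates], e.g. the plausibility of
    every candidate entity for a batch of (head, relation, ?) queries.
    Returns two KL terms: one distilling global knowledge into the local
    model, and one feeding local knowledge back into the global model."""
    local_log_p = F.log_softmax(local_scores / temperature, dim=-1)
    global_log_p = F.log_softmax(global_scores / temperature, dim=-1)

    # Local model learns from the (detached) global predictions.
    kd_local = F.kl_div(local_log_p, global_log_p.detach().exp(),
                        reduction="batchmean") * temperature ** 2
    # Global model absorbs the (detached) local predictions.
    kd_global = F.kl_div(global_log_p, local_log_p.detach().exp(),
                         reduction="batchmean") * temperature ** 2
    return kd_local, kd_global
```

In this reading, `kd_local` would be added to each client's task loss during local training, while `kd_global` would be used when the server-side model is updated, so that knowledge flows in both directions despite heterogeneous local data.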
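The unlearning method summarized above combines two mechanisms borrowed from cognitive neuroscience. A hedged sketch of one possible instantiation is given below: "retroactive interference" is modeled as actively suppressing the scores of triples marked for forgetting while keeping retained triples intact, and "passive decay" as an exponentially fading weight on that suppression across communication rounds. All names, the margin-based suppression term, and the decay schedule are assumptions for illustration, not FedLU's actual formulation.

```python
import torch
import torch.nn.functional as F

def unlearning_loss(forget_scores, retain_scores, retain_targets,
                    round_idx, decay_rate=0.9, margin=1.0):
    """forget_scores: scores of triples to erase;
    retain_scores / retain_targets: scores and binary labels of triples
    whose knowledge should be preserved during forgetting."""
    # Retroactive interference: push scores of forgotten triples below a margin.
    interference = F.relu(forget_scores + margin).mean()
    # Passive decay: the suppression pressure fades as rounds go on.
    decay_weight = decay_rate ** round_idx
    # Retention term keeps the remaining knowledge intact.
    retention = F.binary_cross_entropy_with_logits(retain_scores, retain_targets)
    return decay_weight * interference + retention
```

Under this reading, each client applies the loss locally to erase its own triples, and the resulting local model is then distilled into the global model (reusing the knowledge distillation step sketched earlier) so that the forgetting propagates without sharing raw data.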