Federated learning (FL) is an emerging practical framework for effective and scalable machine learning among multiple participants, such as end users, organizations, and companies. However, most existing FL or distributed learning frameworks have not adequately addressed two important issues together: collaborative fairness and adversarial robustness (e.g., against free-riders and malicious participants). In conventional FL, all participants receive the same global model (equal rewards), which can be unfair to high-contributing participants. Furthermore, due to the lack of a safeguard mechanism, free-riders or malicious adversaries could game the system to access the global model for free or to sabotage it. In this paper, we propose a novel Robust and Fair Federated Learning (RFFL) framework to achieve collaborative fairness and adversarial robustness simultaneously via a reputation mechanism. RFFL maintains a reputation for each participant by examining their contributions via their uploaded gradients (using vector similarity), and thus identifies non-contributing or malicious participants to be removed. Our approach differentiates itself by not requiring any auxiliary/validation dataset. Extensive experiments on benchmark datasets show that RFFL achieves high fairness and is robust to different types of adversaries while maintaining competitive predictive accuracy.
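The reputation mechanism described above can be illustrated with a minimal sketch: each round, the server scores every participant by the cosine similarity between their uploaded gradient and the reputation-weighted aggregate, then smooths that score into a running reputation and drops participants whose reputation falls below a cutoff. The function and parameter names (`update_reputations`, the smoothing factor `alpha`, and the removal `threshold`) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np


def cosine_similarity(a, b):
    """Cosine similarity between two flattened gradient vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))


def update_reputations(reputations, uploaded_grads, alpha=0.95, threshold=1 / 3):
    """One reputation round (illustrative sketch, not the exact RFFL update).

    reputations:    dict participant_id -> current reputation (positive)
    uploaded_grads: dict participant_id -> flattened gradient (np.ndarray)
    alpha:          smoothing factor for the running reputation (assumed)
    threshold:      fraction of the uniform share below which a
                    participant is removed (assumed)
    """
    ids = list(uploaded_grads)

    # Reputation-weighted aggregate of the uploaded gradients.
    weights = np.array([reputations[i] for i in ids])
    weights = weights / weights.sum()
    agg = sum(w * uploaded_grads[i] for w, i in zip(weights, ids))

    # Smooth each participant's similarity to the aggregate into its reputation;
    # negative similarity (e.g., a sign-flipping attacker) contributes nothing.
    for i in ids:
        sim = max(cosine_similarity(uploaded_grads[i], agg), 0.0)
        reputations[i] = alpha * reputations[i] + (1 - alpha) * sim

    # Normalize reputations to sum to 1.
    total = sum(reputations.values())
    reputations = {i: r / total for i, r in reputations.items()}

    # Remove participants whose reputation drops below a fraction of the
    # uniform share 1/n -- a stand-in for the paper's removal rule.
    cutoff = threshold / len(reputations)
    return {i: r for i, r in reputations.items() if r >= cutoff}
```

Because the score is computed directly from the uploaded gradients, this check needs no auxiliary or validation dataset: a free-rider uploading uninformative gradients, or an attacker uploading sign-flipped ones, drifts toward low similarity with the aggregate and is eventually excluded.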