Federated Learning (FL) has emerged as a promising, practical framework for effective and scalable distributed machine learning. However, most existing FL or distributed learning frameworks have not jointly addressed two important issues: collaborative fairness and robustness to non-contributing participants (e.g., free-riders and adversaries). In particular, all participants receive the same access to the global model, which is unfair to the high-contributing participants. Furthermore, due to the lack of a safeguard mechanism, free-riders or malicious adversaries can game the system to access the global model for free or to sabotage it. By identifying the underlying similarity between these two issues, we investigate them jointly and propose a novel Robust and Fair Federated Learning (RFFL) framework that uses reputation scores to address both, ensuring that high-contributing participants are rewarded with high-performing models while low- or non-contributing participants are detected and removed. Moreover, our approach does not require any auxiliary dataset for the reputation calculation. Extensive experiments on benchmark datasets demonstrate that RFFL achieves high fairness, is robust against several types of adversaries, delivers accuracy comparable to the conventional federated framework, and outperforms the Standalone framework.
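To make the reputation idea concrete, the following is a minimal, hypothetical sketch (not the paper's exact algorithm) of reputation-based aggregation: each participant's reputation is updated from the cosine similarity between its uploaded gradient and the reputation-weighted aggregate, so no auxiliary dataset is needed, and participants whose reputation falls below a relative threshold are removed. The function name, the moving-average coefficient `alpha`, and the `threshold` rule are illustrative assumptions.

```python
import numpy as np


def update_reputations(grads, reputations, alpha=0.9, threshold=1 / 3):
    """One round of reputation-based aggregation (illustrative sketch only).

    grads:       dict of participant id -> flattened gradient vector
    reputations: dict of participant id -> current (normalized) reputation
    Returns the aggregated gradient and the updated reputations, with
    low-reputation participants removed.
    """
    ids = list(grads)
    # Reputation-weighted aggregate of the uploaded gradients.
    weights = np.array([reputations[i] for i in ids])
    weights = weights / weights.sum()
    agg = sum(w * grads[i] for w, i in zip(weights, ids))

    # Score each participant by the cosine similarity between its gradient
    # and the aggregate -- a contribution proxy needing no auxiliary data.
    new_rep = {}
    for i in ids:
        g = grads[i]
        cos = float(g @ agg / (np.linalg.norm(g) * np.linalg.norm(agg) + 1e-12))
        # Moving average; negative similarity earns no credit.
        new_rep[i] = alpha * reputations[i] + (1 - alpha) * max(cos, 0.0)

    # Normalize, then drop participants below a relative threshold.
    total = sum(new_rep.values())
    new_rep = {i: r / total for i, r in new_rep.items()}
    cutoff = threshold / len(ids)
    new_rep = {i: r for i, r in new_rep.items() if r >= cutoff}
    return agg, new_rep
```

Under this sketch, a participant whose gradients consistently oppose the aggregate (e.g., a sign-flipping adversary) sees its reputation decay geometrically until it is excluded, while honest participants retain access to the high-quality aggregate.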