Federated learning is an emerging distributed learning framework that enables distributed participants to collaboratively train a shared global model while preserving their privacy. However, federated learning systems are vulnerable to Byzantine attacks from malicious participants, who can upload carefully crafted local model updates to degrade the quality of the global model or even plant a backdoor. While this problem has received significant attention recently, existing defensive schemes rely heavily on various assumptions, such as a fixed Byzantine model, availability of participants' local data, a minority of attackers, an IID data distribution, etc. To relax these constraints, this paper presents Robust-FL, the first prediction-based Byzantine-robust federated learning scheme that requires none of these assumptions. The core idea of Robust-FL is to exploit historical global models to construct an estimator, against which local models are filtered through similarity detection. We then cluster the local models to adaptively adjust the acceptable difference between each local model and the estimator, so that Byzantine users can be identified. Extensive experiments over different datasets show that our approach achieves the following advantages simultaneously: (i) independence of participants' local data, (ii) tolerance of a majority of attackers, and (iii) generalization to variable Byzantine models.
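To make the pipeline concrete, below is a minimal Python sketch of prediction-based filtering under loudly labeled assumptions: the estimator is a simple linear extrapolation of the last two global models, similarity is measured by cosine distance, and the adaptive threshold comes from a 1-D two-means split of the distances. The names (`predict_global_model`, `filter_updates`) and these concrete choices are illustrative assumptions, not necessarily the paper's exact method.

```python
import numpy as np

def predict_global_model(history):
    """Estimate the next global model by linearly extrapolating the last
    two rounds (an assumed stand-in for the paper's estimator)."""
    if len(history) < 2:
        return history[-1]
    return history[-1] + (history[-1] - history[-2])

def cosine_distance(a, b):
    """1 - cosine similarity between two flattened model vectors."""
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def filter_updates(local_models, history):
    """Keep local models whose distance to the estimator falls into the
    low-distance cluster of a 1-D two-means split; the split acts as the
    adaptive 'acceptable difference'. Because the reference is a predicted
    model rather than a majority vote, a colluding majority that drifts far
    from the prediction is still rejected."""
    estimator = predict_global_model(history)
    d = np.array([cosine_distance(m, estimator) for m in local_models])

    # 1-D two-means clustering of the distances.
    lo, hi = d.min(), d.max()
    for _ in range(20):
        assign = np.abs(d - lo) <= np.abs(d - hi)  # True -> near cluster
        if assign.all() or not assign.any():
            break  # degenerate split: all distances fall in one cluster
        lo, hi = d[assign].mean(), d[~assign].mean()
    benign = np.abs(d - lo) <= np.abs(d - hi)
    return [m for m, keep in zip(local_models, benign) if keep]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    history = [rng.normal(size=10), rng.normal(size=10)]
    target = predict_global_model(history)
    honest = [target + 0.01 * rng.normal(size=10) for _ in range(4)]
    byzantine = [10.0 * rng.normal(size=10) for _ in range(6)]  # majority attackers
    kept = filter_updates(honest + byzantine, history)
    print(f"kept {len(kept)} of {len(honest) + len(byzantine)} local models")
```

Clustering the distances instead of fixing a cutoff is what makes the threshold adaptive: as training converges and honest updates tighten around the prediction, the acceptable difference shrinks with them.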