While recent works have indicated that federated learning (FL) is vulnerable to poisoning attacks by compromised clients, we show that these works make a number of unrealistic assumptions and arrive at somewhat misleading conclusions. For instance, they often use impractically high percentages of compromised clients or assume unrealistic capabilities for the adversary. We perform the first critical analysis of poisoning attacks under practical production FL environments by carefully characterizing the set of realistic threat models and adversarial capabilities. Our findings are rather surprising: contrary to the established belief, we show that FL, even without any defenses, is highly robust in practice. In fact, we go even further and propose novel, state-of-the-art poisoning attacks under two realistic threat models, and show via an extensive set of experiments across three benchmark datasets how (in)effective poisoning attacks are, especially when simple defense mechanisms are used. We correct previous misconceptions and give concrete guidelines that we hope will encourage our community to conduct more accurate research in this space and build stronger (and more realistic) attacks and defenses.