Federated learning enables a global machine learning model to be trained collaboratively by distributed, mutually non-trusting learning agents who wish to keep their training data and hardware private. A global model is distributed to clients, who perform training and submit their newly trained models to be aggregated into a superior global model. However, federated learning systems are vulnerable to interference from malicious learning agents who may seek to prevent training or to induce targeted misclassification in the resulting global model. A class of Byzantine-tolerant aggregation algorithms has emerged, offering varying degrees of robustness against these attacks, often with the caveat that the number of attackers is bounded by some quantity known prior to training. This paper presents Simeon: a novel approach to aggregation that applies a reputation-based iterative filtering technique to achieve robustness even in the presence of attackers who can exhibit arbitrary behaviour. We compare Simeon to state-of-the-art aggregation techniques and find that it achieves comparable or superior robustness against a variety of attacks. Notably, we show that Simeon tolerates sybil attacks where other algorithms do not, a key advantage of our approach.