Federated learning is a prominent framework that enables clients (e.g., mobile devices or organizations) to collaboratively train a global model under a central server's orchestration while keeping their local training datasets private. However, the aggregation step in federated learning is vulnerable to adversarial attacks because the central server cannot control clients' behavior; under such attacks, both the global model's performance and the convergence of the training process degrade. To mitigate this vulnerability, we propose a novel robust aggregation algorithm inspired by truth inference methods in crowdsourcing, which incorporates each worker's reliability into the aggregation. We evaluate our solution on three real-world datasets with a variety of machine learning models. Experimental results show that our solution ensures robust federated learning and is resilient to various types of attacks, including noisy data attacks, Byzantine attacks, and label flipping attacks.
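To give a concrete flavor of reliability-weighted aggregation, the sketch below shows a minimal toy version in NumPy. The reliability estimate here (distance to the coordinate-wise median) and the function names are illustrative assumptions for exposition only; the abstract does not specify the paper's actual truth-inference scheme.

```python
import numpy as np

def estimate_reliability(updates):
    """Toy reliability estimate: clients whose update lies close to the
    coordinate-wise median are deemed more reliable.
    (Assumption for illustration, not the paper's method.)"""
    stacked = np.stack(updates)                      # (n_clients, n_params)
    median = np.median(stacked, axis=0)              # robust reference point
    dists = np.linalg.norm(stacked - median, axis=1)
    return 1.0 / (1.0 + dists)                       # smaller distance -> higher score

def reliability_weighted_aggregate(updates, reliabilities):
    """Aggregate client updates as a convex combination weighted by
    each client's estimated reliability."""
    weights = np.asarray(reliabilities, dtype=float)
    weights = weights / weights.sum()                # normalize weights to sum to 1
    return weights @ np.stack(updates)

# Example: two honest clients and one Byzantine client sending an outlier update
honest1 = np.array([1.0, 1.0])
honest2 = np.array([1.1, 0.9])
byzantine = np.array([100.0, -100.0])
updates = [honest1, honest2, byzantine]

rel = estimate_reliability(updates)
agg = reliability_weighted_aggregate(updates, rel)
# The Byzantine client receives a much lower weight, so `agg` stays
# close to the honest clients' updates.
```

In this toy setup the Byzantine client's reliability score collapses, which bounds its influence on the aggregated model; the paper's crowdsourcing-inspired method estimates reliability in a more principled way.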