Federated Learning (FL) is a paradigm in Machine Learning (ML) that addresses issues of data privacy, security, access rights, and access to heterogeneous data by training a global model using distributed nodes. Despite its advantages, FL-based ML techniques face an increased potential for cyberattacks that can undermine these benefits. Model-poisoning attacks on FL target the availability of the model: the adversarial objective is to disrupt training. We propose attestedFL, a defense mechanism that monitors the training of individual nodes through state persistence in order to detect malicious workers. A fine-grained assessment of each worker's history permits the evaluation of its behavior over time and enables novel detection strategies. We present three lines of defense that assess whether a worker is reliable by observing whether the node is really training, i.e., advancing towards a goal. Our defense exposes an attacker's malicious behavior and removes unreliable nodes from the aggregation process so that the FL process converges faster. Through extensive evaluations and against various adversarial settings, attestedFL increased model accuracy by 12% to 58% under different scenarios, such as attacks performed at different stages of convergence, colluding attackers, and continuous attacks.
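To make the idea of assessing whether a node is "really training" more concrete, below is a minimal Python sketch of one plausible per-worker reliability check consistent with the abstract's description. It is not the paper's specification of attestedFL's three lines of defense: the class `WorkerHistory`, the function `filtered_fedavg`, the history window, and the similarity threshold are all illustrative assumptions, and model parameters are assumed to be flattened NumPy vectors.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two flattened update vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

class WorkerHistory:
    """Persists a worker's recent state so its behavior can be judged over time,
    not from a single round (hypothetical sketch, not the paper's algorithm)."""
    def __init__(self, window=5):
        self.window = window
        self.updates = []          # recent flattened updates, most recent last
        self.dist_to_global = []   # Euclidean distance of local model to global model

    def record(self, update, local_model, global_model):
        self.updates.append(update)
        self.dist_to_global.append(float(np.linalg.norm(local_model - global_model)))
        # keep only the last `window` rounds of history
        self.updates = self.updates[-self.window:]
        self.dist_to_global = self.dist_to_global[-self.window:]

    def is_reliable(self, sim_threshold=0.0):
        """Heuristic: a worker that is really training should (i) move closer to
        the global model over the window and (ii) submit updates that point in
        a broadly consistent direction from round to round."""
        if len(self.updates) < self.window:
            return True  # not enough history yet; keep the worker for now
        converging = self.dist_to_global[-1] <= self.dist_to_global[0]
        sims = [cosine(a, b) for a, b in zip(self.updates, self.updates[1:])]
        consistent = float(np.mean(sims)) > sim_threshold
        return converging and consistent

def filtered_fedavg(global_model, submissions, histories):
    """Aggregate only updates from workers whose history passes the check."""
    kept = [u for wid, u in submissions.items() if histories[wid].is_reliable()]
    if not kept:
        return global_model  # no reliable submissions this round
    return global_model + np.mean(kept, axis=0)
```

The design point this sketch illustrates is the one the abstract emphasizes: judging a worker from its persisted history over several rounds, rather than from a single update, can expose attackers whose individual submissions look plausible but who never actually advance towards a training goal.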