Federated learning (FL), as a type of distributed machine learning framework, is vulnerable to external attacks on FL models during parameter transmission. An attacker in FL may control a number of participating clients and purposely craft the uploaded model parameters to manipulate the system output, which is known as model poisoning (MP). In this paper, we aim to propose effective MP algorithms that combat state-of-the-art defensive aggregation mechanisms (e.g., Krum and Trimmed mean) implemented at the server without being noticed, i.e., covert MP (CMP). Specifically, we first formulate MP as an optimization problem that minimizes the Euclidean distance between the manipulated model and a designated one, constrained by a defensive aggregation rule. Then, we develop CMP algorithms against different defensive mechanisms based on the solutions of the corresponding optimization problems. Furthermore, to reduce the optimization complexity, we propose low-complexity CMP algorithms that incur only a slight performance degradation. For the case in which the attacker does not know the defensive aggregation mechanism, we design a blind CMP algorithm, in which the manipulated model is adjusted adaptively according to the aggregated model generated by the unknown defensive aggregation. Our experimental results demonstrate that the proposed CMP algorithms are effective and substantially outperform existing attack mechanisms.
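As a minimal sketch of the formulation described above, using illustrative notation we introduce here for exposition (not necessarily the paper's own symbols): let $\tilde{\mathbf{w}}$ denote the manipulated model, $\mathbf{w}^{\star}$ the attacker's designated model, $\mathbf{w}_{2},\dots,\mathbf{w}_{n}$ the benign clients' models, and $\mathcal{A}(\cdot)$ the server's defensive aggregation rule (e.g., Krum or Trimmed mean). The CMP problem can then be written as
\begin{equation*}
\min_{\tilde{\mathbf{w}}}\ \bigl\|\tilde{\mathbf{w}} - \mathbf{w}^{\star}\bigr\|_{2}
\quad \text{s.t.} \quad
\tilde{\mathbf{w}} = \mathcal{A}\bigl(\tilde{\mathbf{w}},\, \mathbf{w}_{2},\dots,\mathbf{w}_{n}\bigr),
\end{equation*}
i.e., the crafted model stays as close as possible to the designated one, while the constraint requires that it survive the defensive aggregation (for Krum, that it be the update the server selects). The exact form of the constraint depends on the specific aggregation rule under attack.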