Federated learning (FL) goes beyond traditional, centralized machine learning by distributing model training across a large collection of edge clients. These clients cooperatively train a global (e.g., cloud-hosted) model without disclosing their local, private training data. The global model is then shared with all participants, who use it for local predictions. In this paper, we put forward a novel attacker model that turns FL systems into covert channels implementing a stealthy communication infrastructure. The main intuition is that, during federated training, a malicious sender can poison the global model by submitting purposely crafted examples. Although the effect of this poisoning is negligible to the other participants and does not alter overall model performance, it can be observed by a malicious receiver and used to transmit a single bit.