Decentralized federated learning (DFL) enables clients (e.g., hospitals and banks) to jointly train machine learning models without a central orchestration server. In each global training round, every client trains a local model on its own training data, and the clients then exchange their local models with each other for aggregation. In this work, we propose SelfishAttack, a new family of attacks against DFL. In SelfishAttack, a set of selfish clients aims to achieve competitive advantages over the remaining non-selfish ones, i.e., the final learnt local models of the selfish clients are more accurate than those of the non-selfish ones. Towards this goal, the selfish clients send carefully crafted local models to each non-selfish client in each global training round. We formulate finding such local models as an optimization problem and propose methods to solve it when DFL uses different aggregation rules. Theoretically, we show that our methods find the optimal solutions to the optimization problem. Empirically, we show that SelfishAttack successfully increases the accuracy gap (i.e., competitive advantage) between the final learnt local models of the selfish clients and those of the non-selfish ones. Moreover, SelfishAttack achieves larger accuracy gaps than existing poisoning attacks when they are extended to increase competitive advantages.
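To make the attack surface concrete, the following is a minimal conceptual sketch of one DFL training round in which selfish clients send crafted models to non-selfish peers. It is an illustration under assumptions, not the paper's actual algorithm: `craft_selfish_model` is a hypothetical placeholder for the attack's optimization step, and plain averaging stands in for whatever aggregation rule a deployment actually uses.

```python
import numpy as np

def average_aggregate(models):
    """One possible aggregation rule: element-wise average of parameter vectors."""
    return np.mean(models, axis=0)

def dfl_round(local_models, selfish_ids, craft_selfish_model):
    """One global round: every client aggregates the local models it receives.

    local_models: list of parameter vectors, one per client.
    selfish_ids: set of indices of selfish clients.
    craft_selfish_model: hypothetical attack routine that returns the crafted
        model a selfish client sends to a given non-selfish target.
    """
    n = len(local_models)
    new_models = []
    for i in range(n):
        received = []
        for j in range(n):
            if j == i:
                received.append(local_models[i])          # client's own model
            elif j in selfish_ids and i not in selfish_ids:
                # A selfish client sends a carefully crafted model to a
                # non-selfish peer instead of its true local model.
                received.append(craft_selfish_model(local_models, target=i))
            else:
                received.append(local_models[j])          # honest exchange
        new_models.append(average_aggregate(received))
    return new_models
```

In this sketch, the crafted models only reach non-selfish clients; among themselves, selfish clients exchange their true local models, which is how the accuracy gap (competitive advantage) can grow over rounds.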