In this work, we carry out the first in-depth privacy analysis of Decentralized Learning -- a collaborative machine learning framework aimed at circumventing the main limitations of federated learning. We identify the properties of decentralized learning that affect users' privacy, and we introduce a suite of novel attacks for both passive and active decentralized adversaries. We demonstrate that, contrary to the claims of its proponents, decentralized learning does not offer any security advantage over more practical approaches such as federated learning. Rather, it tends to degrade users' privacy by increasing the attack surface, enabling any user in the system to mount powerful privacy attacks such as gradient inversion, and even to gain full control over honest users' local models. We also show that, given the current state of the art in defenses, privacy-preserving configurations of decentralized learning require abandoning any possible advantage over the federated setup, entirely defeating the purpose of the decentralized approach.
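To illustrate the kind of attack the abstract refers to, the sketch below shows the simplest analytic form of gradient inversion, not the attack developed in this work: for a single logistic layer with a bias trained on one example, the gradient with respect to the bias factors out of the gradient with respect to the weights, so an adversary who observes a user's shared gradient can recover the private input exactly. All variable names here are illustrative assumptions.

```python
import numpy as np

# Honest user: one gradient step of binary cross-entropy loss for a
# logistic model p = sigmoid(w.x + b) on a single private example (x, y).
rng = np.random.default_rng(0)
x = rng.normal(size=5)            # private input (what the attacker wants)
y = 1.0                           # private label
w = rng.normal(size=5)            # shared model weights
b = 0.1                           # shared model bias

z = w @ x + b
p = 1.0 / (1.0 + np.exp(-z))      # model output
grad_w = (p - y) * x              # dL/dw, shared with neighbors
grad_b = (p - y)                  # dL/db, shared with neighbors

# Adversary: observes only (grad_w, grad_b) and the model, yet recovers
# the private input exactly, since grad_w = grad_b * x.
x_reconstructed = grad_w / grad_b
```

Deep networks require an iterative optimization over a dummy input instead of this closed form, but the underlying leakage channel, gradients shared in the clear, is the same one decentralized learning exposes to every participant.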