Secure aggregation is a cryptographic protocol that securely computes the aggregation of its inputs. It is pivotal in keeping model updates private in federated learning. Indeed, the use of secure aggregation prevents the server from learning the value and the source of the individual model updates provided by the users, hampering inference and data attribution attacks. In this work, we show that a malicious server can easily elude secure aggregation as if the latter were not in place. We devise two different attacks capable of inferring information on individual private training datasets, independently of the number of users participating in the secure aggregation. This makes them concrete threats in large-scale, real-world federated learning applications. The attacks are generic and equally effective regardless of the secure aggregation protocol used. They exploit a vulnerability of the federated learning protocol caused by incorrect usage of secure aggregation and lack of parameter validation. Our work demonstrates that current implementations of federated learning with secure aggregation offer only a "false sense of security".
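To make the privacy guarantee at stake concrete, below is a toy sketch of one common secure-aggregation construction, pairwise additive masking (in the spirit of Bonawitz et al.'s protocol): each user blinds its model update with random masks that cancel in the sum, so the server learns the aggregate but not any individual update. All names, parameters, and the mask distribution here are illustrative assumptions, not this paper's setup or a production protocol.

```python
# Minimal pairwise-masking secure aggregation sketch (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n_users, dim = 4, 8

# Each user's private model update.
updates = [rng.normal(size=dim) for _ in range(n_users)]

# One shared random mask per user pair (i, j) with i < j.
masks = {(i, j): rng.normal(size=dim)
         for i in range(n_users) for j in range(i + 1, n_users)}

def masked_update(i):
    """User i adds the mask for each partner j > i and subtracts it for j < i,
    so every mask appears once with each sign across all users."""
    y = updates[i].copy()
    for j in range(n_users):
        if i < j:
            y += masks[(i, j)]
        elif j < i:
            y -= masks[(j, i)]
    return y

# The server only ever sees the masked updates...
received = [masked_update(i) for i in range(n_users)]

# ...yet their sum equals the true aggregate, because the masks cancel.
assert np.allclose(sum(received), sum(updates))
```

Each masked update is statistically blinded by the masks, which is exactly the property the attacks described in this work circumvent: not by breaking the cryptography, but by exploiting how federated learning uses it.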