Secure aggregation is a cryptographic protocol that securely computes the aggregation of its inputs. It is pivotal in keeping model updates private in federated learning. Indeed, the use of secure aggregation prevents the server from learning the value and the source of the individual model updates provided by the users, hampering inference and data attribution attacks. In this work, we show that a malicious server can easily elude secure aggregation as if it were not in place. We devise two different attacks capable of inferring information about individual private training datasets, regardless of the number of users participating in the secure aggregation. This makes them concrete threats in large-scale, real-world federated learning applications. The attacks are generic and do not target any specific secure aggregation protocol; they remain equally effective even if the secure aggregation protocol is replaced by its ideal functionality, which provides perfect security. Our work demonstrates that secure aggregation has been incorrectly combined with federated learning and that current implementations offer only a "false sense of security".
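To make the guarantee at stake concrete, the following is a minimal sketch of the pairwise-masking idea behind secure aggregation (in the spirit of Bonawitz et al.'s protocol): each user blinds its update with masks that cancel in the sum, so the server recovers only the aggregate and learns nothing about any individual contribution. All names are illustrative; this toy version omits key agreement, dropout handling, and integrity checks, and it stands in for the ideal functionality discussed above rather than any specific protocol.

```python
# Toy pairwise-masking secure aggregation (illustrative only).
# Updates are encoded as integers modulo a large value; each pair of
# users shares a seed, and the resulting masks cancel in the sum.
import random

MOD = 2**32

def mask_update(user_id, update, peer_ids, shared_seeds):
    """Blind one user's update with a mask per peer; masks cancel in the sum."""
    masked = update % MOD
    for peer in peer_ids:
        if peer == user_id:
            continue
        rng = random.Random(shared_seeds[frozenset((user_id, peer))])
        mask = rng.randrange(MOD)
        # A canonical sign convention ensures each pairwise mask cancels:
        # the lower-id user adds the mask, the higher-id user subtracts it.
        masked = (masked + mask) % MOD if user_id < peer else (masked - mask) % MOD
    return masked

# Demo: three users with private integer-encoded updates.
users = [1, 2, 3]
updates = {1: 10, 2: 20, 3: 30}
seeds = {frozenset(pair): random.randrange(2**62)
         for pair in [(1, 2), (1, 3), (2, 3)]}

masked = {u: mask_update(u, updates[u], users, seeds) for u in users}

# The server sees only the masked values, each of which looks random...
aggregate = sum(masked.values()) % MOD
# ...yet the pairwise masks cancel, so the sum equals 10 + 20 + 30 = 60.
assert aggregate == sum(updates.values()) % MOD
print(aggregate)
```

Even this idealized functionality is what the paper's attacks circumvent: the server learns only the sum per round, but a malicious server can still manipulate the surrounding federated learning protocol to extract individual information.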