Security and privacy are important concerns in machine learning. End-user devices often contain a wealth of data, and this information is sensitive and should not be shared with servers or enterprises. As a result, federated learning was introduced to enable machine learning over large decentralized datasets while promising privacy by eliminating the need for data sharing. However, prior work has shown that shared gradients often contain private information, and attackers can gain knowledge either through malicious modification of the architecture and parameters or by using optimization to approximate user data from the shared gradients. Despite this, most attacks have so far been limited in the number of clients they can scale to, especially failing when client gradients are aggregated together using secure model aggregation. The attacks that still function are strongly limited in the number of clients attacked, the number of training samples they leak, or the number of training iterations they require. In this work, we introduce MANDRAKE, an attack that overcomes previous limitations to directly leak large amounts of client data even under secure aggregation across large numbers of clients. Furthermore, we break the anonymity of aggregation, as the leaked data is identifiable and directly tied back to the clients it came from. We show that by sending clients customized convolutional parameters, the weight gradients of data points from different clients remain separate through aggregation. Under aggregation across many clients, prior work could leak less than 1% of images; with the same number of non-zero parameters, and using only a single training iteration, MANDRAKE leaks 70-80% of data samples.
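A minimal sketch of the separation mechanism the abstract describes, not the paper's actual construction: it assumes PyTorch, and the names NUM_CLIENTS and CHANNELS_PER_CLIENT are hypothetical. If the server zeroes every convolutional filter except a client-specific slice, that client's weight gradient is nonzero only in its own slice, so summing gradients (as secure aggregation does) leaves each client's contribution intact and attributable.

```python
# Hypothetical illustration (NOT the paper's construction): disjoint per-client
# filter slices keep gradient contributions separable under a summed aggregate.
import torch
import torch.nn as nn

NUM_CLIENTS = 4          # hypothetical number of clients in the aggregate
CHANNELS_PER_CLIENT = 2  # hypothetical per-client slice of conv filters

def client_gradient(client_id, data):
    """Weight gradient of a conv layer customized by the server for one client."""
    conv = nn.Conv2d(1, NUM_CLIENTS * CHANNELS_PER_CLIENT, kernel_size=3, bias=False)
    # Server-chosen weights: zero everywhere except this client's filter slice,
    # so only that slice can produce output and thus receive a nonzero gradient.
    with torch.no_grad():
        conv.weight.zero_()
        lo = client_id * CHANNELS_PER_CLIENT
        conv.weight[lo:lo + CHANNELS_PER_CLIENT].normal_()
    loss = conv(data).pow(2).mean()  # stand-in for a training loss
    loss.backward()
    return conv.weight.grad

# Simulate secure aggregation: the server only observes the SUM of gradients...
grads = [client_gradient(i, torch.randn(1, 1, 8, 8)) for i in range(NUM_CLIENTS)]
aggregate = torch.stack(grads).sum(dim=0)

# ...yet each client's contribution sits untouched in its own filter slice,
# because every other client's gradient is exactly zero there.
for i in range(NUM_CLIENTS):
    lo = i * CHANNELS_PER_CLIENT
    assert torch.allclose(aggregate[lo:lo + CHANNELS_PER_CLIENT],
                          grads[i][lo:lo + CHANNELS_PER_CLIENT])
```

The separation also breaks the anonymity of aggregation: since the slice index is assigned by the server, whatever leaks from a given slice is directly tied to the client it was sent to.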