Secure aggregation promises a heightened level of privacy in federated learning, guaranteeing that the server only has access to the decrypted aggregate update. Within this setting, linear layer leakage methods are the only data reconstruction attacks able to scale and achieve a high leakage rate regardless of the number of clients or batch size. They do so by increasing the size of an injected fully-connected (FC) layer, but this incurs a resource overhead that grows with the number of clients. We show that this overhead stems from an incorrect perspective shared by all prior work, which treats an attack on the aggregate update the same as an attack on an individual update with a larger batch size. Instead, viewing aggregation as the combination of multiple individual updates allows sparsity to be applied to the injected layer, alleviating the resource overhead. We show that the use of sparsity can decrease the model size overhead by over 327$\times$ and the computation time by 3.34$\times$ compared to SOTA, while maintaining an equivalent total leakage rate of 77% even with $1000$ clients in aggregation.
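As a brief illustration of the linear layer leakage mechanism that these attacks build on (a standard formulation sketched here with generic notation $W$, $b$, $\mathcal{L}$, not the paper's specific construction): for an injected FC layer computing $y_i = W_i x + b_i$ followed by a ReLU, if a single sample $x$ is the only one activating neuron $i$, the weight and bias gradients share the common factor $\partial \mathcal{L} / \partial y_i$, so the input can be recovered exactly:
$$
\frac{\partial \mathcal{L}}{\partial W_i} = \frac{\partial \mathcal{L}}{\partial y_i}\, x^\top,
\qquad
\frac{\partial \mathcal{L}}{\partial b_i} = \frac{\partial \mathcal{L}}{\partial y_i}
\quad\Longrightarrow\quad
x^\top = \left(\frac{\partial \mathcal{L}}{\partial b_i}\right)^{-1} \frac{\partial \mathcal{L}}{\partial W_i}.
$$
Scaling this recovery to many samples is what drives up the size of the injected FC layer, which is the overhead the sparsity-based approach targets.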