Despite the great potential of Federated Learning (FL) in large-scale distributed learning, current FL systems remain vulnerable to privacy leakage because the local models trained by clients are exposed to the central server. Consequently, secure aggregation protocols for FL have been developed to conceal the local models from the server. However, we show that, by manipulating the client selection process, the server can circumvent secure aggregation and learn the local model of a victim client, indicating that secure aggregation alone is inadequate for privacy protection. To tackle this issue, we leverage blockchain technology to propose a verifiable client selection protocol. Owing to the immutability and transparency of the blockchain, our protocol enforces a random selection of clients, preventing the server from controlling the selection process at its discretion. We present security proofs showing that our protocol is secure against this attack. Additionally, we conduct several experiments on an Ethereum-like blockchain to demonstrate the feasibility and practicality of our solution.
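The verifiable random selection described above could, in principle, work by deriving the selected client set deterministically from a public, unpredictable value such as a block hash, so that any party can recompute and audit the choice. The following is a minimal illustrative sketch of that idea, not the paper's actual protocol; the function and variable names are hypothetical, and a fixed byte string stands in for a real on-chain block hash.

```python
import hashlib


def select_clients(block_hash: bytes, client_ids: list[str], k: int) -> list[str]:
    """Deterministically select k clients using a public block hash as the
    randomness source, so any observer can recompute and verify the selection.

    Each client is scored by hashing the block hash together with its ID;
    since the block hash is fixed on-chain before selection, the server
    cannot steer the outcome toward a chosen victim."""
    scored = sorted(
        client_ids,
        key=lambda cid: hashlib.sha256(block_hash + cid.encode()).digest(),
    )
    return scored[:k]


# Illustrative usage: the seed below stands in for a real block hash.
clients = [f"client-{i}" for i in range(10)]
seed = hashlib.sha256(b"block-12345").digest()
chosen = select_clients(seed, clients, 3)
```

Because the scores depend only on public inputs, a victim client (or any auditor) can re-run `select_clients` on the on-chain data and reject any round whose participant list deviates from the verifiable output.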