Privacy-preserving federated learning enables a population of distributed clients to jointly learn a shared model while keeping client training data private, even from an untrusted server. Prior works do not provide efficient solutions that protect against collusion attacks in which parties collaborate to expose an honest client's model parameters. We present an efficient mechanism based on oblivious distributed differential privacy that is the first to protect against such client collusion, including the "Sybil" attack in which a server preferentially selects compromised devices or simulates fake devices. We leverage the novel privacy mechanism to construct a secure federated learning protocol and prove the security of that protocol. We conclude with an empirical analysis of the protocol's execution speed, learning accuracy, and privacy performance on two data sets within a realistic simulation of 5,000 distributed network clients.
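To make the distributed-differential-privacy idea concrete, here is a minimal sketch of the standard (non-oblivious) variant: each client adds an independent share of Gaussian noise so that the aggregate carries the full noise needed for the privacy guarantee, and no party sees an unnoised individual update. This is an illustrative toy, not the paper's oblivious mechanism or collusion-resistant protocol; all names (`add_noise_share`, the parameters) are hypothetical.

```python
import random

def add_noise_share(update, total_noise_std, num_clients, rng):
    # Hypothetical helper: each client adds noise with std
    # total_noise_std / sqrt(num_clients), so the SUM over all
    # clients carries Gaussian noise with std total_noise_std.
    share_std = total_noise_std / num_clients ** 0.5
    return [u + rng.gauss(0.0, share_std) for u in update]

rng = random.Random(0)
n = 100
# Toy scenario: every client holds the same two-parameter update.
updates = [[1.0, 2.0] for _ in range(n)]
noised = [add_noise_share(u, 1.0, n, rng) for u in updates]
# The server averages the noised updates; the per-client noise shares
# largely cancel in the mean, while the privacy guarantee comes from
# the total noise present in the sum.
aggregate = [sum(col) / n for col in zip(*noised)]
```

With 100 clients the noise in the averaged aggregate is small, so `aggregate` recovers the true mean update closely; protecting this scheme when clients collude (or when a Sybil server simulates fake clients) is precisely the gap the paper's oblivious mechanism addresses.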