Establishing how a set of learners can provide privacy-preserving federated learning in a fully decentralized (peer-to-peer, no coordinator) manner is an open problem. We propose the first privacy-preserving consensus-based algorithm for distributed learners to achieve decentralized global model aggregation in an environment of high mobility, where the communication graph between the learners may vary between successive rounds of model aggregation. In particular, in each round of global model aggregation, the Metropolis-Hastings method is applied to update the weighted adjacency matrix based on the current communication topology. In addition, Shamir's secret sharing scheme is integrated to preserve privacy while reaching consensus on the global model. The paper establishes the correctness and privacy properties of the proposed algorithm. The computational efficiency is evaluated by a simulation built on a federated learning framework with a real-world dataset.
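To make the per-round weight update concrete, the following is a minimal sketch of the standard Metropolis-Hastings rule for building a doubly stochastic weight matrix from the current round's communication topology. The function name and NumPy-based interface are illustrative choices, not the paper's actual implementation; the sketch assumes an undirected 0/1 adjacency matrix as input.

```python
import numpy as np

def metropolis_hastings_weights(adj):
    """Build a doubly stochastic weight matrix from a 0/1 adjacency matrix.

    adj[i][j] = 1 iff learners i and j can communicate in the current
    round; the topology (and hence the weights) may change every round.
    """
    adj = np.asarray(adj)
    n = adj.shape[0]
    deg = adj.sum(axis=1)  # degree of each learner in this round's graph
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and adj[i, j]:
                # Standard Metropolis-Hastings edge weight.
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        # Self-weight absorbs the remainder so each row sums to 1;
        # symmetry of the rule makes columns sum to 1 as well.
        W[i, i] = 1.0 - W[i].sum()
    return W
```

Because the resulting matrix is doubly stochastic, repeated local averaging with these weights drives all learners toward the same global average of their models, which is what the consensus-based aggregation relies on.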