We propose a novel secure aggregation scheme based on a seed-homomorphic pseudorandom generator (SHPRG) to prevent private training data from leaking through model-related information in Federated Learning systems. Our construction leverages the homomorphic property of the SHPRG to simplify the masking and demasking scheme: it incurs only linear overhead while revealing nothing beyond the aggregation result, even against colluding entities. Additionally, our scheme is resilient to client dropouts without extra overhead. We experimentally demonstrate that our scheme improves efficiency by up to 20x over the baseline, especially in the realistic setting where the number of clients and the model size grow large and a certain percentage of clients drop out of the system.
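To make the masking-and-demasking idea concrete, below is a minimal, deliberately insecure toy sketch in Python. It assumes a linear generator G(s) = A·s mod q, which is exactly seed-homomorphic (G(s1) + G(s2) = G(s1 + s2) mod q), whereas a real SHPRG, such as one built from lattice assumptions, is only nearly homomorphic and needs noise-tolerant decoding. All names and parameters here (q, d, k, A, G) are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

# Toy, insecure sketch of SHPRG-based masking (assumption: a linear
# "generator" G(s) = A @ s mod q, which is exactly seed-homomorphic;
# a real SHPRG is lattice-based and only nearly homomorphic).
q = 2**16              # toy modulus (illustrative parameter)
d, k = 8, 4            # model dimension and seed dimension (toy sizes)
rng = np.random.default_rng(0)
A = rng.integers(0, q, size=(d, k))   # public matrix known to all parties

def G(seed):
    """Evaluate the toy seed-homomorphic generator."""
    return (A @ seed) % q

# Each client i masks its integer-encoded model update x_i with G(s_i).
seeds = [rng.integers(0, q, size=k) for _ in range(3)]
updates = [rng.integers(0, 100, size=d) for _ in range(3)]
masked = [(x + G(s)) % q for x, s in zip(updates, seeds)]

# The server sums the masked updates. Given only the aggregate seed
# sum(s_i) (delivered by a separate key-handling step in the real
# protocol), a single evaluation of G removes all masks at once.
agg_masked = sum(masked) % q
agg_seed = sum(seeds) % q
recovered = (agg_masked - G(agg_seed)) % q
assert np.array_equal(recovered, sum(updates) % q)
```

The homomorphism is what keeps the overhead linear: the server never touches per-client masks, it strips them all with one generator evaluation on the aggregate seed.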