Federated Learning enables multiple distributed clients holding sensitive datasets to jointly train a machine learning model. In real-world settings, this approach is hindered by expensive communication and by privacy concerns. Both challenges have previously been addressed individually, resulting in competing optimisations. In this article, we are among the first to tackle them simultaneously. More precisely, we adapt compression-based federated techniques to additive secret sharing, leading to an efficient secure aggregation protocol with an adaptable security level. We prove its privacy against malicious adversaries and its correctness in the semi-honest setting. Experiments on deep convolutional networks demonstrate that our secure protocol achieves high accuracy with low communication costs. Compared to prior works on secure aggregation, our protocol has lower communication and computation costs for a similar accuracy.
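To make the aggregation mechanism concrete, the following is a minimal sketch of additive secret sharing applied to quantized model updates. It is not the paper's protocol (which additionally involves compression and an adaptable security level); the modulus `Q` and the helper names `share` and `reconstruct` are illustrative assumptions.

```python
import random

Q = 2**16  # modulus of the additive scheme (illustrative choice)

def share(value, n_shares):
    """Split an integer into n_shares random shares summing to value mod Q."""
    shares = [random.randrange(Q) for _ in range(n_shares - 1)]
    shares.append((value - sum(shares)) % Q)
    return shares

def reconstruct(shares):
    """Recover the secret by summing the shares mod Q."""
    return sum(shares) % Q

# Each client secret-shares its quantized update across the servers; any
# single server sees only uniformly random shares, never a client's update.
updates = [120, 340, 99]                      # quantized updates from 3 clients
all_shares = [share(u, len(updates)) for u in updates]
# Server j receives one share from each client and sums them locally.
server_sums = [sum(col) % Q for col in zip(*all_shares)]
aggregate = reconstruct(server_sums)          # equals sum(updates) mod Q
assert aggregate == sum(updates) % Q
```

Because the scheme is linear, summing shares commutes with summing secrets, which is what lets the servers aggregate without ever reconstructing an individual client's update. A production sketch would draw randomness from a cryptographic source (e.g. Python's `secrets` module) rather than `random`.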