We propose Byzantine-robust federated learning protocols with nearly optimal statistical rates. In contrast to prior work, our proposed protocols improve the dimension dependence and achieve a tight statistical rate in terms of all the parameters for strongly convex losses. We benchmark against competing protocols and show the empirical superiority of the proposed protocols. Finally, we remark that our protocols with bucketing can be naturally combined with privacy-guaranteeing procedures to provide security against a semi-honest server. The code for evaluation is available at https://github.com/wanglun1996/secure-robust-federated-learning.
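To make the bucketing idea concrete, here is a minimal illustrative sketch (not the paper's actual implementation): client gradients are randomly shuffled into buckets, averaged within each bucket to dilute Byzantine updates, and the bucket means are then combined with a robust aggregator, here a coordinate-wise median chosen for simplicity.

```python
import numpy as np

def bucketing_robust_aggregate(grads, bucket_size, rng=None):
    """Sketch of bucketing before robust aggregation.

    grads: list/array of shape (n_clients, dim) gradient vectors.
    bucket_size: number of clients averaged per bucket.
    Returns the coordinate-wise median of the bucket means.
    """
    rng = np.random.default_rng(rng)
    grads = np.asarray(grads, dtype=float)
    # Randomly permute clients so Byzantine updates spread across buckets.
    perm = rng.permutation(len(grads))
    buckets = [grads[perm[i:i + bucket_size]]
               for i in range(0, len(grads), bucket_size)]
    # Average within each bucket, then robustly aggregate the bucket means.
    bucket_means = np.stack([b.mean(axis=0) for b in buckets])
    return np.median(bucket_means, axis=0)
```

With 10 honest clients reporting the gradient [1.0, 1.0] and 2 Byzantine clients reporting [100.0, 100.0], a bucket size of 2 yields 6 bucket means of which at most 2 are corrupted, so the coordinate-wise median recovers [1.0, 1.0].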