This paper investigates the robustness of over-the-air federated learning to Byzantine attacks. The simple averaging of model updates via over-the-air computation makes the learning task vulnerable to random or deliberate modifications of the local model updates by malicious clients. We propose a transmission and aggregation framework that is robust to such attacks while preserving the benefits of over-the-air computation for federated learning. In the proposed robust federated learning scheme, the participating clients are randomly divided into groups, and a transmission time slot is allocated to each group. The parameter server aggregates the results of the different groups using a robust aggregation technique and conveys the result to the clients for another training round. We also analyze the convergence of the proposed algorithm. Numerical simulations confirm the robustness of the proposed approach to Byzantine attacks.
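The grouping-and-robust-aggregation idea can be illustrated with a minimal sketch. The code below is a hypothetical simulation, not the paper's implementation: it partitions clients into random groups, averages each group's update (standing in for the over-the-air sum within a transmission slot), and combines the group averages with a coordinate-wise median as a placeholder for the paper's robust aggregation technique. All names, group sizes, and the choice of median are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def byzantine_robust_round(updates, num_groups, rng):
    """One aggregation round (illustrative sketch, not the paper's exact
    method): clients are randomly partitioned into groups, one transmission
    slot per group; over-the-air computation yields each group's average,
    and the server combines the group averages with a robust statistic
    (coordinate-wise median here)."""
    n = len(updates)
    perm = rng.permutation(n)
    groups = np.array_split(perm, num_groups)
    group_means = np.array(
        [np.mean([updates[i] for i in g], axis=0) for g in groups]
    )
    # Robust aggregation across groups: at most one bad group mean per
    # Byzantine client, so the median discards the contaminated groups
    # as long as they remain a minority.
    return np.median(group_means, axis=0)

# Toy example: 21 honest clients whose updates center on 1.0,
# plus 3 Byzantine clients sending large outliers.
honest = [np.full(3, 1.0) + 0.01 * rng.standard_normal(3) for _ in range(21)]
byzantine = [np.full(3, 100.0) for _ in range(3)]
agg = byzantine_robust_round(honest + byzantine, num_groups=8, rng=rng)
print(agg)  # close to the honest mean of ~1.0
```

With 3 Byzantine clients spread over at most 3 of the 8 groups, the median of the group means ignores the contaminated groups; a plain average over all 24 clients would instead be shifted by roughly 12 in every coordinate.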