Federated Learning (FL) enables clients to collaboratively train a global model without sharing their private data. However, the presence of malicious (Byzantine) clients poses a significant challenge to the robustness of FL, particularly when data distributions across clients are heterogeneous. In this paper, we propose a novel Byzantine-robust FL optimization problem that incorporates adaptive weighting into the aggregation process. Unlike conventional approaches, our formulation treats the aggregation weights as learnable parameters and optimizes them jointly with the global model parameters. To solve this problem, we develop an alternating minimization algorithm with strong convergence guarantees under adversarial attacks, and we analyze the Byzantine resilience of the proposed objective. We evaluate our algorithm against state-of-the-art Byzantine-robust FL approaches on various datasets and attack scenarios. Experimental results demonstrate that our method consistently outperforms existing approaches, particularly in settings with highly heterogeneous data and a large fraction of malicious clients.
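The abstract does not specify the exact objective, so the following is only a minimal sketch of the general idea of jointly learning aggregation weights and the global model by alternating minimization. It assumes a hypothetical quadratic surrogate, sum_i p_i * ||x - u_i||^2 + lam * ||p||^2 with p on the probability simplex; the function names (`project_to_simplex`, `alternating_aggregation`) and the parameters `lr`, `lam` are illustrative choices, not the paper's actual formulation.

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]                      # sort in descending order
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def alternating_aggregation(client_updates, n_iters=20, lr=0.1, lam=0.01):
    """Toy alternating minimization over aggregation weights p and model x
    for the illustrative surrogate sum_i p_i*||x - u_i||^2 + lam*||p||^2."""
    U = np.stack(client_updates)              # (m, d) stacked client updates
    m = U.shape[0]
    p = np.full(m, 1.0 / m)                   # start from uniform weights
    for _ in range(n_iters):
        x = p @ U                             # model step: weighted average (closed form)
        resid = np.sum((U - x) ** 2, axis=1)  # per-client squared distance to x
        grad_p = resid + 2.0 * lam * p        # weight step: gradient of surrogate w.r.t. p
        p = project_to_simplex(p - lr * grad_p)
    x = p @ U                                 # recompute model with final weights
    return x, p
```

In this toy surrogate, clients whose updates lie far from the current weighted average accumulate large residuals and are progressively down-weighted, while the regularization term keeps the weights from collapsing onto a single client; this mirrors, at a sketch level, the abstract's idea of treating aggregation weights as learnable parameters to limit the influence of Byzantine updates.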