Recent studies have shown that, like traditional machine learning, federated learning (FL) is also vulnerable to adversarial attacks. To improve the adversarial robustness of FL, a few federated adversarial training (FAT) methods have been proposed that apply adversarial training locally before global aggregation. Although these methods demonstrate promising results on independent and identically distributed (IID) data, they suffer from training instability on non-IID data with label skewness, resulting in severely degraded natural accuracy. This hinders the application of FAT in real-world settings, where the label distribution across clients is often skewed. In this paper, we study the problem of FAT under label skewness and first reveal one root cause of the training instability and natural accuracy degradation: skewed labels lead to non-identical class probabilities and heterogeneous local models. We then propose a Calibrated FAT (CalFAT) approach that tackles the instability by adaptively calibrating the logits to balance the classes. We show both theoretically and empirically that the optimization of CalFAT leads to homogeneous local models across the clients, a much improved convergence rate, and better final performance.
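The abstract does not specify the exact calibration rule, so the following is only a minimal sketch of one plausible instantiation: shifting each client's logits by the log of its local class priors (a logit-adjustment-style calibration) so that the loss accounts for the skewed label distribution. The function names and the `1e-12` smoothing constant are illustrative assumptions, not CalFAT's actual implementation.

```python
import numpy as np

def calibrated_logits(logits, class_counts):
    """Shift logits by the log of the client's local class priors.

    This is a logit-adjustment-style calibration (an assumption here,
    not necessarily CalFAT's exact form): classes that are rare on this
    client receive a large negative shift, which enlarges their loss
    and counteracts the local label skew.
    """
    priors = class_counts / class_counts.sum()   # empirical class priors
    return logits + np.log(priors + 1e-12)       # smoothed to avoid log(0)

def softmax_cross_entropy(logits, label):
    """Numerically stable softmax cross-entropy for a single example."""
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]
```

On a skewed client, the calibrated loss for a minority-class example exceeds the uncalibrated one, which is the intended rebalancing effect:

```python
logits = np.array([2.0, 1.0, 0.5])
counts = np.array([90.0, 9.0, 1.0])   # heavily skewed local labels
cal = calibrated_logits(logits, counts)
# loss on the rare class (index 2) grows after calibration
assert softmax_cross_entropy(cal, 2) > softmax_cross_entropy(logits, 2)
```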