Fairness and robustness are two important concerns for federated learning systems. In this work, we identify that robustness to data and model poisoning attacks and fairness, measured as the uniformity of performance across devices, are competing constraints in statistically heterogeneous networks. To address these constraints, we propose employing a simple, general framework for personalized federated learning, Ditto, and develop a scalable solver for it. Theoretically, we analyze the ability of Ditto to achieve fairness and robustness simultaneously on a class of linear problems. Empirically, across a suite of federated datasets, we show that Ditto not only achieves competitive performance relative to recent personalization methods, but also enables more accurate, robust, and fair models relative to state-of-the-art fair or robust baselines.
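To make the personalization idea concrete, the sketch below illustrates a Ditto-style local update, in which each device k trains a personalized model v_k on its local loss F_k while regularizing toward the shared global model w*, i.e. minimizing F_k(v_k) + (λ/2)·||v_k − w*||². This is an illustrative toy with hypothetical names (`ditto_local_step`, `theta_k`) and a synthetic quadratic loss, not the authors' reference implementation:

```python
def ditto_local_step(v_k, w_star, grad_fk, lam, lr):
    """One gradient step on h_k(v_k) = F_k(v_k) + (lam/2) * ||v_k - w_star||^2."""
    g = grad_fk(v_k)
    return [v - lr * (gi + lam * (v - ws)) for v, gi, ws in zip(v_k, g, w_star)]

# Toy local objective for device k: F_k(v) = 0.5 * ||v - theta_k||^2,
# where theta_k is the device-specific optimum (synthetic, for illustration).
theta_k = [1.0, -2.0, 0.5]
grad_fk = lambda v: [vi - ti for vi, ti in zip(v, theta_k)]

w_star = [0.0, 0.0, 0.0]   # stand-in for a global model obtained via FedAvg
v_k = [0.0, 0.0, 0.0]
for _ in range(200):
    v_k = ditto_local_step(v_k, w_star, grad_fk, lam=0.1, lr=0.1)

# For this quadratic loss the fixed point is theta_k / (1 + lam): the
# personalized model interpolates between the device optimum and w_star,
# with lam controlling how strongly it is pulled toward the global model.
```

Larger λ recovers the global model (more robust to a single device's poisoned or noisy data), while λ → 0 recovers purely local training; this knob is the mechanism behind the fairness/robustness trade-off discussed above.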