Fairness and robustness are two important concerns for federated learning systems. In this work, we identify that robustness to data and model poisoning attacks and fairness, measured as the uniformity of performance across devices, are competing constraints in statistically heterogeneous networks. To address these constraints, we propose employing a simple, general framework for personalized federated learning, Ditto, that can inherently provide fairness and robustness benefits, and develop a scalable solver for it. Theoretically, we analyze the ability of Ditto to achieve fairness and robustness simultaneously on a class of linear problems. Empirically, across a suite of federated datasets, we show that Ditto not only achieves competitive performance relative to recent personalization methods, but also enables more accurate, robust, and fair models relative to state-of-the-art fair or robust baselines.
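The abstract does not spell out Ditto's formulation, so the following is only an illustrative sketch of the general idea behind personalized federated learning with a per-device model regularized toward the global model; the objective form, the helper `local_personalized_update`, and parameters such as `lam` and `local_steps` are assumptions for illustration, not the paper's stated method.

```python
import numpy as np

# Illustrative sketch only: a common way to personalize in federated learning
# is to keep a per-device model v_k regularized toward the shared global model w:
#     min_{v_k}  F_k(v_k) + (lam / 2) * ||v_k - w||^2
# Below, a few proximal gradient steps of that form are run for a
# least-squares local objective F_k on one device's data.

def local_personalized_update(X, y, w_global, lam=0.1, lr=0.01, local_steps=50):
    """Update a device's personalized model v_k with a proximity term to w_global."""
    v = w_global.copy()
    for _ in range(local_steps):
        grad_local = X.T @ (X @ v - y) / len(y)   # gradient of the local least-squares loss
        grad_prox = lam * (v - w_global)          # pulls v_k back toward the global model
        v -= lr * (grad_local + grad_prox)
    return v

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))
    y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=100)
    w_global = np.zeros(5)          # stand-in for the aggregated global model
    v_k = local_personalized_update(X, y, w_global)
    print("personalized model:", v_k)
```

Intuitively, a larger `lam` keeps the personalized model close to the global model (favoring robustness to poisoned updates reflected in local data being ignored), while a smaller `lam` lets each device fit its own distribution (favoring uniform, fair performance under heterogeneity).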