We present a federated learning framework that is designed to robustly deliver good predictive performance across individual clients with heterogeneous data. The proposed approach hinges upon a superquantile-based learning objective that captures the tail statistics of the error distribution over heterogeneous clients. We present a stochastic training algorithm that interleaves differentially private client reweighting steps with federated averaging steps. The proposed algorithm is supported by finite-time convergence guarantees covering both convex and non-convex settings. Experimental results on benchmark datasets for federated learning demonstrate that our approach is competitive with classical approaches in terms of average error and outperforms them on tail statistics of the error.
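For concreteness, the superquantile underlying the objective can be written in the standard Rockafellar–Uryasev variational form; the tail-level symbol $\theta$ and the loss variable $U$ below are illustrative notation rather than the paper's own:
\[
  \mathrm{SQ}_{\theta}(U)
  \;=\;
  \min_{\eta \in \mathbb{R}}
  \left\{ \eta + \frac{1}{1-\theta}\, \mathbb{E}\big[(U - \eta)_{+}\big] \right\},
  \qquad (u)_{+} = \max(u, 0).
\]
Taking $U$ to be the loss on a randomly drawn client, minimizing $\mathrm{SQ}_{\theta}$ over model parameters targets the average error of the hardest $(1-\theta)$-fraction of clients rather than the overall mean, which is the tail behavior the abstract refers to.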