We present a federated learning framework designed to robustly deliver good predictive performance across individual clients with heterogeneous data. The proposed approach hinges upon a superquantile-based learning objective that captures the tail statistics of the error distribution over heterogeneous clients. We develop a stochastic training algorithm that interleaves differentially private client filtering with federated averaging steps. We prove finite-time convergence guarantees for the algorithm: $O(1/\sqrt{T})$ in the nonconvex case in $T$ communication rounds and $O(\exp(-T/\kappa^{3/2}) + \kappa/T)$ in the strongly convex case with local condition number $\kappa$. Experimental results on benchmark datasets for federated learning demonstrate that our approach is competitive with classical approaches in terms of average error and outperforms them on tail statistics of the error.
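For concreteness, one standard way to write such a tail-sensitive objective is via the superquantile, also known as the conditional value-at-risk; the notation below is introduced here for illustration and is not taken from the abstract. Writing $F_i(w)$ for the loss of client $i$ and $\bar{Q}_\theta$ for the superquantile at a level $\theta \in (0,1)$, the Rockafellar-Uryasev formulation gives
\[
\min_{w} \ \bar{Q}_\theta\big(F_1(w), \dots, F_n(w)\big),
\qquad
\bar{Q}_\theta(Z) \;=\; \min_{\eta \in \mathbb{R}} \Big\{ \eta + \tfrac{1}{1-\theta}\, \mathbb{E}\big[(Z - \eta)_+\big] \Big\},
\]
so that $\bar{Q}_\theta(Z)$ averages the $(1-\theta)$-upper tail of the loss distribution rather than its mean.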
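A minimal sketch of one communication round in this spirit follows. It is illustrative only: the client interface (`loss`, `local_update`) and the level `theta` are assumptions on our part, and the paper's differentially private filtering mechanism is replaced by a plain, non-private selection of the highest-loss clients.

```python
import numpy as np

def one_round(w, clients, theta, lr=0.1, local_steps=10):
    """One simplified round: keep the (1 - theta) fraction of clients
    with the largest current losses, run local updates on them, and
    average the resulting models (plain sketch, no differential privacy).

    Assumes each client exposes loss(w) -> float and
    local_update(w, lr, steps) -> updated parameter vector
    (hypothetical interface, not from the paper).
    """
    losses = np.array([c.loss(w) for c in clients])
    cutoff = np.quantile(losses, theta)           # empirical theta-quantile
    tail = [c for c, l in zip(clients, losses) if l >= cutoff]
    new_models = [c.local_update(w, lr, local_steps) for c in tail]
    return np.mean(new_models, axis=0)            # federated averaging step
```

Selecting the upper tail of clients before averaging is what ties the update to the superquantile objective above: the averaged model is driven by the clients on which it currently performs worst, rather than by the average client.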