We present a class of methods for robust, personalized federated learning, called Fed+, that unifies many federated learning algorithms. The principal advantage of this class of methods is that it better accommodates the real-world characteristics of federated training, such as the lack of IID data across parties, the need for robustness to outliers or stragglers, and the requirement to perform well on party-specific datasets. We achieve this through a problem formulation that allows the central server to employ robust ways of aggregating the local models while keeping the structure of local computation intact. Without making any statistical assumptions on the degree of heterogeneity of local data across parties, we provide convergence guarantees for Fed+ for convex and non-convex loss functions under different (robust) aggregation methods. The Fed+ theory also handles heterogeneous computing environments, including stragglers, without additional assumptions; specifically, the convergence results cover the general setting where the number of local update steps can vary across parties. We demonstrate the benefits of Fed+ through extensive experiments on standard benchmark datasets.
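To make the aggregation idea concrete, the following is a minimal sketch of one federated round in the style the abstract describes: parties run ordinary local gradient steps (with possibly different step counts, as in the straggler setting), and the server fuses their models with a robust statistic such as the coordinate-wise median rather than the plain FedAvg mean. The function names, the quadratic toy losses, and the specific choice of median are illustrative assumptions, not the paper's exact Fed+ algorithm.

```python
import numpy as np

def robust_aggregate(party_models, method="median"):
    """Fuse party model vectors with a robust statistic.

    Coordinate-wise median is one classic robust alternative to the
    weighted mean used by FedAvg; it tolerates outlier parties.
    (Illustrative choice, not necessarily the paper's aggregator.)
    """
    stacked = np.stack(party_models)       # shape: (num_parties, dim)
    if method == "median":
        return np.median(stacked, axis=0)  # robust to outliers
    return np.mean(stacked, axis=0)        # FedAvg-style mean

def local_update(w_global, grad_fn, steps, lr=0.1):
    """Plain local gradient steps starting from the aggregated model.

    `steps` may differ per party, mirroring the varying numbers of
    local update steps that the convergence theory covers.
    """
    w = w_global.copy()
    for _ in range(steps):
        w -= lr * grad_fn(w)
    return w

# One simulated training run over four parties with non-IID toy data
# and heterogeneous local step counts (hypothetical setup).
rng = np.random.default_rng(0)
targets = [rng.normal(loc=i, size=5) for i in range(4)]  # non-IID optima
grads = [lambda w, t=t: w - t for t in targets]          # quadratic losses
w_global = np.zeros(5)
for _ in range(20):
    locals_ = [local_update(w_global, g, steps=s)
               for g, s in zip(grads, [1, 3, 5, 2])]     # stragglers vary
    w_global = robust_aggregate(locals_, method="median")
print(w_global)
```

Because the server only swaps the fusion rule while each party's local computation is untouched, the same loop structure accommodates mean, median, or other robust aggregators, which is the sense in which this family unifies many existing federated algorithms.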