We present a class of methods for robust, personalized federated learning, called Fed+, that unifies many federated learning algorithms. The principal advantage of this class of methods is that it better accommodates the real-world characteristics of federated training, such as non-IID data across parties, the need for robustness to outliers and stragglers, and the requirement to perform well on party-specific datasets. We achieve this through a problem formulation that allows the central server to employ robust ways of aggregating the local models while keeping the structure of local computation intact. Without making any statistical assumptions about the degree of heterogeneity of local data across parties, we provide convergence guarantees for Fed+ for convex and non-convex loss functions under robust aggregation. The Fed+ theory also handles heterogeneous computing environments, including stragglers, without additional assumptions; specifically, the convergence results cover the general setting where the number of local update steps can vary across parties. We demonstrate the benefits of Fed+ through extensive experiments on standard benchmark datasets as well as on a challenging real-world problem in financial portfolio management, where the heterogeneity of party-level data can lead to training failure in standard federated learning approaches.
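As a hedged illustration of the robust-aggregation idea mentioned above (a sketch of one standard choice, not the paper's specific Fed+ update rule), the server can replace the FedAvg-style coordinate-wise mean of party models with a coordinate-wise median, which is far less sensitive to a single outlier party:

```python
import numpy as np

def fedavg_aggregate(models):
    # FedAvg-style aggregation: plain coordinate-wise mean of party models.
    return np.mean(np.stack(models), axis=0)

def robust_aggregate(models):
    # One robust alternative: coordinate-wise median, which tolerates
    # a minority of outlier parties at each coordinate.
    return np.median(np.stack(models), axis=0)

# Toy example (hypothetical data): three similar "honest" party models
# plus one extreme outlier party.
models = [np.array([1.0, 2.0]),
          np.array([1.1, 2.1]),
          np.array([0.9, 1.9]),
          np.array([100.0, -100.0])]

print(fedavg_aggregate(models))   # mean is dragged far off by the outlier
print(robust_aggregate(models))   # median stays near the honest parties
```

Here the mean aggregate is pulled to roughly (25.75, -23.5) by the single outlier, while the median stays close to the honest cluster; this is the kind of robustness to outlier parties that the abstract refers to.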