In this paper, we develop two new algorithms, called \textbf{FedDR} and \textbf{asyncFedDR}, for solving a fundamental nonconvex optimization problem in federated learning. Our algorithms rely on a novel combination of a nonconvex Douglas-Rachford splitting method, randomized block-coordinate strategies, and asynchronous implementation. Unlike recent methods in the literature, e.g., FedSplit and FedPD, our algorithms update only a subset of users at each communication round, and possibly in an asynchronous mode, making them more practical. These new algorithms also achieve communication efficiency and, more importantly, can handle statistical and system heterogeneity, which are the two main challenges in federated learning. Our convergence analysis shows that the new algorithms match the communication complexity lower bound up to a constant factor under standard assumptions. Our numerical experiments illustrate the advantages of the proposed methods compared to existing ones on both synthetic and real datasets.
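For context, the Douglas-Rachford splitting building block referenced above can be sketched as follows; this is a generic sketch in our own notation ($\eta$, $f$, $g$ are placeholders), not the paper's exact per-user updates. For a composite problem $\min_x \{f(x) + g(x)\}$, one DR iteration reads
\begin{equation*}
x^{k+1} = \mathrm{prox}_{\eta f}(y^k), \qquad
z^{k+1} = \mathrm{prox}_{\eta g}\bigl(2x^{k+1} - y^k\bigr), \qquad
y^{k+1} = y^k + z^{k+1} - x^{k+1},
\end{equation*}
where $\mathrm{prox}_{\eta f}(y) := \arg\min_x \bigl\{ f(x) + \tfrac{1}{2\eta}\|x - y\|^2 \bigr\}$. In a consensus reformulation of federated learning, $f$ can be taken as the separable sum $\sum_i f_i(x_i)$ of local losses and $g$ as the indicator of the consensus constraint, so the $\mathrm{prox}_{\eta f}$ step decomposes across users and can be evaluated only for a sampled subset of users, or asynchronously, which is the mechanism the abstract alludes to.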