We develop two new algorithms, called FedDR and asyncFedDR, for solving a fundamental nonconvex composite optimization problem in federated learning. Our algorithms rely on a novel combination of a nonconvex Douglas-Rachford splitting method, randomized block-coordinate strategies, and asynchronous implementation. They can also handle convex regularizers. Unlike recent methods in the literature, e.g., FedSplit and FedPD, our algorithms update only a subset of users at each communication round, possibly in an asynchronous manner, making them more practical. These new algorithms can handle statistical and system heterogeneity, the two main challenges in federated learning, while achieving the best known communication complexity. In fact, our new algorithms match the communication complexity lower bound up to a constant factor under standard assumptions. Our numerical experiments illustrate the advantages of our methods over existing algorithms on synthetic and real datasets.
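For background, the classical Douglas-Rachford (DR) splitting scheme that these methods build on targets composite problems of the form $\min_{x} f(x) + g(x)$. A minimal sketch of its iteration, under standard notation not taken from this abstract (step size $\eta > 0$ and $\operatorname{prox}_{\eta f}(y) := \arg\min_{x}\{ f(x) + \tfrac{1}{2\eta}\|x - y\|^2 \}$), is:
\[
\begin{aligned}
x^{k+1} &:= \operatorname{prox}_{\eta f}\big(y^{k}\big), \\
z^{k+1} &:= \operatorname{prox}_{\eta g}\big(2x^{k+1} - y^{k}\big), \\
y^{k+1} &:= y^{k} + z^{k+1} - x^{k+1}.
\end{aligned}
\]
FedDR and asyncFedDR do not run this full iteration verbatim; as stated above, they combine a nonconvex variant of it with randomized block-coordinate updates over users (and, for asyncFedDR, asynchronous execution), so that only a sampled subset of users performs the proximal step at each communication round. The exact update rules are given in the paper body, not in this abstract.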