Personalized federated learning (FL) facilitates collaboration among multiple clients to learn personalized models without sharing private data. This mechanism mitigates the statistical heterogeneity commonly encountered in such systems, i.e., non-IID data across clients. Existing personalized algorithms generally assume that all clients volunteer for personalization. However, potential participants may still be reluctant to personalize models, since personalized models may not perform well; in that case, clients prefer to use the global model instead. To avoid this unrealistic assumption, we introduce the personalization rate, defined as the fraction of clients willing to train personalized models, into federated settings and propose DyPFL. This dynamically personalized FL technique incentivizes clients to personalize their local models while allowing them to adopt the global model when it performs better. We show that the algorithmic pipeline of DyPFL guarantees good convergence performance, allowing it to outperform alternative personalized methods under a broad range of conditions, including variations in heterogeneity, number of clients, local epochs, and batch sizes.
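To make the dynamic selection described above concrete, the following is a minimal sketch of the per-client decision rule implied by the abstract: only a personalization_rate fraction of clients opt in to personalization, and even an opted-in client falls back to the global model when it performs better on local data. All names (Client, select_model, the accuracy fields) and the accuracy-based scoring are illustrative assumptions, not DyPFL's actual algorithm or API.

```python
import random
from dataclasses import dataclass

@dataclass
class Client:
    global_acc: float        # local validation accuracy of the global model
    personalized_acc: float  # accuracy after hypothetical local fine-tuning

def select_model(client: Client, personalization_rate: float,
                 rng: random.Random) -> str:
    """Return which model ('global' or 'personalized') this client deploys."""
    # Only a fraction of clients (the personalization rate) opt in at all.
    if rng.random() >= personalization_rate:
        return "global"
    # Opted-in clients still adopt the global model if it wins locally.
    if client.personalized_acc >= client.global_acc:
        return "personalized"
    return "global"

# Toy usage: eight clients with random local accuracies for both models.
rng = random.Random(0)
clients = [Client(global_acc=rng.uniform(0.5, 0.9),
                  personalized_acc=rng.uniform(0.5, 0.9)) for _ in range(8)]
print([select_model(c, personalization_rate=0.75, rng=rng) for c in clients])
```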