In collaborative or federated learning, model personalization can be a very effective strategy for dealing with heterogeneous training data across clients. We introduce WAFFLE (Weighted Averaging For Federated LEarning), a personalized collaborative machine learning algorithm based on SCAFFOLD. SCAFFOLD uses stochastic control variates to converge towards a model close to the globally optimal model, even in tasks where the distribution of data and labels across clients is highly skewed. In contrast, WAFFLE uses the Euclidean distance between clients' updates to weigh their individual contributions, and thus minimizes the loss of the trained personalized model for the specific agent of interest. Through a series of experiments, we compare our proposed new method to two recent personalized federated learning methods, Weight Erosion and APFL, as well as two global learning methods, federated averaging and SCAFFOLD. We evaluate our method using two categories of non-identical client data distributions (concept shift and label skew) on two benchmark image data sets, MNIST and CIFAR10. Our experiments demonstrate the effectiveness of WAFFLE compared with the other methods, as it matches or improves accuracy with faster convergence.
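To make the core idea concrete, the following is a minimal sketch of distance-weighted aggregation: each client's update is weighted inversely to its Euclidean distance from the target client's update, so that clients with similar data contribute more to the personalized model. The function name, the specific weighting formula `1 / (1 + d)`, and the example updates are all illustrative assumptions, not the exact scheme defined in the paper.

```python
import numpy as np

def waffle_style_aggregate(updates, target_idx):
    """Illustrative distance-weighted aggregation (NOT the exact WAFFLE rule).

    Each client's flattened update vector is weighted inversely to its
    Euclidean distance from the target client's update, then the weighted
    updates are averaged into a personalized update for that client.
    """
    target = updates[target_idx]
    # Euclidean distance of every client's update to the target client's update
    dists = np.array([np.linalg.norm(u - target) for u in updates])
    # Hypothetical weighting: closer clients count more (1/(1+d) avoids div-by-zero)
    weights = 1.0 / (1.0 + dists)
    weights /= weights.sum()  # normalize to a convex combination
    return sum(w * u for w, u in zip(weights, updates))

# Example: three clients' flattened model updates; client 0 is the agent of interest
updates = [np.array([1.0, 0.0]), np.array([0.9, 0.1]), np.array([-1.0, 2.0])]
personalized = waffle_style_aggregate(updates, target_idx=0)
```

Under this toy weighting, the personalized update lands closer to client 0's own update than a plain unweighted average would, while still incorporating information from the similar client 1.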