Classical federated learning approaches suffer significant performance degradation in the presence of non-IID data distributions across participants. When the distribution of each local dataset differs substantially from the global one, the local objective of each client becomes inconsistent with the global optimum, incurring a drift in the local updates. This phenomenon severely degrades client performance, even though the primary incentive for clients to participate in federated learning is to obtain better personalized models. To address this issue, we present a new algorithm, FLIS, which groups the client population into clusters with jointly trainable data distributions by leveraging the inference similarity of clients' models. This framework captures settings where different groups of users have their own objectives (learning tasks), but clients with the same learning task can perform more efficient and personalized federated learning by aggregating their data with others in the same cluster. We present experimental results demonstrating the benefits of FLIS over state-of-the-art benchmarks on the CIFAR-100/10, SVHN, and FMNIST datasets. Our code is available at https://github.com/MMorafah/FLIS.
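To make the clustering step concrete, below is a minimal sketch of grouping clients by the inference similarity of their models on a server-held dataset. The names (`server_data`, `threshold`), the cosine-similarity metric, and the greedy grouping are illustrative assumptions for exposition, not the exact FLIS procedure; see the repository above for the actual implementation.

```python
# Hedged sketch: cluster clients whose models make similar predictions
# on a server-held batch. Metric and threshold are assumptions.
import numpy as np
import torch

@torch.no_grad()
def inference_vector(model, server_data):
    """Flattened softmax predictions of one client's model on server data."""
    model.eval()
    return torch.softmax(model(server_data), dim=1).flatten().cpu().numpy()

def cluster_by_inference_similarity(models, server_data, threshold=0.9):
    """Greedily group clients with pairwise cosine similarity >= threshold."""
    preds = np.stack([inference_vector(m, server_data) for m in models])
    preds = preds / np.linalg.norm(preds, axis=1, keepdims=True)
    sim = preds @ preds.T  # pairwise cosine similarity matrix
    clusters, assigned = [], set()
    for i in range(len(models)):
        if i in assigned:
            continue
        cluster = [j for j in range(len(models))
                   if j not in assigned and sim[i, j] >= threshold]
        assigned.update(cluster)
        clusters.append(cluster)
    return clusters  # each cluster is then aggregated separately (e.g., FedAvg)
```

Clients in the same cluster share a learning task, so aggregating their updates within the cluster avoids the update drift that global averaging would introduce.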