Federated learning aims to collaboratively train models without accessing clients' local private data. The data across clients may be non-IID, which often results in poor performance. Recently, personalized federated learning (PFL) has achieved great success in handling non-IID data by enforcing regularization during local optimization or by improving the model aggregation scheme on the server. However, most PFL approaches do not account for the unfair competition among classes caused by imbalanced data distributions and the lack of positive samples for some classes in each client. To address this issue, we propose a novel and generic PFL framework, Federated Averaging via Binary Classification, dubbed FedABC. In particular, we adopt the ``one-vs-all'' training strategy in each client to alleviate unfair competition between classes by constructing a personalized binary classification problem for each class. Since this may aggravate the class imbalance challenge, we design a novel personalized binary classification loss that incorporates both under-sampling and hard sample mining strategies. Extensive experiments conducted on two popular datasets under different settings demonstrate that FedABC significantly outperforms existing counterparts.
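To make the ``one-vs-all'' idea concrete, the following is a minimal sketch of a per-class binary loss combining negative under-sampling with hard sample mining. This is an illustration only, not the paper's actual FedABC loss: the function name `ova_binary_loss`, its signature, and the `neg_ratio` parameter are all hypothetical, and we assume the model emits a sigmoid probability per class.

```python
import numpy as np

def ova_binary_loss(scores, labels, cls, neg_ratio=1.0):
    """Illustrative one-vs-all binary cross-entropy for class `cls`,
    with negative under-sampling and hard-negative mining.

    scores: (N,) sigmoid probabilities that each sample belongs to `cls`
    labels: (N,) integer class labels
    neg_ratio: number of retained negatives per positive (hypothetical knob)
    """
    pos = scores[labels == cls]          # positives of class `cls`
    neg = scores[labels != cls]          # all other samples are negatives
    # Under-sample negatives, keeping only the hardest ones:
    # the negatives the model scores highest for `cls`.
    k = max(1, int(neg_ratio * max(len(pos), 1)))
    k = min(k, len(neg))
    hard_neg = np.sort(neg)[-k:]
    eps = 1e-12                          # numerical guard for log(0)
    loss_pos = -np.log(pos + eps).sum() if len(pos) else 0.0
    loss_neg = -np.log(1.0 - hard_neg + eps).sum()
    return (loss_pos + loss_neg) / (len(pos) + k)
```

Summing this loss over all classes yields one binary problem per class, so a majority class can no longer suppress minority classes through a shared softmax, while the mining step keeps each binary problem balanced.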