Federated learning allows multiple clients to collaboratively train high-performance deep learning models while keeping the training data local. However, when the local data of the clients are not independent and identically distributed (i.e., non-IID), achieving this form of efficient collaborative learning is challenging. Although significant efforts have been devoted to addressing this challenge, performance on image classification tasks remains unsatisfactory. In this paper, we propose FedProc, a simple and effective prototypical contrastive federated learning framework. The key idea is to use class prototypes as global knowledge to correct the local training of each client. We design a local network architecture and a global prototypical contrastive loss that regulate the training of local models, keeping the local objectives consistent with the global optimum. As a result, the converged global model achieves good performance on non-IID data. Experimental results show that, compared to state-of-the-art federated learning methods, FedProc improves accuracy by $1.6\%\sim7.9\%$ with acceptable computation cost.
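To make the key idea concrete, below is a minimal sketch of a prototypical contrastive loss in PyTorch. This is not the paper's exact formulation: the function name, the temperature parameter, and the InfoNCE-style form are assumptions. The intent it illustrates is that each local embedding is pulled toward the global prototype of its own class and pushed away from the prototypes of all other classes, which is how the global prototypes regularize local training.

```python
import torch
import torch.nn.functional as F

def prototypical_contrastive_loss(features, labels, prototypes, temperature=0.5):
    """Hypothetical sketch: pull each sample's embedding toward its class
    prototype and push it away from other classes' prototypes (InfoNCE-style).

    features:   (B, D) embeddings produced by the local model
    labels:     (B,)   class indices of the samples
    prototypes: (C, D) global class prototypes broadcast by the server
    """
    features = F.normalize(features, dim=1)
    prototypes = F.normalize(prototypes, dim=1)
    # Cosine similarity between each sample and every class prototype.
    logits = features @ prototypes.t() / temperature  # (B, C)
    # Cross-entropy treats the true class's prototype as the positive.
    return F.cross_entropy(logits, labels)

# Example usage: 8 samples, 128-dim embeddings, 10 classes.
feats = torch.randn(8, 128)
labels = torch.randint(0, 10, (8,))
protos = torch.randn(10, 128)
loss = prototypical_contrastive_loss(feats, labels, protos)
```

In a federated round, each client would add a term of this form to its local objective, so that locally learned representations stay aligned with the server-aggregated prototypes rather than drifting toward the client's skewed data distribution.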