Federated Learning (FL) is a paradigm that aims to support loosely connected clients in learning a global model collaboratively with the help of a centralized server. The most popular FL algorithm is Federated Averaging (FedAvg), which takes a weighted average of the client models, with the weights determined largely by the clients' dataset sizes. In this paper, we propose a new approach, termed Federated Node Selection (FedNS), for the server's global model aggregation in the FL setting. FedNS filters and re-weights the clients' models at the node/kernel level, thereby producing a potentially better global model by fusing the best components of the clients. Using collaborative image classification as an example, we show through experiments on multiple datasets and networks that FedNS consistently achieves improved performance over FedAvg.
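To make the FedAvg baseline concrete, the aggregation step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation; the dict-of-parameters model representation and the function name `fedavg_aggregate` are assumptions for clarity.

```python
# Minimal sketch of FedAvg-style aggregation (hypothetical structure,
# not the paper's code). Each client model is a dict mapping a parameter
# name to a flat list of floats; aggregation weights are proportional
# to the clients' dataset sizes, as in FedAvg.

def fedavg_aggregate(client_models, dataset_sizes):
    """Weighted average of client parameters, weight_k = n_k / sum(n)."""
    total = sum(dataset_sizes)
    weights = [n / total for n in dataset_sizes]
    global_model = {}
    for name in client_models[0]:
        global_model[name] = [
            sum(w * m[name][i] for w, m in zip(weights, client_models))
            for i in range(len(client_models[0][name]))
        ]
    return global_model

# Two toy clients; client 2 holds 3x the data, so its parameters dominate.
clients = [{"layer1": [1.0, 2.0]}, {"layer1": [3.0, 4.0]}]
sizes = [1, 3]
print(fedavg_aggregate(clients, sizes))  # {'layer1': [2.5, 3.5]}
```

FedNS, by contrast, operates below this granularity: rather than one scalar weight per client, it filters and re-weights contributions at the level of individual nodes/kernels.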