The uneven distribution of local data across different edge devices (clients) results in slow model training and reduced accuracy in federated learning. The naive federated learning (FL) strategy and most alternative solutions attempt to achieve greater fairness by aggregating deep learning models across clients with weighted averaging. This work introduces a novel non-IID type encountered in real-world datasets, namely cluster-skew, in which groups of clients have local data with similar distributions, causing the global model to converge to an overfitted solution. To deal with non-IID data, particularly cluster-skewed data, we propose FedDRL, a novel FL model that employs deep reinforcement learning to adaptively determine each client's impact factor (which is then used as that client's weight in the aggregation process). Extensive experiments on a suite of federated datasets confirm that the proposed FedDRL compares favorably against the FedAvg and FedProx methods, e.g., improving accuracy by up to 4.05% and 2.17% on average on the CIFAR-100 dataset, respectively.
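To make the aggregation step concrete, the following is a minimal sketch of impact-factor-weighted model averaging, the FedAvg-style step that FedDRL modifies. The deep reinforcement learning policy that produces the impact factors is not specified in the abstract and is not shown here; the function name `aggregate` and the example impact-factor values are hypothetical illustrations only.

```python
import numpy as np

def aggregate(client_models, impact_factors):
    """Weighted-average client model parameters into a global model.

    client_models: list of dicts mapping layer name -> np.ndarray
    impact_factors: per-client weights, assumed to come from the RL agent
                    (hypothetical here); normalized below so the result is
                    a convex combination of client models.
    """
    w = np.asarray(impact_factors, dtype=float)
    w = w / w.sum()  # normalize impact factors to sum to 1
    global_model = {}
    for name in client_models[0]:
        # Weighted sum of each parameter tensor across clients
        global_model[name] = sum(wi * m[name] for wi, m in zip(w, client_models))
    return global_model

# Example: three clients, one layer. A skewed set of impact factors can
# down-weight an over-represented cluster of clients (values illustrative).
clients = [{"fc.weight": np.full((2, 2), v)} for v in (1.0, 2.0, 3.0)]
print(aggregate(clients, impact_factors=[0.2, 0.2, 0.6])["fc.weight"])
```

Under uniform impact factors this reduces to plain FedAvg; the point of FedDRL is that the RL agent adapts these weights per client rather than fixing them in advance.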