Federated learning (FL) collaboratively trains a shared global model across multiple local clients while keeping the training data decentralized to preserve data privacy. However, standard FL methods ignore the problem of noisy clients, which can harm the overall performance of the shared model. We first investigate the critical issues caused by noisy clients in FL and quantify their negative impact in terms of the representations learned by different layers. We make two key observations: (1) noisy clients can severely impair the convergence and performance of the global model in FL, and (2) noisy clients induce greater bias in the deeper layers of the global model than in the shallower layers. Based on these observations, we propose Fed-NCL, a framework for robust federated learning with noisy clients. Specifically, Fed-NCL first identifies the noisy clients by estimating data quality and model divergence. A robust layer-wise aggregation scheme then adaptively aggregates each client's local model to cope with the data heterogeneity caused by noisy clients. We further perform label correction on the noisy clients to improve the generalization of the global model. Experimental results on various datasets demonstrate that our algorithm boosts the performance of different state-of-the-art systems with noisy clients. Our code is available at https://github.com/TKH666/Fed-NCL
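The abstract does not specify the aggregation rule, but the layer-wise idea, down-weighting layers whose parameters diverge strongly from the current global model since noise biases deeper layers the most, can be illustrated with a minimal sketch. The function `layerwise_aggregate`, the L2 divergence measure, and the softmax weighting below are illustrative assumptions for this sketch, not the paper's actual method.

```python
import numpy as np

def layerwise_aggregate(global_model, client_models, temperature=1.0):
    """Aggregate client models layer by layer (illustrative sketch).

    Layers that diverge strongly from the current global model receive
    smaller aggregation weights, reflecting the observation that noisy
    clients bias deeper layers the most.

    global_model: dict mapping layer name -> np.ndarray of parameters.
    client_models: list of dicts with the same structure.
    """
    aggregated = {}
    for layer, global_params in global_model.items():
        updates = np.stack([cm[layer] for cm in client_models])
        # Per-client divergence of this layer from the global parameters
        # (L2 norm is an assumed choice, not taken from the paper).
        div = np.array([np.linalg.norm(u - global_params) for u in updates])
        # Softmax over negative divergence: more divergent -> lower weight.
        weights = np.exp(-div / temperature)
        weights /= weights.sum()
        # Weighted average of the clients' parameters for this layer.
        aggregated[layer] = np.tensordot(weights, updates, axes=1)
    return aggregated

# Toy usage: three clients, the third acting as a "noisy" outlier.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    glob = {"conv1": rng.normal(size=(3, 3)), "fc": rng.normal(size=(4,))}
    clients = [
        {k: v + 0.01 * rng.normal(size=v.shape) for k, v in glob.items()},
        {k: v + 0.01 * rng.normal(size=v.shape) for k, v in glob.items()},
        {k: v + 1.0 * rng.normal(size=v.shape) for k, v in glob.items()},  # noisy
    ]
    new_global = layerwise_aggregate(glob, clients)
    print({k: v.shape for k, v in new_global.items()})
```

Because the weights are computed independently per layer, a client whose shallow layers agree with the global model can still contribute there while its divergent deeper layers are suppressed, which is the point of aggregating layer-wise rather than model-wise.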