Robustness is becoming another important challenge in federated learning because the data collection process at each client is naturally accompanied by noisy labels. The problem is far more complex and challenging than in centralized settings owing to varying levels of data heterogeneity and label noise across clients, which exacerbates the client-to-client performance discrepancy. In this work, we propose a robust federated learning method called FedRN, which exploits k reliable neighbors with high data expertise or similarity. Our method helps mitigate the gap between low- and high-performance clients by training only on a selected set of clean examples, identified by ensembled mixture models. We demonstrate the superiority of FedRN via extensive evaluations on three real-world and synthetic benchmark datasets. Compared with existing robust training methods, the results show that FedRN significantly improves test accuracy in the presence of noisy labels.
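The clean-example selection described above can be illustrated with a simplified, single-model sketch: fit a two-component Gaussian mixture to per-example training losses and keep examples whose posterior probability of belonging to the low-loss component is high. This is only an assumption-laden illustration of the general mixture-model technique — FedRN's actual method ensembles mixture models across reliable neighbors, and the function names (`fit_gmm_1d`, `select_clean`) and thresholds here are hypothetical.

```python
import math
import random

def fit_gmm_1d(xs, iters=50):
    """Fit a two-component 1-D Gaussian mixture to losses via EM."""
    mu = [min(xs), max(xs)]          # initialize means at the extremes
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component per point
        resp = []
        for x in xs:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: update mixture weights, means, and variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, xs)) / nk, 1e-6)
    return mu, var, pi

def select_clean(losses, threshold=0.5):
    """Return indices of examples likely clean (low-loss mixture component)."""
    mu, var, pi = fit_gmm_1d(losses)
    low = 0 if mu[0] < mu[1] else 1  # component with the smaller mean = clean
    clean = []
    for i, x in enumerate(losses):
        p = [pi[k] / math.sqrt(2 * math.pi * var[k])
             * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
        if p[low] / (p[0] + p[1]) > threshold:
            clean.append(i)
    return clean

# Synthetic demo: the first 50 losses mimic clean examples, the rest noisy.
random.seed(0)
losses = ([random.gauss(0.2, 0.05) for _ in range(50)]
          + [random.gauss(2.0, 0.30) for _ in range(50)])
clean_idx = select_clean(losses)
```

On this well-separated synthetic data, the selected indices fall in the low-loss group; in practice the two loss clusters overlap more, which is why FedRN strengthens the separation by ensembling mixture models from reliable neighbor clients.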