In federated learning (FL), model aggregation has been widely adopted to preserve data privacy. In recent years, assigning different weights to local models has been used to alleviate the performance degradation caused by differences between local datasets. However, when various defects make the FL process unreliable, most existing FL approaches exhibit weak robustness. In this paper, we propose the DEfect-AwaRe federated soft actor-critic (DearFSAC), which dynamically assigns weights to local models to improve the robustness of FL. The soft actor-critic (SAC) deep reinforcement learning algorithm is adopted for its near-optimal performance and stable convergence. In addition, an auto-encoder is trained to output low-dimensional embedding vectors, which are then used to evaluate model quality. In the experiments, DearFSAC outperforms three existing approaches on four datasets under defective scenarios, in both independent and identically distributed (IID) and non-IID settings.
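To make the idea of defect-aware weighted aggregation concrete, the following is a minimal sketch, not the actual DearFSAC implementation: names such as `local_models` and `quality_weights` are hypothetical, and in the paper the weights would come from a SAC agent fed with auto-encoder embeddings of the local models rather than being hard-coded.

```python
# Minimal sketch of defect-aware weighted aggregation (illustrative only).
# `quality_weights` stands in for the per-model weights that DearFSAC's
# DRL agent would produce; here they are fixed by hand for clarity.
import numpy as np

def aggregate(local_models, weights):
    """Weighted average of flattened local model parameter vectors."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()          # normalize to a convex combination
    stacked = np.stack(local_models, axis=0)   # shape: (num_clients, num_params)
    return (weights[:, None] * stacked).sum(axis=0)

# Example: three clients, one of which sends a "defective" (noisy) update
# and therefore receives a low weight from the quality-evaluation module.
rng = np.random.default_rng(0)
clean = [rng.normal(0.0, 0.1, size=10) for _ in range(2)]
defective = rng.normal(5.0, 2.0, size=10)      # e.g. a corrupted or poisoned update
local_models = clean + [defective]
quality_weights = [0.45, 0.45, 0.10]           # hypothetical output of the DRL agent
global_update = aggregate(local_models, quality_weights)
```

Down-weighting the defective update keeps the aggregated parameters close to the clean clients' updates, which is the intuition behind assigning weights based on evaluated model quality.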