Federated learning (FL) is a popular technique for training a global model on data distributed across client devices. Like other distributed training techniques, FL is susceptible to straggler (slower or failed) clients. Recent work has proposed to address this through device-to-device (D2D) offloading, which introduces privacy concerns. In this paper, we propose a novel straggler-optimal approach for coded matrix computations which can significantly reduce the communication delay and mitigate the privacy issues introduced by D2D data transmissions in FL. Moreover, our proposed approach leads to a considerable improvement in local computation speed when the generated data matrix is sparse. Numerical evaluations confirm the superiority of our proposed method over baseline approaches.