Distributed Mean Estimation (DME) is a central building block in federated learning, where clients send local gradients to a parameter server for averaging and updating the model. Due to communication constraints, clients often apply lossy compression to the gradients, which introduces estimation error. DME becomes more challenging when clients have diverse network conditions, such as constrained communication budgets and packet losses. In such settings, existing DME techniques often incur a significant increase in the estimation error, leading to degraded learning performance. In this work, we propose a robust DME technique named EDEN that naturally handles heterogeneous communication budgets and packet losses. We derive appealing theoretical guarantees for EDEN and evaluate it empirically. Our results demonstrate that EDEN consistently improves over state-of-the-art DME techniques.
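To make the DME setting concrete, the sketch below simulates one round: each client compresses its gradient before sending, the server averages the decoded vectors, and the estimation error is measured as normalized MSE. It uses plain unbiased stochastic quantization as a stand-in compressor; this is not EDEN's algorithm (which relies on random rotations and adaptive quantization), and all names and parameters here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_quantize(v, levels=2):
    """Unbiased stochastic quantization onto `levels` evenly spaced points.

    Stand-in compressor for illustration only; EDEN's actual scheme differs.
    """
    lo, hi = v.min(), v.max()
    if hi == lo:
        return v.copy()
    scale = (hi - lo) / (levels - 1)
    t = (v - lo) / scale                        # position in level units
    low = np.floor(t)
    q = low + (rng.random(v.shape) < t - low)   # round up w.p. fractional part
    return lo + q * scale                       # unbiased: E[result] = v

# One DME round: n clients each hold a d-dimensional gradient; each sends a
# compressed vector, and the server averages the decoded vectors.
n, d = 32, 1024
grads = rng.normal(size=(n, d))
true_mean = grads.mean(axis=0)
est_mean = np.mean([stochastic_quantize(g) for g in grads], axis=0)

# Normalized MSE: the estimation error the abstract refers to.
nmse = np.sum((est_mean - true_mean) ** 2) / np.sum(true_mean ** 2)
print(f"NMSE at ~1 bit per coordinate: {nmse:.4f}")
```

Heterogeneous budgets and packet losses would correspond to clients using different `levels` values or some client messages being dropped before averaging, which is precisely where naive compressors degrade and EDEN is designed to stay robust.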