Federated learning has attracted attention in recent years for collaboratively training models on distributed devices while preserving privacy. The limited network capacity of mobile and IoT devices is seen as one of the major challenges for cross-device federated learning. Recent solutions have focused on threshold-based client selection schemes to guarantee communication efficiency. However, we find that this approach can cause biased client selection and result in deteriorated performance. Moreover, we find that the challenge of limited network capacity may be overstated in some cases, and that packet loss is not always harmful. In this paper, we explore loss-tolerant federated learning (LT-FL) in terms of aggregation, fairness, and personalization. We use ThrowRightAway (TRA) to accelerate data uploading for low-bandwidth devices by intentionally ignoring some packet losses. The results suggest that, with proper integration, TRA and other algorithms can together guarantee personalization and fairness performance in the face of packet loss below a certain fraction (10%-30%).
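The abstract does not spell out how the server aggregates updates that arrive with pieces missing, so the following is only a minimal sketch of the loss-tolerant idea, not the paper's actual TRA protocol. It assumes that each lost packet corresponds to a zero-filled slice of a client's update vector and that the server averages each coordinate only over the clients whose packet carrying that coordinate arrived; the helper names `simulate_lossy_upload` and `loss_tolerant_aggregate` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_lossy_upload(update, loss_fraction, rng):
    # Hypothetical stand-in for TRA: a random subset of "packets"
    # (modeled here as individual coordinates) is dropped and never
    # retransmitted; lost entries are simply zero-filled.
    mask = rng.random(update.shape) >= loss_fraction
    return update * mask, mask

def loss_tolerant_aggregate(updates_and_masks):
    # Assumed aggregation rule: average each coordinate over only
    # the clients that actually delivered it, so zero-filled losses
    # do not bias the mean toward zero.
    received_sum = np.zeros_like(updates_and_masks[0][0])
    received_cnt = np.zeros_like(updates_and_masks[0][0])
    for update, mask in updates_and_masks:
        received_sum += update
        received_cnt += mask
    return received_sum / np.maximum(received_cnt, 1)

# Toy demo: 5 clients, each losing 10% of its upload (the low end of
# the 10%-30% tolerance range reported in the abstract).
true_updates = [rng.normal(size=1000) for _ in range(5)]
lossy = [simulate_lossy_upload(u, 0.10, rng) for u in true_updates]
estimate = loss_tolerant_aggregate(lossy)
print(np.linalg.norm(estimate - np.mean(true_updates, axis=0)))
```

With enough clients, every coordinate is delivered by at least one of them with high probability, so the loss-aware average stays close to the lossless mean; this illustrates why moderate packet loss need not be harmful, though the real TRA operates at the transport layer rather than on per-coordinate masks.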