Many compression techniques have been proposed to reduce the communication overhead of Federated Learning training procedures. However, these are typically designed for compressing model updates, which are expected to decay throughout training. As a result, such methods are inapplicable to downlink (i.e., from the parameter server to clients) compression in the cross-device setting, where heterogeneous clients $\textit{may appear only once}$ during training and thus must download the model parameters. In this paper, we propose a new framework ($\texttt{DoCoFL}$) for downlink compression in the cross-device federated learning setting. Importantly, $\texttt{DoCoFL}$ can be seamlessly combined with many uplink compression schemes, rendering it suitable for bi-directional compression. Through extensive evaluation, we demonstrate that $\texttt{DoCoFL}$ offers significant bi-directional bandwidth reduction while achieving accuracy competitive with that of $\texttt{FedAvg}$ without compression.
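To make the downlink/uplink distinction concrete, the following is a minimal, hypothetical sketch of one bi-directional-compression round: the server must compress the \textit{full parameters} for downlink (a cross-device client may be seen only once and cannot rely on decaying updates), while each client compresses only its update for uplink. The uniform quantizer, \texttt{local\_update} placeholder, and overall loop are illustrative assumptions and do \textit{not} depict $\texttt{DoCoFL}$'s actual compression scheme.

\begin{verbatim}
import numpy as np

# Hypothetical compressor: uniform quantization to int8 codes plus a scale.
def compress(x: np.ndarray, levels: int = 256):
    scale = np.abs(x).max() / (levels // 2 - 1) + 1e-12
    codes = np.round(x / scale).astype(np.int8)
    return codes, scale

def decompress(codes: np.ndarray, scale: float) -> np.ndarray:
    return codes.astype(np.float32) * scale

def local_update(params: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Placeholder for a client's local training; returns a model delta."""
    return -0.01 * rng.standard_normal(params.shape).astype(np.float32)

def federated_round(params, client_rngs):
    # Downlink: compress the full current parameters, not an update.
    down_codes, down_scale = compress(params)
    deltas = []
    for rng in client_rngs:
        client_params = decompress(down_codes, down_scale)
        delta = local_update(client_params, rng)
        # Uplink: each client compresses its (typically small) update.
        up_codes, up_scale = compress(delta)
        deltas.append(decompress(up_codes, up_scale))
    # Server aggregates the decompressed updates (FedAvg-style averaging).
    return params + np.mean(deltas, axis=0)

params = np.zeros(1000, dtype=np.float32)
rngs = [np.random.default_rng(i) for i in range(8)]
params = federated_round(params, rngs)
\end{verbatim}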