Reducing communication overhead in federated learning (FL) is challenging but crucial for large-scale distributed privacy-preserving machine learning. While methods such as sparsification can greatly lower the communication overhead, the convergence rate is often severely compromised. In this paper, we propose a novel method, named single-step synthetic features compressor (3SFC), that achieves communication-efficient FL by directly constructing a tiny synthetic dataset from the raw gradients. As a result, 3SFC attains an extremely low compression rate when the constructed dataset contains only one data sample. Moreover, the compressing phase of 3SFC uses a similarity-based objective function that can be optimized in a single step, considerably improving its performance and robustness. In addition, error feedback (EF) is incorporated into 3SFC to minimize the compressing error. Experiments on multiple datasets and models show that 3SFC achieves significantly better convergence rates than competing methods at much lower compression rates (as low as 0.02%). Furthermore, ablation studies and visualizations show that 3SFC carries more information per communication round than competing methods, further validating its effectiveness.
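To make the described pipeline concrete, the following is a minimal sketch of the core idea, assuming a PyTorch setting: a client distills its raw gradient into one synthetic (feature, soft-label) pair by taking a single optimization step on a cosine-similarity objective, and keeps the compression residual for error feedback. All function and variable names (`compress_3sfc`, `flat_grad`, `scale`, etc.) are hypothetical illustrations, not the authors' implementation.

```python
# Illustrative sketch only (not the authors' code). A client compresses its
# raw flattened gradient g_raw into one synthetic sample (x_syn, y_syn) plus
# a scalar, so that back-propagating on the synthetic sample reproduces a
# gradient similar in direction to g_raw.
import torch
import torch.nn.functional as F

def flat_grad(model, loss, create_graph=False):
    """Flatten d(loss)/d(params) into one vector."""
    grads = torch.autograd.grad(loss, list(model.parameters()),
                                create_graph=create_graph)
    return torch.cat([g.reshape(-1) for g in grads])

def synthetic_loss(model, x_syn, y_syn):
    """Soft-label cross-entropy on the synthetic sample; differentiable
    with respect to both x_syn and y_syn."""
    log_probs = F.log_softmax(model(x_syn), dim=1)
    return -(y_syn.softmax(dim=1) * log_probs).sum()

def compress_3sfc(model, g_raw, x_shape, num_classes, lr=0.1):
    """Single-step synthesis: one gradient step on a similarity objective.
    g_raw is assumed to be a detached 1-D tensor of concatenated gradients."""
    x_syn = torch.randn(1, *x_shape, requires_grad=True)
    y_syn = torch.randn(1, num_classes, requires_grad=True)
    g_syn = flat_grad(model, synthetic_loss(model, x_syn, y_syn),
                      create_graph=True)
    # Similarity-based objective: maximize cosine similarity with g_raw.
    obj = 1.0 - F.cosine_similarity(g_syn, g_raw, dim=0)
    gx, gy = torch.autograd.grad(obj, (x_syn, y_syn))
    x_syn = (x_syn - lr * gx).detach()   # the single optimization step
    y_syn = (y_syn - lr * gy).detach()
    # Fit a scalar magnitude, since cosine similarity ignores the norm.
    g_syn = flat_grad(model, synthetic_loss(model, x_syn, y_syn)).detach()
    scale = (g_raw @ g_syn) / (g_syn @ g_syn + 1e-12)
    residual = g_raw - scale * g_syn     # kept locally for error feedback
    return x_syn, y_syn, scale, residual
```

Under these assumptions, the server recovers an approximate gradient by running the same backward pass on `(x_syn, y_syn)` and multiplying by `scale`, while the client adds `residual` to its next raw gradient before compressing again, so the compression error accumulates into later rounds instead of being lost.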