Federated learning aims to protect users' privacy while enabling data analysis across multiple participants. However, guaranteeing training efficiency on heterogeneous systems is challenging due to participants' varying computational capabilities and communication bottlenecks. In this work, we propose FedSkel, which enables computation-efficient and communication-efficient federated learning on edge devices by updating only the model's essential parts, termed skeleton networks. FedSkel is evaluated on real edge devices with imbalanced datasets. Experimental results show that it achieves up to 5.52$\times$ speedup for CONV layers' back-propagation and 1.82$\times$ speedup for the whole training process, and reduces communication cost by 64.8%, with negligible accuracy loss.
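To make the skeleton-network idea concrete, below is a minimal PyTorch sketch, not the paper's implementation: it assumes filter importance is scored by gradient magnitude and that a fixed fraction of filters per CONV layer (a hypothetical `keep_ratio` parameter) forms the skeleton. Gradients of non-skeleton filters are zeroed before the optimizer step, so only skeleton parameters are updated locally and would need to be communicated.

```python
import torch
import torch.nn as nn

def mask_non_skeleton_grads(model: nn.Module, keep_ratio: float = 0.5) -> None:
    """Zero out gradients of non-skeleton CONV filters after backward().

    Illustrative sketch: importance is approximated here by the L1 norm of
    each output filter's gradient; the actual selection criterion in FedSkel
    may differ.
    """
    for module in model.modules():
        if isinstance(module, nn.Conv2d) and module.weight.grad is not None:
            grad = module.weight.grad                   # (out_ch, in_ch, kH, kW)
            importance = grad.abs().sum(dim=(1, 2, 3))  # score per output filter
            k = max(1, int(keep_ratio * importance.numel()))
            keep = torch.topk(importance, k).indices    # skeleton filter indices
            mask = torch.zeros_like(importance)
            mask[keep] = 1.0
            grad.mul_(mask.view(-1, 1, 1, 1))           # freeze non-skeleton filters

# Usage inside one local training step:
#   loss.backward()
#   mask_non_skeleton_grads(model, keep_ratio=0.5)
#   optimizer.step()  # only skeleton filters change
```

Because non-skeleton gradients are exactly zero, a client only needs to upload the skeleton portion of the update, which is where the computation and communication savings reported above come from.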