Federated learning (FL) enables edge devices to collaboratively train a model while keeping the training data local and private. A common assumption in FL is that all edge devices train the same machine learning model, for example, an identical neural network architecture. However, the computation and storage capabilities of different devices may vary. Moreover, reducing communication overhead can improve training efficiency, yet it remains a challenging problem in FL. In this paper, we propose a novel FL method, called FedHe, inspired by knowledge distillation, which can train heterogeneous models and support asynchronous training processes with significantly reduced communication overhead. Our analysis and experimental results demonstrate that our proposed method outperforms state-of-the-art algorithms in terms of both communication overhead and model accuracy.