Federated learning (FL) is a privacy-preserving distributed machine learning technique that trains models without having direct access to the original data generated on devices. Since devices may be resource constrained, offloading can be used to improve FL performance by transferring computational workload from devices to edge servers. However, due to mobility, devices participating in FL may leave the network during training and need to connect to a different edge server. This is challenging because the computations offloaded to the original edge server need to be migrated. To address this, we present FedFly, which is, to the best of our knowledge, the first work to migrate a deep neural network (DNN) when devices move between edge servers during FL training. Our empirical results on the CIFAR-10 dataset, with both balanced and imbalanced data distributions, support our claim that, compared to the state-of-the-art offloading approach in FL, FedFly can reduce training time by up to 33% when a device moves after 50% of the training is completed, and by up to 45% when 90% of the training is completed. FedFly has a negligible overhead of 2 seconds and does not compromise accuracy. Finally, we highlight a number of open research issues for further investigation. FedFly can be downloaded from https://github.com/qub-blesson/FedFly.
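To make the migration idea concrete, below is a minimal sketch, not the FedFly implementation itself, of what moving an offloaded DNN partition between edge servers can look like. It assumes a PyTorch-style split model in which the device runs the early layers and an edge server runs the rest; when the device moves, the server-side partition's weights and optimizer state are serialized and shipped to the destination edge server so training resumes mid-round instead of restarting. All function names and the layer shapes are hypothetical.

```python
import io
import torch
import torch.nn as nn

def build_server_partition() -> nn.Module:
    # Hypothetical server-side partition: the layers offloaded from the device.
    return nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

def snapshot_partition(model: nn.Module, optimizer: torch.optim.Optimizer,
                       epoch: int) -> bytes:
    # Serialize everything the destination edge server needs to continue
    # training from the current point: weights, optimizer state, progress.
    buffer = io.BytesIO()
    torch.save({
        "model_state": model.state_dict(),
        "optimizer_state": optimizer.state_dict(),
        "epoch": epoch,
    }, buffer)
    return buffer.getvalue()

def restore_partition(blob: bytes):
    # On the destination edge server: rebuild the partition, load the
    # migrated state, and resume from the recorded epoch.
    checkpoint = torch.load(io.BytesIO(blob))
    model = build_server_partition()
    model.load_state_dict(checkpoint["model_state"])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    optimizer.load_state_dict(checkpoint["optimizer_state"])
    return model, optimizer, checkpoint["epoch"]

if __name__ == "__main__":
    # Source edge server: mid-training, the device announces it is moving.
    model = build_server_partition()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    blob = snapshot_partition(model, optimizer, epoch=5)

    # Destination edge server: restore and continue where training left off.
    model2, optimizer2, epoch = restore_partition(blob)
    print(f"resumed server-side partition at epoch {epoch}")
```

In practice the serialized blob would travel over the network between edge servers; the sketch keeps it in memory only to stay self-contained.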