Federated learning (FL) is a privacy-preserving distributed machine learning technique that trains models while keeping the original data on the devices where it is generated. Since devices may be resource constrained, offloading can be used to improve FL performance by transferring computational workload from devices to edge servers. However, due to mobility, devices participating in FL may leave the network during training and need to connect to a different edge server. This is challenging because the computations offloaded to one edge server need to be migrated to another. To address this challenge, we present FedFly, which is, to the best of our knowledge, the first work to migrate a deep neural network (DNN) when devices move between edge servers during FL training. Our empirical results on the CIFAR-10 dataset, with both balanced and imbalanced data distributions, show that FedFly reduces training time by up to 33% when a device moves after 50% of the training is completed, and by up to 45% when it moves after 90% of the training is completed, compared to the state-of-the-art offloading approach in FL. FedFly incurs a negligible overhead of up to two seconds and does not compromise accuracy. Finally, we highlight a number of open research issues for further investigation. FedFly can be downloaded from https://github.com/qub-blesson/FedFly.
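The following is a minimal, illustrative sketch (not FedFly's actual implementation) of the migration idea the abstract describes: when a device moves, the source edge server serializes its partition of the partially trained DNN together with the optimizer state and training progress, and the destination edge server restores it so training resumes rather than restarts. The class and function names (e.g., ServerPartition, serialize_state, resume_from_state) and the device/server split point are hypothetical assumptions for illustration.

```python
# Illustrative sketch only: migrating the server-side portion of an offloaded DNN
# from one edge server to another so that FL training resumes where it left off.
# All names and the split point are hypothetical, not taken from FedFly's code.
import io
import torch
import torch.nn as nn


class ServerPartition(nn.Module):
    """Hypothetical server-side layers of a DNN split between a device and an edge server."""

    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 10))

    def forward(self, activations):
        # Receives intermediate activations produced by the device-side layers.
        return self.layers(activations)


def serialize_state(model, optimizer, epoch):
    """Pack model weights, optimizer state, and training progress for transfer."""
    buffer = io.BytesIO()
    torch.save(
        {"model": model.state_dict(), "optimizer": optimizer.state_dict(), "epoch": epoch},
        buffer,
    )
    return buffer.getvalue()


def resume_from_state(blob):
    """Rebuild the server-side partition on the destination edge server."""
    checkpoint = torch.load(io.BytesIO(blob))
    model = ServerPartition()
    model.load_state_dict(checkpoint["model"])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    optimizer.load_state_dict(checkpoint["optimizer"])
    return model, optimizer, checkpoint["epoch"]


# Source edge server: snapshot the partially trained partition when the device moves.
source_model = ServerPartition()
source_opt = torch.optim.SGD(source_model.parameters(), lr=0.01)
blob = serialize_state(source_model, source_opt, epoch=25)

# Destination edge server: restore the state and continue from the same epoch,
# avoiding the retraining that the abstract's reported savings quantify.
dest_model, dest_opt, start_epoch = resume_from_state(blob)
```

In this sketch the transferred blob stands in for whatever transport the edge servers use; the point is that only the offloaded partition and its optimizer state need to move, not the device's raw data.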