Federated Learning (FL) allows edge devices to collaboratively learn a shared prediction model while keeping their training data on the device, thereby decoupling the ability to do machine learning from the need to store data in the cloud. Despite algorithmic advances in FL, support for on-device training of FL algorithms on edge devices remains poor. In this paper, we present an exploration of on-device FL on various smartphones and embedded devices using the Flower framework. We also evaluate the system costs of on-device FL and discuss how this quantification could inform the design of more efficient FL algorithms.