Federated learning (FL) on deep neural networks facilitates new applications at the edge, especially for wearable and Internet-of-Things devices. Such devices capture a large and diverse amount of data, but they have memory, compute, power, and connectivity constraints that hinder their participation in FL. We propose Centaur, a multitier FL framework that enables ultra-constrained devices to efficiently participate in FL on large neural networks. Centaur combines two major ideas: (i) a data selection scheme that chooses a portion of samples to accelerate learning, and (ii) a partition-based training algorithm that integrates both constrained and powerful devices owned by the same user. Evaluations on four benchmark neural networks and three datasets show that Centaur achieves ~10% higher accuracy than local training on constrained devices while saving ~58% energy on average. Our experimental results also demonstrate the superior efficiency of Centaur when dealing with imbalanced data, heterogeneous client participation, and various network connection probabilities.
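To make idea (i) concrete, below is a minimal sketch of one plausible instantiation of a data selection scheme: ranking samples by per-sample loss and training only on the hardest fraction. The function name, the loss-based criterion, and the keep fraction are all illustrative assumptions, not Centaur's published algorithm.

```python
# Hypothetical loss-based data selection sketch; not Centaur's exact scheme.
import torch
import torch.nn as nn

def select_informative(model, inputs, labels, keep_fraction=0.3):
    """Return the subset of (inputs, labels) with the highest per-sample loss.

    Assumption: samples with large loss contribute most to learning, so a
    constrained device trains only on those to save compute and energy.
    """
    model.eval()
    with torch.no_grad():
        logits = model(inputs)
        # Per-sample losses (no reduction), so each example can be ranked.
        losses = nn.functional.cross_entropy(logits, labels, reduction="none")
    k = max(1, int(keep_fraction * len(losses)))
    top_idx = torch.topk(losses, k).indices
    return inputs[top_idx], labels[top_idx]

# Usage on toy data: keep the 30% hardest samples out of a batch of 64.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x, y = torch.randn(64, 1, 28, 28), torch.randint(0, 10, (64,))
x_sel, y_sel = select_informative(model, x, y, keep_fraction=0.3)
print(x_sel.shape)  # torch.Size([19, 1, 28, 28])
```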
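For idea (ii), the sketch below shows a split-learning-style training step under the assumption that the constrained device holds the front partition of the network and a paired powerful device holds the rest; the cut point, model shapes, and the in-process "send/return" stand-ins for the device-to-device protocol are illustrative assumptions rather than Centaur's exact partitioning.

```python
# Hypothetical partition-based training step (split-learning style).
import torch
import torch.nn as nn

front = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU())  # constrained device
back = nn.Sequential(nn.Linear(64, 10))                                 # powerful device
opt_front = torch.optim.SGD(front.parameters(), lr=0.1)
opt_back = torch.optim.SGD(back.parameters(), lr=0.1)

def split_step(x, y):
    # Constrained device: forward through the front partition only.
    smashed = front(x)
    # "Send" the cut-layer activations to the powerful device
    # (modeled here by detaching and re-enabling gradients).
    remote = smashed.detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(back(remote), y)
    opt_back.zero_grad()
    loss.backward()                 # powerful device: backward through the back partition
    opt_back.step()
    opt_front.zero_grad()
    smashed.backward(remote.grad)   # "return" the cut-layer gradient to the device
    opt_front.step()
    return loss.item()

x, y = torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,))
print(split_step(x, y))
```

The design point this pattern captures is that the constrained device only ever stores and backpropagates through the front layers, while the heavy tail of the network lives on the paired powerful device.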