Devices participating in federated learning (FL) typically have heterogeneous communication, computation, and memory resources. However, in synchronous FL, all devices need to finish training by the same deadline dictated by the server. Our results show that training a smaller subset of the neural network (NN) at constrained devices, i.e., dropping neurons/filters as proposed by the state of the art, is inefficient, preventing these devices from making an effective contribution to the model. This causes unfairness w.r.t. the achievable accuracies of constrained devices, especially in cases with a skewed distribution of class labels across devices. We present a novel FL technique, CoCoFL, which maintains the full NN structure on all devices. To adapt to the devices' heterogeneous resources, CoCoFL freezes and quantizes selected layers, reducing communication, computation, and memory requirements, whereas other layers are still trained in full precision, enabling a high accuracy to be reached. Thereby, CoCoFL efficiently utilizes the available resources on devices and allows constrained devices to make a significant contribution to the FL system, increasing fairness among participants (accuracy parity) and significantly improving the final accuracy of the model.
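The following is a minimal PyTorch sketch, not CoCoFL's actual implementation, illustrating the core idea: on a constrained device, selected layers are frozen (and their weights treated in reduced precision), while the remaining layers are trained in full precision. The model architecture, the set of frozen layer indices, and the fake-quantization used to emulate int8 execution are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Toy model standing in for the shared FL model (illustrative only).
model = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 10),
)

# Hypothetical per-device configuration: layer indices this device freezes
# according to its resource budget.
frozen_layers = {0, 2}

for idx, layer in enumerate(model):
    if idx in frozen_layers:
        for p in layer.parameters():
            p.requires_grad = False  # no gradients -> lower computation/memory
        # CoCoFL additionally executes frozen layers with quantized arithmetic;
        # emulated here by symmetric int8 fake-quantization of the weights.
        with torch.no_grad():
            for p in layer.parameters():
                scale = p.abs().max() / 127 + 1e-12
                p.copy_((p / scale).round().clamp(-127, 127) * scale)

# Only trainable (non-frozen) parameters are optimized and would later be
# communicated to the server.
optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=0.01
)

# One local training step on dummy data.
x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()
```

Freezing early layers keeps the full network structure intact on every device, so constrained devices still update (and communicate) a meaningful part of the shared model instead of training a pruned subnetwork.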