Federated learning (FL) is an emerging artificial intelligence paradigm that enables Internet-of-Things (IoT) devices to learn a collaborative model without sending their raw data to centralized nodes for processing. Despite its numerous advantages, the low computing resources of IoT devices and the high communication cost of exchanging model parameters severely limit the application of FL in massive IoT networks. In this work, we develop a novel compression scheme for FL, called high-compression federated learning (HCFL), for very-large-scale IoT networks. HCFL reduces the data load of FL processes without changing their structure or hyperparameters. In this way, we can not only significantly reduce communication costs but also make intensive learning processes more adaptable to IoT devices with low computing resources. Furthermore, we investigate the relationship between the number of IoT devices and the convergence level of the FL model, and thereby better assess the quality of the FL process. We demonstrate our HCFL scheme through both simulations and mathematical analysis. Our theoretical results can serve as a minimum satisfaction bound, proving that the FL process can achieve good performance when a determined configuration is met. We thus show that HCFL is applicable to any FL-integrated network with numerous IoT devices.
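To make the data-load reduction concrete, the sketch below simulates one federated-averaging round in which each device compresses its model parameters before upload. The abstract does not specify HCFL's compression mechanism, so this is a minimal sketch assuming uniform 8-bit quantization as a hypothetical stand-in; the function names (`compress`, `decompress`) and payload accounting are illustrative assumptions, not the paper's method.

```python
# Minimal sketch of parameter compression in one federated-averaging round.
# NOTE: uniform 8-bit quantization is a hypothetical stand-in; the abstract
# does not describe HCFL's actual compression scheme.

import numpy as np

def compress(weights: np.ndarray, bits: int = 8):
    """Uniformly quantize float32 weights to `bits`-bit integers."""
    lo, hi = float(weights.min()), float(weights.max())
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = np.round((weights - lo) / scale).astype(np.uint8)
    return q, lo, scale  # payload: 1 byte per parameter + two floats

def decompress(q: np.ndarray, lo: float, scale: float) -> np.ndarray:
    """Reconstruct approximate float32 weights on the server side."""
    return q.astype(np.float32) * scale + lo

# One simulated round: each device compresses its local model before upload,
# the server decompresses the payloads and averages them (FedAvg-style).
rng = np.random.default_rng(0)
local_models = [rng.normal(size=10_000).astype(np.float32) for _ in range(5)]
uploads = [compress(w) for w in local_models]
global_model = np.mean([decompress(*u) for u in uploads], axis=0)

raw_bytes = local_models[0].nbytes      # 4 bytes per parameter uncompressed
sent_bytes = uploads[0][0].nbytes + 8   # ~1 byte per parameter + metadata
print(f"per-device payload: {raw_bytes} -> {sent_bytes} bytes "
      f"({raw_bytes / sent_bytes:.1f}x smaller)")
```

Even this naive quantizer cuts the per-device upload roughly fourfold without touching the model's structure or hyperparameters, which is the kind of communication saving the HCFL scheme targets at much larger scale.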