Federated learning (FL) is a privacy-preserving machine learning setting that enables many devices to jointly train a shared global model without revealing their data to a central server. However, FL involves frequent exchange of model parameters between the clients and the server that coordinates the training. This introduces substantial communication overhead, which can be a major bottleneck in FL over limited communication links. In this paper, we consider training binary neural networks (BNNs) in the FL setting, instead of the typical real-valued neural networks, to fulfill the stringent delay and efficiency requirements of wireless edge networks. We introduce a novel FL framework for training BNNs in which the clients upload only the binary parameters to the server. We also propose a novel parameter updating scheme based on Maximum Likelihood (ML) estimation that preserves the performance of the BNN even without access to the aggregated real-valued auxiliary parameters usually needed during BNN training. Moreover, for the first time in the literature, we theoretically derive the conditions under which the training of a BNN converges. Numerical results show that the proposed FL framework significantly reduces the communication cost compared to conventional neural networks with real-valued parameters, and that the performance loss incurred by binarization can be further compensated by a hybrid method.
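To illustrate the kind of binary-upload aggregation described above, the following is a minimal sketch, not the paper's actual scheme: it assumes each client's uploaded bit for a parameter is an independent Bernoulli draw, so the ML estimate of the probability of +1 is the sample mean, and the aggregated binary parameter is the majority sign. The function names `binarize` and `ml_aggregate` are hypothetical.

```python
import numpy as np

def binarize(weights):
    # Clients upload only the signs of their real-valued weights (+1 / -1).
    return np.where(weights >= 0, 1, -1).astype(np.int8)

def ml_aggregate(client_bits):
    # Hypothetical illustration (not the paper's scheme): treating each
    # client's bit as an i.i.d. Bernoulli draw, the ML estimate of
    # P(bit = +1) is the sample mean of the indicators, and the aggregated
    # binary weight is the majority sign.
    bits = np.stack(client_bits)          # shape: (num_clients, num_params)
    p_hat = (bits == 1).mean(axis=0)      # ML estimate of P(+1) per parameter
    return np.where(p_hat >= 0.5, 1, -1).astype(np.int8), p_hat

# Example: three clients upload binary parameters; the server aggregates.
uploads = [np.array([1, 1, -1], dtype=np.int8),
           np.array([1, -1, -1], dtype=np.int8),
           np.array([1, 1, 1], dtype=np.int8)]
global_bits, p_hat = ml_aggregate(uploads)
```

Since only one bit per parameter travels uplink, the per-round communication cost is roughly 1/32 of sending 32-bit floats, which is the source of the savings the abstract refers to.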