Federated learning (FL), an emerging distributed machine learning paradigm, in conjunction with edge computing forms a promising area that enables novel applications on mobile edge devices. In FL, mobile devices collaborate to train a model on their own data under the coordination of a central server, sharing only model updates, so the training data remains private. However, without centrally available data, the computing nodes must communicate model updates frequently to reach convergence. Consequently, the local computation time needed to produce the local model updates, together with the time taken to transmit them to and from the server, delays the overall training. Furthermore, unreliable network connections may hinder efficient communication of these updates. To address these issues, in this paper we propose a delay-efficient FL mechanism that reduces both the overall time (comprising computation and communication latencies) and the number of communication rounds required for the model to converge. By exploring the impact of the various parameters that contribute to delay, we seek to balance the trade-off between wireless communication (to talk) and local computation (to work). We formulate the overall time as an optimization problem and demonstrate the efficacy of our approach through extensive simulations.
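To make the talk/work trade-off concrete, a minimal delay model (a sketch under assumed notation, not necessarily the exact formulation used in this paper) accumulates the per-round computation and communication latencies over the rounds needed to converge:

\[ T_{\text{total}} = R \left( \max_{k} \frac{c_k D_k E}{f_k} + \frac{S}{B} \right), \]

where R is the number of communication rounds, E the number of local epochs per round, c_k the CPU cycles per sample on device k, D_k its local data size, f_k its CPU frequency, S the size of a model update, and B the available wireless bandwidth. Increasing E (more work) typically reduces R (less talk) but inflates the per-round computation term, and the synchronous round is bounded by the slowest device; balancing these terms is the essence of the optimization described above.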