Federated edge learning (FEEL) has drawn much attention as a privacy-preserving distributed learning framework for mobile edge networks. In this work, we investigate a novel semi-decentralized FEEL (SD-FEEL) architecture where multiple edge servers collaborate to incorporate data from more edge devices in training. Despite the low training latency enabled by fast edge aggregation, device heterogeneity in computational resources degrades training efficiency. This paper proposes an asynchronous training algorithm for SD-FEEL to overcome this issue, where edge servers can independently set deadlines for their associated client nodes and trigger model aggregation. To deal with different levels of staleness, we design a staleness-aware aggregation scheme and analyze its convergence performance. Simulation results demonstrate the effectiveness of the proposed algorithm in achieving faster convergence and better learning performance.
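To illustrate the idea of staleness-aware aggregation, the sketch below weights each client update by a polynomial decay in its staleness (the number of global rounds since the client last synchronized), so that fresher updates contribute more to the aggregated model. This is a minimal illustration of the general technique; the weighting function `staleness_weight`, its decay exponent `alpha`, and the flat-list model representation are assumptions for this sketch, not the paper's actual scheme.

```python
def staleness_weight(tau, alpha=0.5):
    # Polynomial decay (1 + tau)^(-alpha): staleness tau = 0 gives
    # weight 1.0, and the weight shrinks as the update grows staler.
    return (1.0 + tau) ** (-alpha)

def aggregate(updates, alpha=0.5):
    """Staleness-weighted aggregation sketch.

    updates: list of (params, staleness) pairs, where params is a flat
    list of model parameters and staleness is a non-negative integer.
    Returns the weighted average of the parameter vectors.
    """
    weights = [staleness_weight(tau, alpha) for _, tau in updates]
    total = sum(weights)
    dim = len(updates[0][0])
    aggregated = [0.0] * dim
    for (params, _), w in zip(updates, weights):
        for i in range(dim):
            aggregated[i] += (w / total) * params[i]
    return aggregated
```

For example, aggregating a fresh update (staleness 0) with a stale one (staleness 2) yields a result biased toward the fresh update, rather than the plain average an FedAvg-style synchronous scheme would produce.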