Federated edge learning (FEEL) has emerged as an effective alternative to reduce the large communication latency of Cloud-based machine learning solutions while preserving data privacy. Unfortunately, the learning performance of FEEL may be compromised by the limited training data in a single edge cluster. In this paper, we investigate a novel framework of FEEL, namely semi-decentralized federated edge learning (SD-FEEL). By allowing model aggregation across different edge clusters, SD-FEEL retains the benefit of FEEL in reducing training latency while improving the learning performance through access to richer training data from multiple edge clusters. We present a training algorithm for SD-FEEL with three main procedures in each round, namely local model update, intra-cluster model aggregation, and inter-cluster model aggregation, and prove its convergence on non-independent and identically distributed (non-IID) data. We also characterize the effects of the network topology of the edge servers and the communication overhead of inter-cluster model aggregation on the training performance. Experimental results corroborate our analysis and demonstrate the effectiveness of SD-FEEL in achieving fast convergence. In addition, guidelines on choosing critical hyper-parameters of the training algorithm are provided.
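The following is a minimal sketch of the three procedures in one SD-FEEL round (local model updates, intra-cluster aggregation at each edge server, and inter-cluster aggregation among neighboring edge servers), not the paper's actual algorithm or notation. All names and parameters (num_clusters, clients_per_cluster, the ring topology, and the doubly stochastic mixing matrix W) are illustrative assumptions, and the least-squares objective with cluster-dependent shifts stands in for a generic non-IID learning task.

```python
# Hypothetical sketch of one SD-FEEL round; names and topology are
# illustrative assumptions, not the paper's specification.
import numpy as np

rng = np.random.default_rng(0)
dim, num_clusters, clients_per_cluster = 10, 4, 5
lr, local_steps = 0.05, 3

# Synthetic non-IID client data: each cluster draws labels from a
# cluster-shifted ground-truth model.
true_w = rng.normal(size=dim)
def client_data(cluster):
    shift = 0.5 * cluster  # cluster-dependent shift -> non-IID data
    X = rng.normal(size=(20, dim))
    y = X @ (true_w + shift) + 0.1 * rng.normal(size=20)
    return X, y

data = [[client_data(c) for _ in range(clients_per_cluster)]
        for c in range(num_clusters)]

# Assumed ring topology among edge servers; W is doubly stochastic, so
# repeated mixing drives the edge models toward consensus.
W = np.zeros((num_clusters, num_clusters))
for i in range(num_clusters):
    W[i, i] = 0.5
    W[i, (i - 1) % num_clusters] = 0.25
    W[i, (i + 1) % num_clusters] = 0.25

edge_models = [np.zeros(dim) for _ in range(num_clusters)]
for rnd in range(50):
    cluster_avgs = []
    for c in range(num_clusters):
        # (1) Local model updates: each client runs a few SGD steps
        # starting from its edge server's current model.
        local_models = []
        for X, y in data[c]:
            w = edge_models[c].copy()
            for _ in range(local_steps):
                grad = X.T @ (X @ w - y) / len(y)
                w -= lr * grad
            local_models.append(w)
        # (2) Intra-cluster aggregation at the edge server.
        cluster_avgs.append(np.mean(local_models, axis=0))
    # (3) Inter-cluster aggregation: each edge server mixes its model
    # with its neighbors' models according to W.
    edge_models = list(W @ np.stack(cluster_avgs))

print("disagreement across edge servers:",
      np.std(np.stack(edge_models), axis=0).mean())
```

In this sketch, the frequency of step (3) relative to steps (1)-(2) and the connectivity of W are the knobs corresponding to the inter-cluster communication overhead and edge-server topology discussed above: sparser topologies or less frequent mixing reduce communication but slow the spread of information across clusters.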