In this work, we propose a communication-efficient two-layer federated learning algorithm for distributed setups consisting of a core server and multiple edge servers, each serving a cluster of devices. Assuming the clusters pursue different learning tasks, clusters sharing the same task collaborate. To implement the algorithm over wireless links, we propose a scalable clustered over-the-air aggregation scheme for the uplink and a bandwidth-limited broadcast scheme for the downlink, which together require only two single resource blocks per algorithm iteration, independent of the number of edge servers and devices. This setup suffers from inter-device interference in the uplink and inter-edge-server interference in the downlink, both of which must be modeled rigorously. We first develop a spatial model of the setup by representing the devices as a Poisson cluster process centered on the edge servers, and we quantify the uplink and downlink error terms caused by the interference. We then present a comprehensive mathematical approach to derive a convergence bound for the proposed algorithm that accommodates any number of collaborating clusters, and we provide important special cases and design remarks. Finally, we show that despite the interference in the proposed uplink and downlink schemes, the algorithm achieves high learning accuracy over a wide range of system parameters.
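To make the spatial model concrete, the following is a minimal sketch of a Poisson cluster process of the Thomas type, in which edge servers form a homogeneous Poisson point process and devices are scattered around them; the window size, densities, mean cluster size, and scatter scale are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch: devices as a Poisson cluster process over edge servers.
# All numeric parameters below are assumed for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Simulation window: square region of side L meters (assumed).
L = 1000.0

# Edge servers: homogeneous Poisson point process with density lambda_e (assumed).
lambda_e = 10e-6                      # servers per square meter
n_edge = rng.poisson(lambda_e * L * L)
edge_xy = rng.uniform(0.0, L, size=(n_edge, 2))

# Devices: each edge server spawns a Poisson number of devices,
# scattered isotropically (Gaussian with std sigma) around its server.
mean_devices = 20                     # assumed mean cluster size
sigma = 50.0                          # assumed scatter in meters

clusters = []
for center in edge_xy:
    n_dev = rng.poisson(mean_devices)
    dev_xy = center + sigma * rng.standard_normal((n_dev, 2))
    clusters.append(dev_xy)

print(f"{n_edge} edge servers, {sum(len(c) for c in clusters)} devices in total")
```

Under such a model, the uplink and downlink interference terms depend on the distances between devices, their own edge server, and the other edge servers, which is what makes the cluster-process geometry the natural starting point for the error analysis summarized above.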