To achieve communication-efficient federated multi-task learning (FMTL), we propose an over-the-air FMTL (OA-FMTL) framework, in which multiple learning tasks deployed on edge devices share a non-orthogonal fading channel under the coordination of an edge server (ES). In OA-FMTL, the local updates of the edge devices are sparsified, compressed, and then sent over the uplink channel in a superimposed fashion. The ES employs over-the-air computation in the presence of inter-task interference. More specifically, the model aggregations of all the tasks are reconstructed concurrently from the channel observations, based on a modified version of the turbo compressed sensing (Turbo-CS) algorithm, termed M-Turbo-CS. We analyze the performance of the proposed OA-FMTL framework together with the M-Turbo-CS algorithm. Furthermore, based on this analysis, we formulate a communication-learning optimization problem to improve the system performance by adjusting the power allocation among the tasks at the edge devices. Numerical simulations show that the proposed OA-FMTL framework effectively suppresses the inter-task interference and achieves a learning performance comparable to its counterpart with orthogonal multi-task transmission. It is also shown that the proposed inter-task power allocation optimization algorithm substantially reduces the overall communication overhead by appropriately adjusting the power allocation among the tasks.